Updates from: 04/14/2023 01:23:39
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Custom Policies Series Call Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/custom-policies-series-call-rest-api.md
In this article, you'll learn how to:
## Scenario overview
-In [Create branching in user journey by using Azure AD B2C custom policies](custom-policies-series-branch-user-journey.md), users who select *Personal Account* need to provide a valid invitation access code to proceed. We use a static access code, but real world apps don't work this way. If the service that issues the access codes is external to your custom policy, you must make a call to that service, and pass the access code input by the user for validation. If the access code is valid, the service returns an HTTP 200 (OK) response, and Azure AD B2C issues JWT token. Otherwise, the service returns an HTTP 409 (Conflict) response, and the use must re-enter an access code.
+In [Create branching in user journey by using Azure AD B2C custom policies](custom-policies-series-branch-user-journey.md), users who select *Personal Account* need to provide a valid invitation access code to proceed. We use a static access code, but real world apps don't work this way. If the service that issues the access codes is external to your custom policy, you must make a call to that service, and pass the access code input by the user for validation. If the access code is valid, the service returns an HTTP 200 (OK) response, and Azure AD B2C issues a JWT token. Otherwise, the service returns an HTTP 409 (Conflict) response, and the user must re-enter an access code.
:::image type="content" source="media/custom-policies-series-call-rest-api/screenshot-of-call-rest-api-call.png" alt-text="A flowchart of calling a R E S T A P I.":::
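A quick way to sanity-check such a validation service before wiring it into the policy is to call it directly. The sketch below is a hedged example, not part of the policy itself: the endpoint URL and request shape are hypothetical placeholders for your own access-code service, which should return HTTP 200 for a valid code and HTTP 409 with an error body for an invalid one.

```powershell
# Hypothetical endpoint and payload - adjust to match your own access-code service
$body = @{ accessCode = "12345" } | ConvertTo-Json

try {
    # A valid code should return HTTP 200 (OK)
    Invoke-RestMethod -Method Post -Uri "https://contoso-codes.azurewebsites.net/api/validate" `
        -ContentType "application/json" -Body $body
    Write-Host "Access code accepted"
}
catch {
    # An invalid code should surface as HTTP 409 (Conflict) with a message for the user
    Write-Host "Validation failed: $($_.Exception.Message)"
}
```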
Next, learn:
- About [RESTful technical profile](restful-technical-profile.md). -- How to [Create and read a user account by using Azure Active Directory B2C custom policy](custom-policies-series-store-user.md)
+- How to [Create and read a user account by using Azure Active Directory B2C custom policy](custom-policies-series-store-user.md)
active-directory How Provisioning Works https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/how-provisioning-works.md
originalUserPrincipalName = alias_theirdomain#EXT#@yourdomain
## Provisioning cycles: Initial and incremental
-When Azure AD is the source system, the provisioning service uses the [Use delta query to track changes in Microsoft Graph data](/graph/delta-query-overview) to monitor users and groups. The provisioning service runs an initial cycle against the source system and target system, followed by periodic incremental cycles.
+When Azure AD is the source system, the provisioning service uses the [delta query to track changes in Microsoft Graph data](/graph/delta-query-overview) to monitor users and groups. The provisioning service runs an initial cycle against the source system and target system, followed by periodic incremental cycles.
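As a rough illustration of the underlying mechanism (not the provisioning service's internal code), an incremental cycle built on the Graph delta query follows this pattern; the access token is assumed to be acquired separately, and paging via `@odata.nextLink` is omitted for brevity.

```powershell
# Assumes $accessToken was obtained separately (for example, via an app registration)
$headers = @{ Authorization = "Bearer $accessToken" }

# Initial cycle: enumerate users and capture the delta link (paging omitted for brevity)
$response = Invoke-RestMethod -Method Get -Headers $headers `
    -Uri "https://graph.microsoft.com/v1.0/users/delta"
$deltaLink = $response.'@odata.deltaLink'

# Incremental cycle: replay only the changes that occurred since the previous call
$changes = Invoke-RestMethod -Method Get -Headers $headers -Uri $deltaLink
$changes.value | ForEach-Object { $_.userPrincipalName }
```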
### Initial cycle
active-directory Plan Auto User Provisioning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/plan-auto-user-provisioning.md
Previously updated : 04/11/2023 Last updated : 04/13/2023
This article uses the following terms:
* Target system - The repository of users that Azure AD provisions to. The target system is typically a SaaS application such as ServiceNow, Zscaler, and Slack. The target system can also be an on-premises system such as AD.
-* [System for Cross-domain Identity Management (SCIM)](https://aka.ms/scimoverview) - An open standard that allows for the automation of user provisioning. SCIM communicates user identity data between identity providers such as Microsoft, and service providers like Salesforce or other SaaS apps that require user identity information.
+* [System for Cross-domain Identity Management (SCIM)](https://aka.ms/scimoverview) - An open standard that allows for the automation of user provisioning. SCIM communicates user identity data between identity providers and service providers. Microsoft is an example of an identity provider. Salesforce is an example of a service provider. Service providers require user identity information and an identity provider fulfills that need. SCIM is the mechanism the identity provider and service provider use to send information back and forth.
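For illustration only, a SCIM provisioning request from an identity provider to a service provider is an HTTP call carrying the standard SCIM user schema (RFC 7643/7644). The endpoint and bearer token below are placeholders, not a real service.

```powershell
# Hypothetical SCIM endpoint and bearer token for the target service provider
$scimUser = @{
    schemas  = @("urn:ietf:params:scim:schemas:core:2.0:User")
    userName = "alice@contoso.com"
    name     = @{ givenName = "Alice"; familyName = "Smith" }
    active   = $true
} | ConvertTo-Json -Depth 3

# Create the user in the service provider's directory
Invoke-RestMethod -Method Post -Uri "https://app.example.com/scim/v2/Users" `
    -Headers @{ Authorization = "Bearer $scimToken" } `
    -ContentType "application/scim+json" -Body $scimUser
```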
### Training resources
When technology projects fail, it's typically because of mismatched expectations
### Plan communications
-Communication is critical to the success of any new service. Proactively communicate with your users how their experience will change, when it will change, and how to gain support if they experience issues.
+Communication is critical to the success of any new service. Proactively communicate to your users about their experience, how the experience is changing, when to expect any change, and how to gain support if they experience issues.
### Plan a pilot
A pilot allows you to test with a small group before deploying a capability for
In your first wave, target IT, usability, and other appropriate users who can test and provide feedback. Use this feedback to further develop the communications and instructions you send to your users, and to give insights into the types of issues your support staff may see.
-Widen the rollout to larger groups of users by increasing the scope of the group(s) targeted. This can be done through [dynamic group membership](../enterprise-users/groups-dynamic-membership.md), or by manually adding users to the targeted group(s).
+Widen the rollout to larger groups of users by increasing the scope of the group(s) targeted. Increasing the scope of the group(s) is done through [dynamic group membership](../enterprise-users/groups-dynamic-membership.md), or by manually adding users to the targeted group(s).
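As one way to script the widening, a dynamic group can be created through Microsoft Graph with a membership rule. This is a sketch: the group names, nickname, and rule are examples only, and the token is assumed to be acquired separately.

```powershell
# Example only: create a dynamic security group whose membership follows a rule
$group = @{
    displayName                   = "SSO pilot - wave 2"
    mailEnabled                   = $false
    mailNickname                  = "sso-pilot-wave-2"
    securityEnabled               = $true
    groupTypes                    = @("DynamicMembership")
    membershipRule                = 'user.department -eq "Sales"'
    membershipRuleProcessingState = "On"
} | ConvertTo-Json

Invoke-RestMethod -Method Post -Uri "https://graph.microsoft.com/v1.0/groups" `
    -Headers @{ Authorization = "Bearer $accessToken" } `
    -ContentType "application/json" -Body $group
```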
## Plan application connections and administration
Use the Azure portal to view and manage all the applications that support provis
The actual steps required to enable and configure automatic provisioning vary depending on the application. If the application you wish to automatically provision is listed in the [Azure AD SaaS app gallery](../saas-apps/tutorial-list.md), then you should select the [app-specific integration tutorial](../saas-apps/tutorial-list.md) to configure its pre-integrated user provisioning connector.
-If not, follow the steps below:
+If not, follow the steps:
-1. [Create a request](../manage-apps/v2-howto-app-gallery-listing.md) for a pre-integrated user provisioning connector. Our team will work with you and the application developer to onboard your application to our platform if it supports SCIM.
+1. [Create a request](../manage-apps/v2-howto-app-gallery-listing.md) for a pre-integrated user provisioning connector. Our team works with you and the application developer to onboard your application to our platform if it supports SCIM.
-1. Use the [BYOA SCIM](../app-provisioning/use-scim-to-provision-users-and-groups.md) generic user provisioning support for the app. This is a requirement for Azure AD to provision users to the app without a pre-integrated provisioning connector.
+1. Use the [BYOA SCIM](../app-provisioning/use-scim-to-provision-users-and-groups.md) generic user provisioning support for the app. Using SCIM is a requirement for Azure AD to provision users to the app without a pre-integrated provisioning connector.
1. If the application is able to utilize the BYOA SCIM connector, then refer to [BYOA SCIM integration tutorial](../app-provisioning/use-scim-to-provision-users-and-groups.md) to configure the BYOA SCIM connector for the application.
For more information, see [What applications and systems can I use with Azure AD
Setting up automatic user provisioning is a per-application process. For each application, you need to provide [administrator credentials](../app-provisioning/configure-automatic-user-provisioning-portal.md) to connect to the target system's user management endpoint.
-The image below shows one version of the required admin credentials:
+The image shows one version of the required admin credentials:
![Provisioning screen to manage user account provisioning settings](./media/plan-auto-user-provisioning/userprovisioning-admincredentials.png)
Before implementing automatic user provisioning, you must determine the users an
* Use [scoping filters](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md) to define attribute-based rules that determine which users are provisioned to an application.
-* Next, use [user and group assignments](../manage-apps/assign-user-or-group-access-portal.md) as needed for additional filtering.
+* Next, use [user and group assignments](../manage-apps/assign-user-or-group-access-portal.md) as needed for more filtering.
### Define user and group attribute mapping To implement automatic user provisioning, you need to define the user and group attributes that are needed for the application. There's a pre-configured set of attributes and [attribute-mappings](../app-provisioning/configure-automatic-user-provisioning-portal.md) between Azure AD user objects, and each SaaS application's user objects. Not all SaaS apps enable group attributes.
-Azure AD supports by direct attribute-to-attribute mapping, providing constant values, or [writing expressions for attribute mappings](../app-provisioning/functions-for-customizing-application-data.md). This flexibility gives you fine control of what will be populated in the targeted system's attribute. You can use [Microsoft Graph API](../app-provisioning/export-import-provisioning-configuration.md) and Graph Explorer to export your user provisioning attribute mappings and schema to a JSON file and import it back into Azure AD.
+Azure AD supports direct attribute-to-attribute mapping, providing constant values, or [writing expressions for attribute mappings](../app-provisioning/functions-for-customizing-application-data.md). This flexibility gives you fine control over what is populated in the targeted system's attribute. You can use [Microsoft Graph API](../app-provisioning/export-import-provisioning-configuration.md) and Graph Explorer to export your user provisioning attribute mappings and schema to a JSON file and import it back into Azure AD.
For more information, see [Customizing User Provisioning Attribute-Mappings for SaaS Applications in Azure Active Directory](../app-provisioning/customize-application-attributes.md).
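As a sketch of the export step mentioned above, the provisioning schema can be pulled with Microsoft Graph and saved to JSON. The service principal and job IDs are placeholders, and the exact path and API version should be confirmed against the current synchronization API reference.

```powershell
# Placeholders: replace with your enterprise application's service principal ID and provisioning job ID
$spId  = "<servicePrincipalId>"
$jobId = "<jobId>"

$schema = Invoke-RestMethod -Method Get `
    -Uri "https://graph.microsoft.com/v1.0/servicePrincipals/$spId/synchronization/jobs/$jobId/schema" `
    -Headers @{ Authorization = "Bearer $accessToken" }

# Save the attribute mappings and schema to a JSON file for backup or reuse
$schema | ConvertTo-Json -Depth 20 | Set-Content -Path .\provisioning-schema.json
```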
At each stage of your deployment ensure that you're testing that results are a
### Plan testing
-Once you have configured automatic user provisioning for the application, you'll run test cases to verify this solution meets your organization's requirements.
+First, configure automatic user provisioning for the application. Then run test cases to verify the solution meets your organization's requirements.
| Scenarios| Expected results | | - | - |
It's common for a security review to be required as part of a deployment. If you
### Plan rollback
-If the automatic user provisioning implementation fails to work as desired in the production environment, the following rollback steps below can assist you in reverting to a previous known good state:
+If the automatic user provisioning implementation fails to work as desired in the production environment, the following rollback steps can assist you in reverting to a previous known good state:
1. Review the [provisioning logs](../app-provisioning/check-status-user-account-provisioning.md) to determine what incorrect operations occurred on the affected users and/or groups.
After a successful [initial cycle](../app-provisioning/user-provisioning.md), th
* A new initial cycle is triggered by a change in attribute mappings or scoping filters.
-* The provisioning process goes into quarantine due to a high error rate and stays in quarantine for more than four weeks when it will be automatically disabled.
+* The provisioning process goes into quarantine due to a high error rate and stays in quarantine for more than four weeks, at which point it is automatically disabled.
To review these events, and all other activities performed by the provisioning service, refer to Azure AD [provisioning logs](../reports-monitoring/concept-provisioning-logs.md?context=azure/active-directory/manage-apps/context/manage-apps-context).
active-directory Feature Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/feature-availability.md
Previously updated : 01/29/2023 Last updated : 04/13/2023
The following tables list Azure AD feature availability in Azure Government.
|| Session lifetime management | ✅ | || Identity Protection (vulnerabilities and risky accounts) | See [Identity protection](#identity-protection) below. | || Identity Protection (risk events investigation, SIEM connectivity) | See [Identity protection](#identity-protection) below. |
-|| Entra permissions management | ❌ |
|**Administration and hybrid identity**|User and group management | ✅ | || Advanced group management (Dynamic groups, naming policies, expiration, default classification) | ✅ | || Directory synchronization - Azure AD Connect (sync and cloud sync) | ✅ |
The following tables list Azure AD feature availability in Azure Government.
|| Global password protection and management – cloud-only users | ✅ | || Global password protection and management – custom banned passwords, users synchronized from on-premises Active Directory | ✅ | || Microsoft Identity Manager user client access license (CAL) | ✅ |
-|| Entra workload identities | ❌ |
|**End-user self-service**|Application launch portal (My Apps) | ✅ | || User application collections in My Apps | ✅ | || Self-service account management portal (My Account) | ✅ |
The following tables list Azure AD feature availability in Azure Government.
|| Access certifications and reviews | ✅ | || Entitlement management | ✅ | || Privileged Identity Management (PIM), just-in-time access | ✅ |
-|| Entra governance | ❌ |
|**Event logging and reporting**|Basic security and usage reports | ✅ | || Advanced security and usage reports | ✅ | || Identity Protection: vulnerabilities and risky accounts | ✅ |
active-directory How To Migrate Mfa Server To Azure Mfa https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/how-to-migrate-mfa-server-to-azure-mfa.md
As part of enrolling users to use Microsoft Authenticator as a second factor, we
Microsoft Identity Manager (MIM) SSPR can use MFA Server to invoke SMS one-time passcodes as part of the password reset flow. MIM can't be configured to use Azure AD Multi-Factor Authentication. We recommend you evaluate moving your SSPR service to Azure AD SSPR.- You can use the opportunity of users registering for Azure AD Multi-Factor Authentication to use the combined registration experience to register for Azure AD SSPR.
+If you can't move your SSPR service, or you leverage MFA Server to invoke MFA requests for Privileged Access Management (PAM) scenarios, we recommend you update to an [alternate 3rd party MFA option](https://learn.microsoft.com/microsoft-identity-manager/working-with-custommfaserver-for-mim).
+ ### RADIUS clients and Azure AD Multi-Factor Authentication MFA Server supports RADIUS to invoke multifactor authentication for applications and network devices that support the protocol.
active-directory Howto Mfa App Passwords https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-mfa-app-passwords.md
By default, users can't create app passwords. The app passwords feature must be
When users complete their initial registration for Azure AD Multi-Factor Authentication, there's an option to create app passwords at the end of the registration process.
-Users can also create app passwords after registration. For more information and detailed steps for your users, see the following resources:
-* [What are app passwords in Azure AD Multi-Factor Authentication?](https://support.microsoft.com/account-billing/manage-app-passwords-for-two-step-verification-d6dc8c6d-4bf7-4851-ad95-6d07799387e9)
+Users can also create app passwords after registration. For more information and detailed steps for your users, see the following resource:
* [Create app passwords from the Security info page](https://support.microsoft.com/account-billing/create-app-passwords-from-the-security-info-preview-page-d8bc744a-ce3f-4d4d-89c9-eb38ab9d4137) ## Next steps
active-directory Howto Sspr Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-sspr-deployment.md
To quickly see SSPR in action and then come back to understand additional deploy
> [!div class="nextstepaction"] > [Enable self-service password reset (SSPR)](tutorial-enable-sspr.md)
+> [!TIP]
+> As a companion to this article, we recommend using the [Plan your self-service password reset deployment guide](https://go.microsoft.com/fwlink/?linkid=2221501) when signed in to the Microsoft 365 Admin Center. This guide will customize your experience based on your environment. To review best practices without signing in and activating automated setup features, go to the [M365 Setup portal](https://go.microsoft.com/fwlink/?linkid=2221600).
+ ## Learn about SSPR Learn more about SSPR. See [How it works: Azure AD self-service password reset](./concept-sspr-howitworks.md).
active-directory Scenario Desktop Acquire Token Wam https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-desktop-acquire-token-wam.md
This message indicates that either the application user closed the dialog that d
```powershell
# Re-register the AccountsControl package if it's missing, then confirm it's present
if (-not (Get-AppxPackage Microsoft.AccountsControl)) {
    Add-AppxPackage -Register "$env:windir\SystemApps\Microsoft.AccountsControl_cw5n1h2txyewy\AppxManifest.xml" -DisableDevelopmentMode -ForceApplicationShutdown
}
Get-AppxPackage Microsoft.AccountsControl
```
+### "MsalClientException: ErrorCode: wam_runtime_init_failed" error message during Single-file deployment
+
+You may see the following error when packaging your application into a [single file bundle](/dotnet/core/deploying/single-file/overview).
+
+```
+MsalClientException: wam_runtime_init_failed: The type initializer for 'Microsoft.Identity.Client.NativeInterop.API' threw an exception. See https://aka.ms/msal-net-wam#troubleshooting
+```
+
+This error indicates that the native binaries from the [Microsoft.Identity.Client.NativeInterop](https://www.nuget.org/packages/Microsoft.Identity.Client.NativeInterop/) were not packaged into the single file bundle. To embed those files for extraction and get one output file, set the property IncludeNativeLibrariesForSelfExtract to true. Read more about [how to package native binaries into a single file](/dotnet/core/deploying/single-file/overview?tabs=cli#native-libraries).
+
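For example, the property can be supplied on the publish command line instead of in the project file. This is a general .NET publish sketch under assumed values: the project path and runtime identifier are placeholders to adjust for your app.

```powershell
# Publish a single-file, self-extracting build that also embeds native libraries
# (such as the MSAL native interop binaries); adjust the project path and runtime as needed
dotnet publish .\MyDesktopApp.csproj -c Release -r win-x64 `
    -p:PublishSingleFile=true `
    -p:IncludeNativeLibrariesForSelfExtract=true
```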
+### Connection issues
+
+The application user sees an error message similar to "Please check your connection and try again." If this issue occurs regularly, see the [troubleshooting guide for Office](/microsoft-365/troubleshoot/authentication/connection-issue-when-sign-in-office-2016), which also uses the broker.
+ ## Sample
active-directory Web App Quickstart Portal Node Js Ciam https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/web-app-quickstart-portal-node-js-ciam.md
+
+ Title: "Quickstart: Add sign in to a React SPA"
+description: Learn how to run a sample React SPA to sign in users
+++++++++ Last updated : 04/12/2023++
+# Portal quickstart for React SPA
+
+> [!div renderon="portal" class="sxs-lookup"]
+> In this quickstart, you download and run a code sample that demonstrates how a React single-page application (SPA) can sign in users with Azure AD CIAM.
+>
+> ## Prerequisites
+>
+> * Azure subscription - [Create an Azure subscription for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F)
+> * [Node.js](https://nodejs.org/en/download/)
+> * [Visual Studio Code](https://code.visualstudio.com/download) or another code editor
+>
+> ## Download the code
+>
+> > [!div class="nextstepaction"]
+> > [Download the code sample](https://github.com/Azure-Samples/ms-identity-ciam-javascript-tutorial/archive/react-quickstart.zip)
+>
+> ## Run the sample
+>
+> 1. Unzip the downloaded file.
+>
+> 1. Locate the folder that contains the `package.json` file in your terminal, then run the following command:
+>
+> ```console
+> npm install && npm start
+> ```
+>
+> 1. Open your browser and visit `http://localhost:3000`.
+>
+> 1. Select the **Sign-in** link on the navigation bar.
+>
active-directory Whats Deprecated Azure Ad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/whats-deprecated-azure-ad.md
Last updated 01/27/2023 -+
Use the following table to learn about changes including deprecations, retiremen
|Functionality, feature, or service|Change|Change date | |||:| |Microsoft Authenticator app [Number matching](../authentication/how-to-mfa-number-match.md)|Feature change|May 8, 2023|
-|Azure AD DS [virtual network deployments](../../active-directory-domain-services/migrate-from-classic-vnet.md)|Retirement|Mar 1, 2023|
+|[My Groups experience](https://techcommunity.microsoft.com/t5/microsoft-entra-azure-ad-blog/microsoft-entra-change-announcements-march-2023-train/ba-p/2967448)|Feature change|May 2023|
+|[My Apps browser extension](https://techcommunity.microsoft.com/t5/microsoft-entra-azure-ad-blog/microsoft-entra-change-announcements-march-2023-train/ba-p/2967448)|Feature change|May 2023|
+|[System-preferred authentication methods](https://techcommunity.microsoft.com/t5/microsoft-entra-azure-ad-blog/microsoft-entra-change-announcements-march-2023-train/ba-p/2967448)|Feature change|On GA|
+|[Azure AD Authentication Library (ADAL)](https://techcommunity.microsoft.com/t5/microsoft-entra-azure-ad-blog/microsoft-entra-change-announcements-march-2023-train/ba-p/2967448)|Retirement|Jun 30, 2023|
+|[Azure AD Graph API](https://techcommunity.microsoft.com/t5/microsoft-entra-azure-ad-blog/microsoft-entra-change-announcements-march-2023-train/ba-p/2967448)|Deprecation|Jun 30, 2023|
+|[Azure AD PowerShell and MSOnline PowerShell](https://techcommunity.microsoft.com/t5/microsoft-entra-azure-ad-blog/microsoft-entra-change-announcements-march-2023-train/ba-p/2967448)|Deprecation|Jun 30, 2023|
+|[My Apps improvements](https://techcommunity.microsoft.com/t5/microsoft-entra-azure-ad-blog/microsoft-entra-change-announcements-march-2023-train/ba-p/2967448)|Feature change|Jun 30, 2023|
+|[Terms of Use experience](https://techcommunity.microsoft.com/t5/microsoft-entra-azure-ad-blog/microsoft-entra-change-announcements-march-2023-train/ba-p/2967448)|Feature change|Jul 2023|
+|[Azure AD MFA Server](https://techcommunity.microsoft.com/t5/microsoft-entra-azure-ad-blog/microsoft-entra-change-announcements-march-2023-train/ba-p/2967448)|Retirement|Sep 30, 2024|
+|[Legacy MFA & SSPR policy](https://techcommunity.microsoft.com/t5/microsoft-entra-azure-ad-blog/microsoft-entra-change-announcements-march-2023-train/ba-p/2967448)|Retirement|Sep 30, 2024|
|['Require approved client app' Conditional Access Grant](https://aka.ms/RetireApprovedClientApp)|Retirement|Mar 31, 2026|
++
+## Past changes
+
+|Functionality, feature, or service|Change|Change date |
+|||:|
+|[Azure AD Domain Services virtual network deployments](../../active-directory-domain-services/migrate-from-classic-vnet.md)|Retirement|Mar 1, 2023|
|[License management API, PowerShell](https://techcommunity.microsoft.com/t5/microsoft-entra-azure-ad-blog/migrate-your-apps-to-access-the-license-managements-apis-from/ba-p/2464366)|Retirement|*Mar 31, 2023|
-|[Azure AD Authentication Library (ADAL)](https://techcommunity.microsoft.com/t5/microsoft-entra-azure-ad-blog/microsoft-entra-change-announcements-september-2022-train/ba-p/2967454)|Retirement|Jun 30, 2023|
-|[Azure AD Graph API](https://techcommunity.microsoft.com/t5/microsoft-entra-azure-ad-blog/microsoft-entra-change-announcements-september-2022-train/ba-p/2967454)|Deprecation|Jun 30, 2023|
-|[Azure AD PowerShell and MSOnline PowerShell](https://techcommunity.microsoft.com/t5/microsoft-entra-azure-ad-blog/microsoft-entra-change-announcements-september-2022-train/ba-p/2967454)|Deprecation|Jun 30, 2023|
-|[Azure AD MFA Server](https://techcommunity.microsoft.com/t5/microsoft-entra-azure-ad-blog/microsoft-entra-change-announcements-september-2022-train/ba-p/2967454)|Retirement|Sep 30, 2024|
\* The legacy license management API and PowerShell cmdlets will not work for **new tenants** created after Nov 1, 2022.
Use the definitions in this section to help clarify the state, availability, and su
|Category|Definition|Communication schedule| ||||
-|Deprecation|The state of a feature, functionality, or service no longer in active development. A deprecated feature might be retired and removed from future releases.|2 times per year: March and September|
-|Retirement|Signals retirement in a specified period. Customers can't adopt the service or feature, and engineering investments are reduced. Later, the feature reaches end-of-life and is unavailable to any customer.|2 times per year: March and September|
+|Retirement|Signals retirement of a feature, capability, or product in a specified period. Customers can't adopt the service or feature, and engineering investments are reduced. Later, the feature reaches end-of-life and is unavailable to any customer.|2 times per year: March and September|
|Breaking change|A change that might break the customer or partner experience if action isn't taken, or a change made, for continued operation.|4 times per year: March, June, September, and December|
-|Feature change|Change to an IDNA feature that requires no customer action, but is noticeable to them. Typically, these changes are in the user interface/user experperience (UI/UX).|4 times per year: March, June, September, and December|
-|Rebranding|A new name, term, symbol, design, concept or combination thereof for an established brand to develop a differentiated experience.|As scheduled or announced|
+|Feature change|Change to an existing Identity feature that requires no customer action, but is noticeable to them. Typically, these changes are in the user interface/user experience (UI/UX).|4 times per year: March, June, September, and December|
### Terminology
active-directory Whats New Sovereign Clouds Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/whats-new-sovereign-clouds-archive.md
+
+ Title: Archive for What's new in Sovereign Clouds?
+description: The What's new in sovereign cloud release notes in the Overview section of this content set contain six months of activity. After six months, the items are removed from the main article and put into this archive article for the next two years.
+++++++ Last updated : 4/13/2023++++
+# Archive for What's new in Azure Sovereign Clouds?
+
+The primary [What's new in sovereign clouds release notes](whats-new-sovereign-clouds.md) article contains updates for the last six months, while this article contains older information up to two years.
+++++
+## September 2022
++
+### General Availability - No more waiting, provision groups on demand into your SaaS applications.
+
+**Type:** New feature
+**Service category:** Provisioning
+**Product capability:** Identity Lifecycle Management
+
+
+Pick a group of up to five members and provision them into your third-party applications in seconds. Get started testing, troubleshooting, and provisioning to non-Microsoft applications such as ServiceNow, ZScaler, and Adobe. For more information, see: [On-demand provisioning in Azure Active Directory](../app-provisioning/provision-on-demand.md).
+
++
+### General Availability - Devices Overview
+
+**Type:** New feature
+**Service category:** Device Registration and Management
+**Product capability:** Device Lifecycle Management
+
+
+
+The new Device Overview in the Azure portal provides meaningful and actionable insights about devices in your tenant.
+
+In the devices overview, you can view the number of total devices, stale devices, noncompliant devices, and unmanaged devices. You'll also find links to Intune, Conditional Access, BitLocker keys, and basic monitoring. For more information, see: [Manage device identities by using the Azure portal](../devices/device-management-azure-portal.md).
+
++
+### General Availability - Support for Linux as Device Platform in Azure AD Conditional Access
+
+**Type:** New feature
+**Service category:** Conditional Access
+**Product capability:** User Authentication
+
+
+
+Added support for "Linux" device platform in Azure AD Conditional Access.
+
+An admin can now require a user is on a compliant Linux device, managed by Intune, to sign-in to a selected service (for example 'all cloud apps' or 'Office 365'). For more information, see: [Device platforms](../conditional-access/concept-conditional-access-conditions.md#device-platforms)
+
++
+### General Availability - Cross-tenant access settings for B2B collaboration
+
+**Type:** Changed feature
+**Service category:** B2B
+**Product capability:** B2B/B2C
+
+
+
+Cross-tenant access settings enable you to control how users in your organization collaborate with members of external Azure AD organizations. Now you'll have granular inbound and outbound access control settings that work on a per org, user, group, and application basis. These settings also make it possible for you to trust security claims from external Azure AD organizations like multi-factor authentication (MFA), device compliance, and hybrid Azure AD joined devices. For more information, see: [Cross-tenant access with Azure AD External Identities](../external-identities/cross-tenant-access-overview.md).
+
++
+### General Availability - Location Aware Authentication using GPS from Authenticator App
+
+**Type:** New feature
+**Service category:** Conditional Access
+**Product capability:** Identity Security & Protection
+
+
+
+Admins can now enforce Conditional Access policies based off of GPS location from Authenticator. For more information, see: [Named locations](../conditional-access/location-condition.md#named-locations).
+
++
+### General Availability - My Sign-ins now supports org switching and improved navigation
+
+**Type:** Changed feature
+**Service category:** MFA
+**Product capability:** End User Experiences
+
+
+
+We've improved the My Sign-ins experience to now support organization switching. Now users who are guests in other tenants can easily switch and sign-in to manage their security info and view activity. More improvements were made to make it easier to switch from My Sign-ins directly to other end user portals such as My Account, My Apps, My Groups, and My Access. For more information, see: [Sign-in logs in Azure Active Directory - preview](../reports-monitoring/concept-all-sign-ins.md)
+
++
+### General Availability - Temporary Access Pass is now available
+
+**Type:** New feature
+**Service category:** MFA
+**Product capability:** User Authentication
+
+
+
+Temporary Access Pass (TAP) is now generally available. TAP can be used to securely register password-less methods such as Phone Sign-in, phishing resistant methods such as FIDO2, and even help Windows onboarding (AADJ and WHFB). TAP also makes recovery easier when a user has lost or forgotten their strong authentication methods and needs to sign in to register new authentication methods. For more information, see: [Configure Temporary Access Pass in Azure AD to register Passwordless authentication methods](../authentication/howto-authentication-temporary-access-pass.md).
+
++
+### General Availability - Ability to force reauthentication on Intune enrollment, risky sign-ins, and risky users
+
+**Type:** New feature
+**Service category:** Conditional Access
+**Product capability:** Identity Security & Protection
+
+
+
+In some scenarios, customers may want to require fresh authentication every time before a user performs specific actions. Sign-in frequency set to every time supports requiring a user to reauthenticate during Intune device enrollment, password change for risky users, and risky sign-ins.
+
+More information: [Configure authentication session management](../conditional-access/howto-conditional-access-session-lifetime.md#require-reauthentication-every-time).
+
++
+### General Availability - Non-interactive risky sign-ins
+
+**Type:** Changed feature
+**Service category:** Identity Protection
+**Product capability:** Identity Security & Protection
+
+
+
+Identity Protection now emits risk (such as unfamiliar sign-in properties) on non-interactive sign-ins. Admins can now find these non-interactive risky sign-ins using the "sign-in type" filter in the Risky sign-ins report. For more information, see: [How To: Investigate risk](../identity-protection/howto-identity-protection-investigate-risk.md).
+++
+
+### General Availability - Workload Identity Federation with App Registrations are available now
+
+**Type:** New feature
+**Service category:** Other
+**Product capability:** Developer Experience
+
+
+
+Entra Workload Identity Federation allows developers to exchange tokens issued by another identity provider with Azure AD tokens, without needing secrets. It eliminates the need to store, and manage, credentials inside the code or secret stores to access Azure AD protected resources such as Azure and Microsoft Graph. By removing the secrets required to access Azure AD protected resources, workload identity federation can improve the security posture of your organization. This feature also reduces the burden of secret management and minimizes the risk of service downtime due to expired credentials.
+
+For more information on this capability and supported scenarios, see: [Workload identity federation](../develop/workload-identity-federation.md).
+
+++
+### General Availability - Continuous Access Evaluation
+
+**Type:** New feature
+**Service category:** Other
+**Product capability:** Access Control
+
+
+
+With Continuous access evaluation (CAE), critical security events and policies are evaluated in real time. This includes account disable, password reset, and location change. For more information, see: [Continuous access evaluation](../conditional-access/concept-continuous-access-evaluation.md)
+
+++
+### Public Preview – Protect against by-passing of cloud Azure AD Multi-Factor Authentication when federated with Azure AD
+
+**Type:** New feature
+**Service category:** MS Graph
+**Product capability:** Identity Security & Protection
++
+We're delighted to announce a new security protection that prevents bypassing of cloud Azure AD Multi-Factor Authentication when federated with Azure AD. When enabled for a federated domain in your Azure AD tenant, it ensures that a compromised federated account can't bypass Azure AD Multi-Factor Authentication by imitating that a multi factor authentication has already been performed by the identity provider. The protection can be enabled via new security setting, [federatedIdpMfaBehavior](/graph/api/resources/internaldomainfederation?view=graph-rest-beta#federatedidpmfabehavior-values&preserve-view=true).
+
+We highly recommend enabling this new protection when using Azure AD Multi-Factor Authentication as your multi factor authentication for your federated users. To learn more about the protection and how to enable it, visit [Enable protection to prevent by-passing of cloud Azure AD Multi-Factor Authentication when federated with Azure AD](/windows-server/identity/ad-fs/deployment/best-practices-securing-ad-fs#enable-protection-to-prevent-by-passing-of-cloud-azure-ad-multi-factor-authentication-when-federated-with-azure-ad).
+
+
active-directory Whats New Sovereign Clouds https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/whats-new-sovereign-clouds.md
Azure AD receives improvements on an ongoing basis. To stay up to date with the
- [Azure Government](../../azure-government/documentation-government-welcome.md)
-This page is updated monthly, so revisit it regularly.
+This page updates monthly, so revisit it regularly. If you're looking for items older than six months, you can find them in [Archive for What's new in Sovereign Clouds](whats-new-archive.md).
+
+## March 2023
+
+### General Availability - Provisioning Insights Workbook
+
+**Type:** New feature
+**Service category:** Provisioning
+**Product capability:** Monitoring & Reporting
+
+This new workbook makes it easier to investigate and gain insights into your provisioning workflows in a given tenant. This includes HR-driven provisioning, cloud sync, app provisioning, and cross-tenant sync.
+
+Some key questions this workbook can help answer are:
+
+- How many identities have been synced in a given time range?
+- How many create, delete, update, or other operations were performed?
+- How many operations were successful, skipped, or failed?
+- What specific identities failed? And what step did they fail on?
+- For any given user, what tenants / applications were they provisioned or deprovisioned to?
+
+For more information, see: [Provisioning insights workbook](../app-provisioning/provisioning-workbook.md).
+++
+### General Availability - Follow Azure Active Directory best practices with recommendations
+
+**Type:** New feature
+**Service category:** Reporting
+**Product capability:** Monitoring & Reporting
+
+Azure Active Directory recommendations help you improve your tenant posture by surfacing opportunities to implement best practices. On a daily basis, Azure AD analyzes the configuration of your tenant. During this analysis, Azure Active Directory compares the data of a recommendation with the actual configuration of your tenant. If a recommendation is flagged as applicable to your tenant, the recommendation appears in the Recommendations section of the Azure Active Directory Overview.
+
+This release includes our first three recommendations:
+
+- Convert from per-user MFA to Conditional Access MFA
+- Migrate applications from AD FS to Azure Active Directory
+- Minimize MFA prompts from known devices.
+
+We're developing more recommendations, so stay tuned!
+
+For more information, see:
+
+- [What are Azure Active Directory recommendations?](../reports-monitoring/overview-recommendations.md).
+- [Use the Azure AD recommendations API to implement Azure AD best practices for your tenant](/graph/api/resources/recommendations-api-overview)
+++
+### General Availability - Improvements to Azure Active Directory Smart Lockout
+
+**Type:** Changed feature
+**Service category:** Other
+**Product capability:** User Management
+
+With a recent improvement, Smart Lockout now synchronizes the lockout state across Azure Active Directory data centers, so the total number of failed sign-in attempts allowed before an account is locked will match the configured lockout threshold.
+
+For more information, see: [Protect user accounts from attacks with Azure Active Directory smart lockout](../authentication/howto-password-smart-lockout.md).
+++
+### General Availability- MFA events from ADFS and NPS adapter available in Sign-in logs
+
+**Type:** Changed feature
+**Service category:** MFA
+**Product capability:** Identity Security & Protection
+
+Customers with Cloud MFA activity from ADFS adapter, or NPS Extension, can now see these events in the Sign-in logs, rather than the legacy multi-factor authentication activity report. Not all attributes in the sign-in logs are populated for these events due to limited data from the on-premises components. Customers with ADFS using AD Health Connect and customers using NPS with the latest NPS extension installed will have a richer set of data in the events.
+
+For more information, see: [Protect user accounts from attacks with Azure Active Directory smart lockout](../authentication/howto-password-smart-lockout.md).
++ ## February 2023
Filter and transform group names in token claims configuration using regular exp
**Service category:** Enterprise Apps **Product capability:** SSO
-Azure AD now has the capability to filter the groups included in the token using substring match on the display name or **onPremisesSAMAccountName** attributes of the group object. Only Groups the user is a member of will be included in the token.This was a blocker for some of our customers to migrate their apps from ADFS to Azure AD. This feature will unblock those challenges.
+Azure AD now has the capability to filter the groups included in the token using substring match on the display name or **onPremisesSAMAccountName** attributes of the group object. Only Groups the user is a member of will be included in the token. This was a blocker for some of our customers to migrate their apps from ADFS to Azure AD. This feature will unblock those challenges.
For more information, see: - [Group Filter](../develop/reference-claims-mapping-policy-type.md#group-filter).
Microsoft cloud settings let you collaborate with organizations from different M
- Microsoft Azure commercial and Microsoft Azure Government - Microsoft Azure commercial and Microsoft Azure China 21Vianet
-For more information about Microsoft cloud settings for B2B collaboration., see: [Microsoft cloud settings](../external-identities/cross-tenant-access-overview.md#microsoft-cloud-settings).
+For more information about Microsoft cloud settings for B2B collaboration, see: [Microsoft cloud settings](../external-identities/cross-tenant-access-overview.md#microsoft-cloud-settings).
We're excited to announce the general availability of hybrid cloud Kerberos trus
**Product capability:** Outbound to SaaS Applications
-Accidental deletion of users in your apps or in your on-premises directory could be disastrous. We're excited to announce the general availability of the accidental deletions prevention capability. When a provisioning job would cause a spike in deletions, it will first pause and provide you visibility into the potential deletions. You can then accept or reject the deletions and have time to update the job's scope if necessary. For more information, see [Understand how expression builder in Application Provisioning works](../app-provisioning/expression-builder.md).
+Accidental deletion of users in your apps or in your on-premises directory could be disastrous. We're excited to announce the general availability of the accidental deletions prevention capability. When a provisioning job would cause a spike in deletions, it will first pause and provide you with visibility into the potential deletions. You can then accept or reject the deletions and have time to update the job's scope if necessary. For more information, see [Understand how expression builder in Application Provisioning works](../app-provisioning/expression-builder.md).
Azure AD Connect Cloud Sync Password writeback now provides customers the abilit
-Accidental deletion of users in any system could be disastrous. We're excited to announce the general availability of the accidental deletions prevention capability as part of the Azure AD provisioning service. When the number of deletions to be processed in a single provisioning cycle spikes above a customer defined threshold, the Azure AD provisioning service will pause, provide you visibility into the potential deletions, and allow you to accept or reject the deletions. This functionality has historically been available for Azure AD Connect, and Azure AD Connect Cloud Sync. It's now available across the various provisioning flows, including both HR-driven provisioning and application provisioning.
+Accidental deletion of users in any system could be disastrous. We're excited to announce the general availability of the accidental deletions prevention capability as part of the Azure AD provisioning service. When the number of deletions to be processed in a single provisioning cycle spikes above a customer defined threshold, the Azure AD provisioning service will pause, provide you with visibility into the potential deletions, and allow you to accept or reject the deletions. This functionality has historically been available for Azure AD Connect, and Azure AD Connect Cloud Sync. It's now available across the various provisioning flows, including both HR-driven provisioning and application provisioning.
For more information, see: [Enable accidental deletions prevention in the Azure AD provisioning service](../app-provisioning/accidental-deletions.md)
For more information on how to use this feature, see: [Dynamic membership rule f
-## September 2022
--
-### General Availability - No more waiting, provision groups on demand into your SaaS applications.
-
-**Type:** New feature
-**Service category:** Provisioning
-**Product capability:** Identity Lifecycle Management
-
-
-Pick a group of up to five members and provision them into your third-party applications in seconds. Get started testing, troubleshooting, and provisioning to non-Microsoft applications such as ServiceNow, ZScaler, and Adobe. For more information, see: [On-demand provisioning in Azure Active Directory](../app-provisioning/provision-on-demand.md).
-
--
-### General Availability - Devices Overview
-
-**Type:** New feature
-**Service category:** Device Registration and Management
-**Product capability:** Device Lifecycle Management
-
-
-
-The new Device Overview in the Azure portal provides meaningful and actionable insights about devices in your tenant.
-
-In the devices overview, you can view the number of total devices, stale devices, noncompliant devices, and unmanaged devices. You'll also find links to Intune, Conditional Access, BitLocker keys, and basic monitoring. For more information, see: [Manage device identities by using the Azure portal](../devices/device-management-azure-portal.md).
-
--
-### General Availability - Support for Linux as Device Platform in Azure AD Conditional Access
-
-**Type:** New feature
-**Service category:** Conditional Access
-**Product capability:** User Authentication
-
-
-
-Added support for "Linux" device platform in Azure AD Conditional Access.
-
-An admin can now require a user is on a compliant Linux device, managed by Intune, to sign-in to a selected service (for example 'all cloud apps' or 'Office 365'). For more information, see: [Device platforms](../conditional-access/concept-conditional-access-conditions.md#device-platforms)
-
--
-### General Availability - Cross-tenant access settings for B2B collaboration
-
-**Type:** Changed feature
-**Service category:** B2B
-**Product capability:** B2B/B2C
-
-
-
-Cross-tenant access settings enable you to control how users in your organization collaborate with members of external Azure AD organizations. Now you'll have granular inbound and outbound access control settings that work on a per org, user, group, and application basis. These settings also make it possible for you to trust security claims from external Azure AD organizations like multi-factor authentication (MFA), device compliance, and hybrid Azure AD joined devices. For more information, see: [Cross-tenant access with Azure AD External Identities](../external-identities/cross-tenant-access-overview.md).
-
--
-### General Availability - Location Aware Authentication using GPS from Authenticator App
-
-**Type:** New feature
-**Service category:** Conditional Access
-**Product capability:** Identity Security & Protection
-
-
-
-Admins can now enforce Conditional Access policies based off of GPS location from Authenticator. For more information, see: [Named locations](../conditional-access/location-condition.md#named-locations).
-
--
-### General Availability - My Sign-ins now supports org switching and improved navigation
-
-**Type:** Changed feature
-**Service category:** MFA
-**Product capability:** End User Experiences
-
-
-
-We've improved the My Sign-ins experience to now support organization switching. Now users who are guests in other tenants can easily switch and sign-in to manage their security info and view activity. More improvements were made to make it easier to switch from My Sign-ins directly to other end user portals such as My Account, My Apps, My Groups, and My Access. For more information, see: [Sign-in logs in Azure Active Directory - preview](../reports-monitoring/concept-all-sign-ins.md)
-
--
-### General Availability - Temporary Access Pass is now available
-
-**Type:** New feature
-**Service category:** MFA
-**Product capability:** User Authentication
-
-
-
-Temporary Access Pass (TAP) is now generally available. TAP can be used to securely register password-less methods such as Phone Sign-in, phishing resistant methods such as FIDO2, and even help Windows onboarding (AADJ and WHFB). TAP also makes recovery easier when a user has lost or forgotten their strong authentication methods and needs to sign in to register new authentication methods. For more information, see: [Configure Temporary Access Pass in Azure AD to register Passwordless authentication methods](../authentication/howto-authentication-temporary-access-pass.md).
-
--
-### General Availability - Ability to force reauthentication on Intune enrollment, risky sign-ins, and risky users
-
-**Type:** New feature
-**Service category:** Conditional Access
-**Product capability:** Identity Security & Protection
-
-
-
-In some scenarios customers may want to require a fresh authentication, every time before a user performs specific actions. Sign-in frequency Every time support requiring a user to reauthenticate during Intune device enrollment, password change for risky users and risky sign-ins.
-
-More information: [Configure authentication session management](../conditional-access/howto-conditional-access-session-lifetime.md#require-reauthentication-every-time).
-
--
-### General Availability - Non-interactive risky sign-ins
-
-**Type:** Changed feature
-**Service category:** Identity Protection
-**Product capability:** Identity Security & Protection
-
-
-
-Identity Protection now emits risk (such as unfamiliar sign-in properties) on non-interactive sign-ins. Admins can now find these non-interactive risky sign-ins using the "sign-in type" filter in the Risky sign-ins report. For more information, see: [How To: Investigate risk](../identity-protection/howto-identity-protection-investigate-risk.md).
---
-
-### General Availability - Workload Identity Federation with App Registrations are available now
-
-**Type:** New feature
-**Service category:** Other
-**Product capability:** Developer Experience
-
-
-
-Entra Workload Identity Federation allows developers to exchange tokens issued by another identity provider with Azure AD tokens, without needing secrets. It eliminates the need to store, and manage, credentials inside the code or secret stores to access Azure AD protected resources such as Azure and Microsoft Graph. By removing the secrets required to access Azure AD protected resources, workload identity federation can improve the security posture of your organization. This feature also reduces the burden of secret management and minimizes the risk of service downtime due to expired credentials.
-
-For more information on this capability and supported scenarios, see: [Workload identity federation](../develop/workload-identity-federation.md).
-
---
-### General Availability - Continuous Access Evaluation
-
-**Type:** New feature
-**Service category:** Other
-**Product capability:** Access Control
-
-
-
-With Continuous access evaluation (CAE), critical security events and policies are evaluated in real time. This includes account disable, password reset, and location change. For more information, see: [Continuous access evaluation](../conditional-access/concept-continuous-access-evaluation.md)
-
---
-### Public Preview – Protect against by-passing of cloud Azure AD Multi-Factor Authentication when federated with Azure AD
-
-**Type:** New feature
-**Service category:** MS Graph
-**Product capability:** Identity Security & Protection
--
-We're delighted to announce a new security protection that prevents bypassing of cloud Azure AD Multi-Factor Authentication when federated with Azure AD. When enabled for a federated domain in your Azure AD tenant, it ensures that a compromised federated account can't bypass Azure AD Multi-Factor Authentication by imitating that a multi factor authentication has already been performed by the identity provider. The protection can be enabled via new security setting, [federatedIdpMfaBehavior](/graph/api/resources/internaldomainfederation?view=graph-rest-beta#federatedidpmfabehavior-values&preserve-view=true).
-
-We highly recommend enabling this new protection when using Azure AD Multi-Factor Authentication as your multi factor authentication for your federated users. To learn more about the protection and how to enable it, visit [Enable protection to prevent by-passing of cloud Azure AD Multi-Factor Authentication when federated with Azure AD](/windows-server/identity/ad-fs/deployment/best-practices-securing-ad-fs#enable-protection-to-prevent-by-passing-of-cloud-azure-ad-multi-factor-authentication-when-federated-with-azure-ad).
-
--- ## Next steps <!-- Add a context sentence for the following links --> - [What's new in Azure Active Directory?](whats-new.md)-- [Archive for What's new in Azure Active Directory?](whats-new-archive.md)
+- [Archive for What's new in Azure Active Directory?](whats-new-archive.md)
active-directory Lifecycle Workflow Tasks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/lifecycle-workflow-tasks.md
For Microsoft Graph the parameters for the **Add user to teams** task are as fol
### Enable user account
-Allows cloud-only user accounts to be enabled. Users with Azure AD role assignments are not supported, nor are users with membership or ownership of role-assignable groups. You're able to customize the task name and description for this task in the Azure portal.
+Allows cloud-only user accounts to be enabled. Users with Azure AD role assignments are not supported, nor are users with membership or ownership of role-assignable groups. You can utilize Azure Active Directory's HR driven provisioning to on-premises Active Directory to disable and enable synchronized accounts with an attribute mapping to `accountDisabled` based on data from your HR source. For more information, see: [Workday Configure attribute mappings](../saas-apps/workday-inbound-tutorial.md#part-4-configure-attribute-mappings) and [SuccessFactors Configure attribute mappings](../saas-apps/sap-successfactors-inbound-provisioning-tutorial.md#part-4-configure-attribute-mappings). You're able to customize the task name and description for this task in the Azure portal.
:::image type="content" source="media/lifecycle-workflow-task/enable-task.png" alt-text="Screenshot of Workflows task: enable user account.":::
For more information on setting up a Logic app to run with Lifecycle Workflows,
### Disable user account
-Allows cloud-only user accounts to be disabled. Users with Azure AD role assignments are not supported, nor are users with membership or ownership of role-assignable groups. You're able to customize the task name and description for this task in the Azure portal.
+Allows cloud-only user accounts to be disabled. Users with Azure AD role assignments are not supported, nor are users with membership or ownership of role-assignable groups. You can utilize Azure Active Directory's HR driven provisioning to on-premises Active Directory to disable and enable synchronized accounts with an attribute mapping to `accountDisabled` based on data from your HR source. For more information, see: [Workday Configure attribute mappings](../saas-apps/workday-inbound-tutorial.md#part-4-configure-attribute-mappings) and [SuccessFactors Configure attribute mappings](../saas-apps/sap-successfactors-inbound-provisioning-tutorial.md#part-4-configure-attribute-mappings). You're able to customize the task name and description for this task in the Azure portal.
:::image type="content" source="media/lifecycle-workflow-task/disable-task.png" alt-text="Screenshot of Workflows task: disable user account.":::
active-directory Howto Manage Inactive User Accounts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/howto-manage-inactive-user-accounts.md
The following details relate to the `lastSignInDateTime` property.
- To read the property, you need to grant the app the following Microsoft Graph permissions: - AuditLog.Read.All
- - Directory.Read.All
- User.Read.All - Each interactive sign-in that was successful results in an update of the underlying data store. Typically, successful sign-ins show up in the related sign-in report within 10 minutes.
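For illustration only (this sketch is not part of the article diff above), the `lastSignInDateTime` value surfaces through the `signInActivity` property of the Microsoft Graph `user` resource, so once the permissions listed above are granted, a query along these lines can return it. The specific `$select` usage shown here is an assumption about how you would typically request it:

```http
GET https://graph.microsoft.com/v1.0/users?$select=displayName,signInActivity
```

The response includes `signInActivity.lastSignInDateTime` for each user, which you can then compare against your inactivity threshold.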
active-directory Custom Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/custom-overview.md
Azure AD provides multiple options for assigning roles:
## License requirements
-Using built-in roles in Azure AD is free, while custom roles require an Azure AD Premium P1 license. To find the right license for your requirements, see [Comparing generally available features of the Free and Premium editions](https://www.microsoft.com/security/business/identity-access-management/azure-ad-pricing).
+Using built-in roles in Azure AD is free. Using custom roles requires an Azure AD Premium P1 license for every user with a custom role assignment. To find the right license for your requirements, see [Comparing generally available features of the Free and Premium editions](https://www.microsoft.com/security/business/identity-access-management/azure-ad-pricing).
## Next steps
active-directory Delegate By Task https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/delegate-by-task.md
You can further restrict permissions by assigning roles at smaller scopes or by
> | Create user | [User Administrator](permissions-reference.md#user-administrator) | | > | Delete users | [User Administrator](permissions-reference.md#user-administrator) | | > | Invalidate refresh tokens of limited admins | [User Administrator](permissions-reference.md#user-administrator) | |
-> | Invalidate refresh tokens of non-admins | [Password Administrator](permissions-reference.md#password-administrator) | [User Administrator](permissions-reference.md#user-administrator) |
+> | Invalidate refresh tokens of non-admins | [Helpdesk Administrator](permissions-reference.md#helpdesk-administrator) | [User Administrator](permissions-reference.md#user-administrator) |
> | Invalidate refresh tokens of privileged admins | [Privileged Authentication Administrator](permissions-reference.md#privileged-authentication-administrator) | | > | Read basic configuration | [Default user role](../fundamentals/users-default-permissions.md) | | > | Reset password for limited admins | [User Administrator](permissions-reference.md#user-administrator) | |
active-directory Protected Actions Add https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/protected-actions-add.md
+
+ Title: Add, test, or remove protected actions in Azure AD (preview)
+description: Learn how to add, test, or remove protected actions in Azure Active Directory.
++++++++ Last updated : 04/10/2022++
+# Add, test, or remove protected actions in Azure AD (preview)
+
+> [!IMPORTANT]
+> Protected actions are currently in PREVIEW.
+> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+[Protected actions](./protected-actions-overview.md) in Azure Active Directory (Azure AD) are permissions that have been assigned Conditional Access policies that are enforced when a user attempts to perform an action. This article describes how to add, test, or remove protected actions.
+
+## Prerequisites
+
+To add or remove protected actions, you must have:
+
+- Azure AD Premium P1 or P2 license
+- [Conditional Access Administrator](permissions-reference.md#conditional-access-administrator) or [Security Administrator](permissions-reference.md#security-administrator) role
+
+## Configure Conditional Access policy
+
+Protected actions use a Conditional Access authentication context, so you must configure an authentication context and add it to a Conditional Access policy. If you already have a policy with an authentication context, you can skip to the next section.
+
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com).
+
+1. Select **Azure Active Directory** > **Protect & secure** > **Conditional Access** > **Authentication context**.
+
+1. Select **New authentication context** to open the **Add authentication context** pane.
+
+1. Enter a name and description and then select **Save**.
+
+ :::image type="content" source="media/protected-actions-add/authentication-context-add.png" alt-text="Screenshot of Add authentication context pane to add a new authentication context." lightbox="media/protected-actions-add/authentication-context-add.png":::
+
+1. Select **Policies** > **New policy** to create a new policy.
+
+1. Create a new policy and select your authentication context.
+
+ For more information, see [Conditional Access: Cloud apps, actions, and authentication context](../conditional-access/concept-conditional-access-cloud-apps.md).
+
+ :::image type="content" source="media/protected-actions-add/policy-authentication-context.png" alt-text="Screenshot of New policy page to create a new policy with an authentication context." lightbox="media/protected-actions-add/policy-authentication-context.png":::
+
+## Add protected actions
+
+To add protected actions, assign a Conditional Access policy to one or more permissions using a Conditional Access authentication context.
+
+1. Select **Azure Active Directory** > **Roles & admins** > **Protected actions (Preview)**.
+
+ :::image type="content" source="media/protected-actions-add/protected-actions-start.png" alt-text="Screenshot of Add protected actions page in Roles and administrators." lightbox="media/protected-actions-add/protected-actions-start.png":::
+
+1. Select **Add protected actions** to add a new protected action.
+
+ If **Add protected actions** is disabled, make sure you're assigned the Conditional Access Administrator or Security Administrator role. For more information, see [Troubleshoot protected actions](#troubleshoot-protected-actions).
+
+1. Select a configured Conditional Access authentication context.
+
+1. Select **Select permissions** and select the permissions to protect with Conditional Access.
+
+ :::image type="content" source="media/protected-actions-add/permissions-select.png" alt-text="Screenshot of Add protected actions page with permissions selected." lightbox="media/protected-actions-add/permissions-select.png":::
+
+1. Select **Add**.
+
+1. When finished, select **Save**.
+
+   The new protected actions appear in the list of protected actions.
+
+## Test protected actions
+
+When a user performs a protected action, they'll need to satisfy Conditional Access policy requirements. This section shows the experience for a user being prompted to satisfy a policy. In this example, the user is required to authenticate with a FIDO security key before they can update Conditional Access policies.
+
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as a user that must satisfy the policy.
+
+1. Select **Azure Active Directory** > **Protect & secure** > **Conditional Access**.
+
+1. Select a Conditional Access policy to view it.
+
+ Policy editing is disabled because the authentication requirements haven't been satisfied. At the bottom of the page is the following note:
+
+ Editing is protected by an additional access requirement. Click here to reauthenticate.
+
+ :::image type="content" source="media/protected-actions-add/test-policy-reauthenticate.png" alt-text="Screenshot of a disabled Conditional Access policy with a note indicating to reauthenticate." lightbox="media/protected-actions-add/test-policy-reauthenticate.png":::
+
+1. Select **Click here to reauthenticate**.
+
+1. Complete the authentication requirements when the browser is redirected to the Azure AD sign-in page.
+
+ :::image type="content" source="media/protected-actions-add/test-policy-reauthenticate-sign-in.png" alt-text="Screenshot of a sign-in page to reauthenticate." lightbox="media/protected-actions-add/test-policy-reauthenticate-sign-in.png":::
+
+ After completing the authentication requirements, the policy can be edited.
+
+1. Edit the policy and save changes.
+
+ :::image type="content" source="media/protected-actions-add/test-policy-edit.png" alt-text="Screenshot of an enabled Conditional Access policy that can be edited." lightbox="media/protected-actions-add/test-policy-edit.png":::
+
+## Remove protected actions
+
+To remove protected actions, unassign Conditional Access policy requirements from a permission.
+
+1. Select **Azure Active Directory** > **Roles & admins** > **Protected actions (Preview)**.
+
+1. Find and select the permission Conditional Access policy to unassign.
+
+ :::image type="content" source="media/protected-actions-add/permissions-remove.png" alt-text="Screenshot of Protected actions page with permission selected to remove." lightbox="media/protected-actions-add/permissions-remove.png":::
+
+1. On the toolbar, select **Remove**.
+
+ After you remove the protected action, the permission won't have a Conditional Access requirement. A new Conditional Access policy can be assigned to the permission.
+
+## Microsoft Graph
+
+### Add protected actions
+
+Protected actions are added by assigning an authentication context value to a permission. Authentication context values that are available in the tenant can be discovered by calling the [authenticationContextClassReference](/graph/api/resources/authenticationcontextclassreference?branch=main) API.
+
+Authentication context can be assigned to a permission using the [unifiedRbacResourceAction](/graph/api/resources/unifiedrbacresourceaction?branch=main) API beta endpoint:
+
+```http
+https://graph.microsoft.com/beta/roleManagement/directory/resourceNamespaces/microsoft.directory/resourceActions/
+```
+
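The article doesn't show a request body for the assignment itself, but a minimal sketch, assuming the beta endpoint accepts a `PATCH` that sets the `authenticationContextId` property and using the placeholder context ID `c1`, could look like the following. Treat both the action name and the context ID as examples, not authoritative values:

```http
PATCH https://graph.microsoft.com/beta/roleManagement/directory/resourceNamespaces/microsoft.directory/resourceActions/microsoft.directory-conditionalAccessPolicies-delete-delete
Content-Type: application/json

{
    "authenticationContextId": "c1"
}
```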
+The following example shows how to get the authentication context ID that was set on the `microsoft.directory/conditionalAccessPolicies/delete` permission.
+
+```http
+GET https://graph.microsoft.com/beta/roleManagement/directory/resourceNamespaces/microsoft.directory/resourceActions/microsoft.directory-conditionalAccessPolicies-delete-delete?$select=authenticationContextId,isAuthenticationContextSettable
+```
+
+Resource actions with the property `isAuthenticationContextSettable` set to `true` support authentication context. The value of the `authenticationContextId` property is the authentication context ID that has been assigned to the action.
+
+To view the `isAuthenticationContextSettable` and `authenticationContextId` properties, include them in the `$select` query parameter when making the request to the resource action API, as shown in the previous example.
+
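As a further hedged sketch (the `$filter` support on this property is an assumption, not stated in the article), you could list every resource action that supports authentication context, along with any context already assigned, in a single request:

```http
GET https://graph.microsoft.com/beta/roleManagement/directory/resourceNamespaces/microsoft.directory/resourceActions?$filter=isAuthenticationContextSettable eq true&$select=name,authenticationContextId,isAuthenticationContextSettable
```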
+## Troubleshoot protected actions
+
+### Symptom - No authentication context values can be selected
+
+When attempting to select a Conditional Access authentication context, there are no values available to select.
++
+**Cause**
+
+No Conditional Access authentication context values have been enabled in the tenant.
+
+**Solution**
+
+Enable authentication context for the tenant by adding a new authentication context. Ensure **Publish to apps** is checked, so the value is available to be selected. For more information, see [Authentication context](../conditional-access/concept-conditional-access-cloud-apps.md#authentication-context).
+
+### Symptom - Policy isn't getting triggered
+
+In some cases, after a protected action has been added, users may not be prompted as expected. For example, if the policy requires multifactor authentication, a user may not see a sign-in prompt.
+
+**Cause 1**
+
+The user hasn't been assigned to the Conditional Access policies used for the protected action.
+
+**Solution 1**
+
+Use the Conditional Access [What If](../conditional-access/troubleshoot-conditional-access-what-if.md) tool to check whether the user has been assigned the policy. When using the tool, select the user and the authentication context that was used with the protected action. Select **What If** and verify the expected policy is listed in the **Policies that will apply** table. If the policy doesn't apply, check the policy's user assignment condition, and add the user.
+
+**Cause 2**
+
+The user has previously satisfied the policy. For example, they completed multifactor authentication earlier in the same session.
+
+**Solution 2**
+
+Check the [Azure AD sign-in events](../conditional-access/troubleshoot-conditional-access.md) to troubleshoot. The sign-in events will include details about the session, including if the user has already completed multifactor authentication. When troubleshooting with the sign-in logs, it's also helpful to check the policy details page, to confirm an authentication context was requested.
+
+### Symptom - No access to add protected actions
+
+When signed in you don't have permissions to add or remove protected actions.
+
+**Cause**
+
+You don't have permission to manage protected actions.
+
+**Solution**
+
+Make sure you're assigned the [Conditional Access Administrator](permissions-reference.md#conditional-access-administrator) or [Security Administrator](permissions-reference.md#security-administrator) role.
+
+### Symptom - Error returned using PowerShell to perform a protected action
+
+When using PowerShell to perform a protected action, an error is returned and there's no prompt to satisfy Conditional Access policy.
+
+**Cause**
+
+Microsoft Graph PowerShell supports step-up authentication, which is required to allow policy prompts. Azure PowerShell and Azure AD Graph PowerShell aren't supported for step-up authentication.
+
+**Solution**
+
+Make sure you're using Microsoft Graph PowerShell.
+
+## Next steps
+
+- [What are protected actions in Azure AD?](protected-actions-overview.md)
+- [Conditional Access authentication context](../conditional-access/concept-conditional-access-cloud-apps.md#authentication-context)
active-directory Protected Actions Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/protected-actions-overview.md
+
+ Title: What are protected actions in Azure AD? (preview)
+description: Learn about protected actions in Azure Active Directory.
++++++++ Last updated : 04/10/2023++
+# What are protected actions in Azure AD? (preview)
+
+> [!IMPORTANT]
+> Protected actions are currently in PREVIEW.
+> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+Protected actions in Azure Active Directory (Azure AD) are permissions that have been assigned [Conditional Access policies](../conditional-access/overview.md). When a user attempts to perform a protected action, they must first satisfy the Conditional Access policies assigned to the required permissions. For example, to allow administrators to update Conditional Access policies, you can require that they first satisfy the [Phishing-resistant MFA](../authentication/concept-authentication-strengths.md#built-in-authentication-strengths) policy.
+
+This article provides an overview of protected actions and how to get started using them.
+
+## Why use protected actions?
+
+You use protected actions when you want to add an additional layer of protection. Protected actions can be applied to permissions that require strong Conditional Access policy protection, independent of the role being used or how the user was given the permission. Because the policy enforcement occurs at the time the user attempts to perform the protected action and not during user sign-in or rule activation, users are prompted only when needed.
+
+## What policies are typically used with protected actions?
+
+We recommend using multi-factor authentication on all accounts, especially accounts with privileged roles. Protected actions can be used to require additional security. Here are some common stronger Conditional Access policies.
+
+- Stronger MFA authentication strengths, such as [Passwordless MFA](../authentication/concept-authentication-strengths.md#built-in-authentication-strengths) or [Phishing-resistant MFA](../authentication/concept-authentication-strengths.md#built-in-authentication-strengths).
+- Privileged access workstations, by using Conditional Access policy [device filters](../conditional-access/concept-condition-filters-for-devices.md).
+- Shorter session timeouts, by using Conditional Access [sign-in frequency session controls](../conditional-access/howto-conditional-access-session-lifetime.md#user-sign-in-frequency).
+
+## What permissions can be used with protected actions?
+
+For this preview, Conditional Access policies can be applied to a limited set of permissions. You can use protected actions in the following areas:
+
+- Conditional Access policy management
+- Custom rules that define network locations
+- Protected action management
+
+Here's the initial set of permissions:
+
+> [!div class="mx-tableFixed"]
+> | Permission | Description |
+> | | |
+> | microsoft.directory/conditionalAccessPolicies/basic/update | Update basic properties for conditional access policies |
+> | microsoft.directory/conditionalAccessPolicies/create | Create conditional access policies |
+> | microsoft.directory/conditionalAccessPolicies/delete | Delete conditional access policies |
+> | microsoft.directory/namedLocations/basic/update | Update basic properties of custom rules that define network locations |
+> | microsoft.directory/namedLocations/create | Create custom rules that define network locations |
+> | microsoft.directory/namedLocations/delete | Delete custom rules that define network locations |
+> | microsoft.directory/resourceNamespaces/resourceActions/authenticationContext/update | Update Conditional Access authentication context of Microsoft 365 role-based access control (RBAC) resource actions |
+
+## How do protected actions compare with Privileged Identity Management role activation?
+
+[Privileged Identity Management role activation](../privileged-identity-management/pim-how-to-change-default-settings.md) can also be assigned Conditional Access policies. This capability allows for policy enforcement only when a user activates a role, providing the most comprehensive protection. Protected actions are enforced only when a user takes an action that requires permissions with a Conditional Access policy assigned to it. Protected actions allow high-impact permissions to be protected, independent of a user role. Privileged Identity Management role activation and protected actions can be used together, for the strongest coverage.
+
+## Steps to use protected actions
+
+1. **Check permissions**
+
+ Check that you're assigned the [Conditional Access Administrator](permissions-reference.md#conditional-access-administrator) or [Security Administrator](permissions-reference.md#security-administrator) roles. If not, check with your administrator to assign the appropriate role.
+
+1. **Configure Conditional Access policy**
+
+ Configure a Conditional Access authentication context and an associated Conditional Access policy. Protected actions use an authentication context, which allows policy enforcement for fine-grain resources in a service, like Azure AD permissions. A good policy to start with is to require passwordless MFA and exclude an emergency account. [Learn more](../conditional-access/concept-conditional-access-cloud-apps.md#authentication-context)
+
+1. **Add protected actions**
+
+ Add protected actions by assigning Conditional Access authentication context values to selected permissions. [Learn more](./protected-actions-add.md#add-protected-actions)
+
+1. **Test protected actions**
+
+ Sign in as a user and test the user experience by performing the protected action. You should be prompted to satisfy the Conditional Access policy requirements. For example, if the policy requires multi-factor authentication, you should be redirected to the sign-in page and prompted for strong authentication. [Learn more](./protected-actions-add.md#test-protected-actions)
+
+## What happens with protected actions and applications?
+
+If an application or service attempts to perform a protected action, it must be able to handle the required Conditional Access policy. In some cases, a user might need to intervene and satisfy the policy. For example, they may be required to complete multi-factor authentication. In this preview, the following applications support step-up authentication for protected actions:
+
+- Azure Active Directory administrator experiences for the actions in the [Entra admin center](https://entra.microsoft.com) or [Azure portal](https://portal.azure.com)
+- [Microsoft Graph PowerShell](/powershell/microsoftgraph/overview?branch=main)
+- [Microsoft Graph Explorer](/graph/graph-explorer/graph-explorer-overview?branch=main)
+
+There are some known and expected limitations. The following applications will fail if they attempt to perform a protected action.
+
+- [Azure PowerShell](/powershell/azure/what-is-azure-powershell?branch=main)
+- [Azure AD PowerShell](/powershell/azure/active-directory/overview?branch=main)
+- Creating a new [terms of use](../conditional-access/terms-of-use.md) page or [custom control](../conditional-access/controls.md) in the Entra admin center or Azure portal. New terms of use pages or custom controls are registered with Conditional Access, so they're subject to the Conditional Access create, update, and delete protected actions. Temporarily removing the policy requirement from the Conditional Access create, update, and delete actions will allow the creation of a new terms of use page or custom control.
+
+If your organization has developed an application that calls the Microsoft Graph API to perform a protected action, you should review the code sample for how to handle a claims challenge using step-up authentication. For more information, see [Developer guide to Conditional Access authentication context](../develop/developer-guide-conditional-access-authentication-context.md).
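For orientation only (this sketch is not taken from the article or the linked guide), a claims challenge typically surfaces as a 401 response whose `WWW-Authenticate` header names the required authentication context. The header values below are illustrative placeholders, and the `claims` value is assumed to be the Base64-encoded string `{"access_token":{"acrs":{"essential":true,"value":"c1"}}}`, which the app replays during interactive sign-in to satisfy the policy:

```http
HTTP/1.1 401 Unauthorized
WWW-Authenticate: Bearer realm="", error="insufficient_claims", claims="eyJhY2Nlc3NfdG9rZW4iOnsiYWNycyI6eyJlc3NlbnRpYWwiOnRydWUsInZhbHVlIjoiYzEifX19"
```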
+
+## Best practices
+
+Here are some best practices for using protected actions.
+
+- **Have an emergency account**
+
+ When configuring Conditional Access policies for protected actions, be sure to have an emergency account that is excluded from the policy. This provides a mitigation against accidental lockout.
+
+- **Move user and sign-in risk policies to Conditional Access**
+
+ Conditional Access permissions aren't used when managing Azure AD Identity Protection risk policies. We recommend moving user and sign-in risk policies to Conditional Access.
+
+- **Use named network locations**
+
+ Named network location permissions aren't used when managing multi-factor authentication trusted IPs. We recommend using [named network locations](../conditional-access/location-condition.md#named-locations).
+
+- **Don't use protected actions to block access based on identity or group membership**
+
+ Protected actions are used to apply an access requirement to perform a protected action. They aren't intended to block use of a permission just based on user identity or group membership. Who has access to specific permissions is an authorization decision and should be controlled by role assignment.
+
+## License requirements
++
+## Next steps
+
+- [Add, test, or remove protected actions in Azure AD](./protected-actions-add.md)
active-directory Alinto Protect Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/alinto-protect-provisioning-tutorial.md
Title: 'Tutorial: Configure Alinto Protect for automatic user provisioning with Azure Active Directory'
-description: Learn how to automatically provision and de-provision user accounts from Azure AD to Alinto Protect.
+ Title: 'Tutorial: Configure Cleanmail for automatic user provisioning with Azure Active Directory'
+description: Learn how to automatically provision and deprovision user accounts from Azure AD to Cleanmail.
writer: twimmers
Last updated 11/21/2022
-# Tutorial: Configure Alinto Protect for automatic user provisioning
+# Tutorial: Configure Cleanmail for automatic user provisioning
-This tutorial describes the steps you need to do in both Alinto Protect and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and de-provisions users and groups to [Alinto Protect](https://www.alinto.com/) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md).
+This tutorial describes the steps you need to do in both Cleanmail and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and deprovisions users and groups to [Cleanmail](https://www.alinto.com/) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md).
## Capabilities supported > [!div class="checklist"]
-> * Create users in Alinto Protect
-> * Remove users in Alinto Protect when they do not require access anymore
-> * Keep user attributes synchronized between Azure AD and Alinto Protect
-> * [Single sign-on](../manage-apps/add-application-portal-setup-oidc-sso.md) to Alinto Protect (recommended).
+> * Create users in Cleanmail
+> * Remove users in Cleanmail when they do not require access anymore
+> * Keep user attributes synchronized between Azure AD and Cleanmail
+> * [Single sign-on](../manage-apps/add-application-portal-setup-oidc-sso.md) to Cleanmail (recommended).
## Prerequisites
The scenario outlined in this tutorial assumes that you already have the followi
* [An Azure AD tenant](../develop/quickstart-create-new-tenant.md) * A user account in Azure AD with [permission](../roles/permissions-reference.md) to configure provisioning (for example, Application Administrator, Cloud Application administrator, Application Owner, or Global Administrator).
-* A user account in Alinto Protect with Admin permission
+* A user account in Cleanmail with Admin permission
## Step 1. Plan your provisioning deployment 1. Learn about [how the provisioning service works](../app-provisioning/user-provisioning.md). 1. Determine who will be in [scope for provisioning](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-1. Determine what data to [map between Azure AD and Alinto Protect](../app-provisioning/customize-application-attributes.md).
+1. Determine what data to [map between Azure AD and Cleanmail](../app-provisioning/customize-application-attributes.md).
-## Step 2. Configure Alinto Protect to support provisioning with Azure AD
+## Step 2. Configure Cleanmail to support provisioning with Azure AD
-Contact [Alinto Protect Support](https://www.alinto.com/contact-email-provider/) to configure Alinto to support provisioning with Azure AD.
+Contact [Cleanmail Support](https://www.alinto.com/contact-email-provider/) to configure Cleanmail to support provisioning with Azure AD.
-## Step 3. Add Alinto Protect from the Azure AD application gallery
+## Step 3. Add Cleanmail from the Azure AD application gallery
-Add Alinto Protect from the Azure AD application gallery to start managing provisioning to Alinto Protect. If you have previously setup Alinto Protect for SSO, you can use the same application. However it's recommended you create a separate app when testing out the integration initially. Learn more about adding an application from the gallery [here](../manage-apps/add-application-portal.md).
+Add Cleanmail from the Azure AD application gallery to start managing provisioning to Cleanmail. If you have previously setup Cleanmail for SSO, you can use the same application. However it's recommended you create a separate app when testing out the integration initially. Learn more about adding an application from the gallery [here](../manage-apps/add-application-portal.md).
## Step 4. Define who will be in scope for provisioning
The Azure AD provisioning service allows you to scope who will be provisioned ba
* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* If you need additional roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
+* If you need more roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
-## Step 5. Configure automatic user provisioning to Alinto Protect
+## Step 5. Configure automatic user provisioning to Cleanmail
-This section guides you through the steps to configure the Azure AD provisioning service to create, update, and disable users and groups in Alinto Protect based on user and group assignments in Azure AD.
+This section guides you through the steps to configure the Azure AD provisioning service to create, update, and disable users and groups in Cleanmail based on user and group assignments in Azure AD.
-### To configure automatic user provisioning for Alinto Protect in Azure AD:
+### To configure automatic user provisioning for Cleanmail in Azure AD:
1. Sign in to the [Azure portal](https://portal.azure.com). Select **Enterprise Applications**, then select **All applications**. ![Enterprise applications blade](common/enterprise-applications.png)
-1. In the applications list, select **Alinto Protect**.
+1. In the applications list, select **Cleanmail**.
- ![The Alinto Protect link in the Applications list](common/all-applications.png)
+ ![The Cleanmail link in the Applications list](common/all-applications.png)
1. Select the **Provisioning** tab.
This section guides you through the steps to configure the Azure AD provisioning
![Provisioning tab automatic](common/provisioning-automatic.png)
-1. In the **Admin Credentials** section, input your Alinto Protect Tenant URL as `https://cloud.cleanmail.eu/api/v3/scim2` and corresponding Secret Token obtained from Step 2. Click **Test Connection** to ensure Azure AD can connect to Alinto Protect. If the connection fails, ensure your Alinto Protect account has Admin permissions and try again.
+1. In the **Admin Credentials** section, input your Cleanmail Tenant URL as `https://cloud.cleanmail.eu/api/v3/scim2` and corresponding Secret Token obtained from Step 2. Click **Test Connection** to ensure Azure AD can connect to Cleanmail. If the connection fails, ensure your Cleanmail account has Admin permissions and try again.
![Token](common/provisioning-testconnection-tenanturltoken.png)
This section guides you through the steps to configure the Azure AD provisioning
1. Select **Save**.
-1. In the **Mappings** section, select **Synchronize Azure Active Directory Users to Alinto Protect**.
+1. In the **Mappings** section, select **Synchronize Azure Active Directory Users to Cleanmail**.
-1. Review the user attributes that are synchronized from Azure AD to Alinto Protect in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in Alinto Protect for update operations. If you choose to change the [matching target attribute](../app-provisioning/customize-application-attributes.md), you'll need to ensure that the Alinto Protect API supports filtering users based on that attribute. Select the **Save** button to commit any changes.
+1. Review the user attributes that are synchronized from Azure AD to Cleanmail in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in Cleanmail for update operations. If you choose to change the [matching target attribute](../app-provisioning/customize-application-attributes.md), you need to ensure that the Cleanmail API supports filtering users based on that attribute. Select the **Save** button to commit any changes.
- |Attribute|Type|Supported for filtering|Required by Alinto Protect|
+ |Attribute|Type|Supported for filtering|Required by Cleanmail|
||||| |userName|String|&check;|&check; |active|Boolean||&check;
This section guides you through the steps to configure the Azure AD provisioning
1. To configure scoping filters, refer to the following instructions provided in the [Scoping filter tutorial](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-1. To enable the Azure AD provisioning service for Alinto Protect, change the **Provisioning Status** to **On** in the **Settings** section.
+1. To enable the Azure AD provisioning service for Cleanmail, change the **Provisioning Status** to **On** in the **Settings** section.
![Provisioning Status Toggled On](common/provisioning-toggle-on.png)
-1. Define the users and groups that you would like to provision to Alinto Protect by choosing the desired values in **Scope** in the **Settings** section.
+1. Define the users and groups that you would like to provision to Cleanmail by choosing the desired values in **Scope** in the **Settings** section.
![Provisioning Scope](common/provisioning-scope.png)
Once you've configured provisioning, use the following resources to monitor your
* Use the [provisioning logs](../reports-monitoring/concept-provisioning-logs.md) to determine which users have been provisioned successfully or unsuccessfully * Check the [progress bar](../app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user.md) to see the status of the provisioning cycle and how close it's to completion
-* If the provisioning configuration seems to be in an unhealthy state, the application will go into quarantine. Learn more about quarantine states [here](../app-provisioning/application-provisioning-quarantine-status.md).
+* If the provisioning configuration seems to be in an unhealthy state, the application goes into quarantine. Learn more about quarantine states [here](../app-provisioning/application-provisioning-quarantine-status.md).
## More resources
active-directory Better Stack Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/better-stack-provisioning-tutorial.md
The scenario outlined in this tutorial assumes that you already have the followi
1. Determine what data to [map between Azure AD and Better Stack](../app-provisioning/customize-application-attributes.md). ## Step 2. Configure Better Stack to support provisioning with Azure AD
-Contact Better Stack support to configure Better Stack to support provisioning with Azure AD.
+You can configure Azure AD provisioning in the Single Sign-on settings inside the Better Stack dashboard. Once enabled, you'll see the **Tenant ID** and the **Secret token** that you can use in the Provisioning settings below. If you need any help, feel free to contact [Better Stack Support](mailto:hello@betterstack.com).
## Step 3. Add Better Stack from the Azure AD application gallery
active-directory Cisco Anyconnect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/cisco-anyconnect.md
Previously updated : 11/21/2022 Last updated : 04/12/2023
Follow these steps to enable Azure AD SSO in the Azure portal.
![Edit Basic SAML Configuration](common/edit-urls.png)
-1. On the **Set up single sign-on with SAML** page, enter the values for the following fields (note that the values are case-sensitive):
+1. On the **Set up single sign-on with SAML** page, enter the values for the following fields:
1. In the **Identifier** text box, type a URL using the following pattern: `https://<SUBDOMAIN>.YourCiscoServer.com/saml/sp/metadata/<Tunnel_Group_Name>`
Follow these steps to enable Azure AD SSO in the Azure portal.
1. In the **Reply URL** text box, type a URL using the following pattern: `https://<YOUR_CISCO_ANYCONNECT_FQDN>/+CSCOE+/saml/sp/acs?tgname=<Tunnel_Group_Name>`
+ > [!NOTE]
+ > `<Tunnel_Group_Name>` is case-sensitive and the value must not contain dots (.) or slashes (/).
+ > [!NOTE] > For clarification about these values, contact Cisco TAC support. Update these values with the actual Identifier and Reply URL provided by Cisco TAC. Contact the [Cisco AnyConnect Client support team](https://www.cisco.com/c/en/us/support/https://docsupdatetracker.net/index.html) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
active-directory Citi Program Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/citi-program-tutorial.md
Previously updated : 03/26/2023 Last updated : 04/12/2023
Add CITI Program from the Azure AD application gallery to configure single sign-
### Create and assign Azure AD test user
-Follow the guidelines in the [create and assign a user account](../manage-apps/add-application-portal-assign-users.md) article to create a test user account in the Azure portal called B.Simon.
+Follow the guidelines in the [create and assign a user account](../manage-apps/add-application-portal-assign-users.md) article to create a test user account in the Azure portal.
Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, and assign roles. The wizard also provides a link to the single sign-on configuration pane in the Azure portal. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides).
Complete the following steps to enable Azure AD single sign-on in the Azure port
1. CITI Program application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes.
- ![Screenshot shows the image of attributes configuration.](common/default-attributes.png "Image")
+ ![Screenshot shows the image of attributes configuration.](common/default-attributes.png "Default Attributes")
-1. In addition to above, CITI Program application expects few more attributes to be passed back in SAML response, which are shown below. These attributes are also pre populated but you can review them as per your requirements.
+1. The CITI Program application expects the urn:oid named attributes shown below to be passed back in the SAML response. These attributes are also pre-populated, but you can review them as per your requirements. All of them are required.
| Name | Source Attribute| | | | | urn:oid:1.3.6.1.4.1.5923.1.1.1.6 | user.userprincipalname |
- | urn:oid:0.9.2342.19200300.100.1.3 | user.userprincipalname |
+ | urn:oid:0.9.2342.19200300.100.1.3 | user.mail |
| urn:oid:2.5.4.42 | user.givenname | | urn:oid:2.5.4.4 | user.surname |
+1. If you wish to pass additional information in the SAML response, CITI Program can also accept the following optional attributes.
+
+ | Name | Source Attribute|
+ | | |
+ | urn:oid:2.16.840.1.113730.3.1.241 | user.displayname |
+ | urn:oid:2.16.840.1.113730.3.1.3 | user.employeeid |
+ 1. On the **Set-up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Federation Metadata XML** and select **Download** to download the certificate and save it on your computer. ![Screenshot shows the Certificate download link.](common/metadataxml.png "Certificate")
Complete the following steps to enable Azure AD single sign-on in the Azure port
## Configure CITI Program SSO
-To configure single sign-on on **CITI Program** side, you need to send the downloaded **Federation Metadata XML** and appropriate copied URLs from Azure portal to [CITI Program support team](mailto:shibboleth@citiprogram.org). They set this setting to have the SAML SSO connection set properly on both sides.
-
-### Create CITI Program test user
-
-In this section, a user called B.Simon is created in CITI Program. CITI Program supports just-in-time user provisioning, which is enabled by default. There's no action item for you in this section. If a user doesn't already exist in CITI Program, a new one is commonly created after authentication.
+To configure single sign-on on **CITI Program** side, you need to send the downloaded **Federation Metadata XML** and appropriate copied URLs from Azure portal to [CITI Program support team](mailto:shibboleth@citiprogram.org). This is required to have the SAML SSO connection set properly on both sides.
## Test SSO
In this section, you test your Azure AD single sign-on configuration with follow
* You can use Microsoft My Apps. When you click the CITI Program tile in the My Apps, this will redirect to CITI Program Sign-on URL. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+CITI Program supports just-in-time user provisioning. First-time SSO users will be prompted to either:
+
+* Link their existing CITI Program account, if they already have one
+![SSOHaveAccount](https://user-images.githubusercontent.com/46728557/228357500-a74489c7-8c5f-4cbe-ad47-9757d3d9fbe6.PNG "Link existing CITI Program account")
+
+* Create a new CITI Program account, which is automatically provisioned
+![SSONotHaveAccount](https://user-images.githubusercontent.com/46728557/228357503-f4eba4bb-f3fa-43e9-a98a-f0da87074eeb.PNG "Provision new CITI Program account")
+ ## Additional resources
+* [CITI Program SSO Technical Information](https://support.citiprogram.org/s/article/single-sign-on-sso-and-shibboleth-technical-specs#EntityInformation)
* [What is single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
-* [Plan a single sign-on deployment](../manage-apps/plan-sso-deployment.md).
+* [Plan a single sign-on deployment](../manage-apps/plan-sso-deployment.md)
## Next steps
active-directory Cobalt Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/cobalt-tutorial.md
Previously updated : 11/21/2022 Last updated : 04/12/2023
Follow these steps to enable Azure AD SSO in the Azure portal.
`https://brightside-prod-<INSTANCENAME>.cobaltdl.com` > [!NOTE]
- > The value is not real. Update the value with the actual Sign-On URL. Contact [Cobalt Client support team](https://www.cobalt.net/support/) to get the value. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+ > The value is not real. Update the value with the actual Sign-On URL. Contact [Cobalt Client support team](https://cobaltio.zendesk.com/hc/requests/new) to get the value. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
5. Cobalt application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes.
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
### Create Cobalt test user
-In this section, you create a user called B.Simon in Cobalt. Work with [Cobalt support team](https://www.cobalt.net/support/) to add the users in the Cobalt platform. Users must be created and activated before you use single sign-on.
+1. Log in to the Cobalt website as an administrator.
+1. Navigate to **People -> Organization** and select **Invite Users**.
+1. In the overlay that appears, specify the email addresses of users that you want to invite. Enter the email, and then select **Add** or press **Enter**.
+1. Use commas to separate multiple email addresses.
+1. For each user, select a role: **Member** or **Owner**.
+1. Both members and owners have access to all assets and pentests of an organization.
+1. Select **Invite** to confirm.
## Test SSO
active-directory Howspace Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/howspace-provisioning-tutorial.md
+
+ Title: 'Tutorial: Configure Howspace for automatic user provisioning with Azure Active Directory'
+description: Learn how to automatically provision and deprovision user accounts from Azure AD to Howspace.
++
+writer: twimmers
+
+ms.assetid: 4cc83a2e-916c-464b-8a8e-5e68c3aeb9f4
++++ Last updated : 04/12/2023+++
+# Tutorial: Configure Howspace for automatic user provisioning
+
+This tutorial describes the steps you need to perform in both Howspace and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and deprovisions users and groups to [Howspace](https://www.howspace.com/) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md).
++
+## Supported capabilities
+> [!div class="checklist"]
+> * Create users in Howspace.
+> * Remove users in Howspace when they do not require access anymore.
+> * Keep user attributes synchronized between Azure AD and Howspace.
+> * Provision groups and group memberships in Howspace.
+> * [Single sign-on](../manage-apps/add-application-portal-setup-oidc-sso.md) to Howspace (recommended).
+
+## Prerequisites
+
+The scenario outlined in this tutorial assumes that you already have the following prerequisites:
+
+* [An Azure AD tenant](../develop/quickstart-create-new-tenant.md)
+* A user account in Azure AD with [permission](../roles/permissions-reference.md) to configure provisioning (for example, Application Administrator, Cloud Application administrator, Application Owner, or Global Administrator).
+* A user account in Howspace with Admin permissions.
+
+## Step 1. Plan your provisioning deployment
+1. Learn about [how the provisioning service works](../app-provisioning/user-provisioning.md).
+1. Determine who will be in [scope for provisioning](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+1. Determine what data to [map between Azure AD and Howspace](../app-provisioning/customize-application-attributes.md).
+
+## Step 2. Configure Howspace to support provisioning with Azure AD
+Contact Howspace support to configure Howspace to support provisioning with Azure AD.
+
+## Step 3. Add Howspace from the Azure AD application gallery
+
+Add Howspace from the Azure AD application gallery to start managing provisioning to Howspace. If you have previously setup Howspace for SSO, you can use the same application. However it's recommended that you create a separate app when testing out the integration initially. Learn more about adding an application from the gallery [here](../manage-apps/add-application-portal.md).
+
+## Step 4. Define who will be in scope for provisioning
+
+The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and or based on attributes of the user / group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control provisioning by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+
+* If you need more roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
++
+## Step 5. Configure automatic user provisioning to Howspace
+
+This section guides you through the steps to configure the Azure AD provisioning service to create, update, and disable users and/or groups in Howspace based on user and/or group assignments in Azure AD.
+
+### To configure automatic user provisioning for Howspace in Azure AD:
+
+1. Sign in to the [Azure portal](https://portal.azure.com). Select **Enterprise Applications**, then select **All applications**.
+
+ ![Screenshot of Enterprise applications blade.](common/enterprise-applications.png)
+
+1. In the applications list, select **Howspace**.
+
+ ![Screenshot of the Howspace link in the Applications list.](common/all-applications.png)
+
+1. Select the **Provisioning** tab.
+
+ ![Screenshot of Provisioning tab.](common/provisioning.png)
+
+1. Set the **Provisioning Mode** to **Automatic**.
+
+ ![Screenshot of Provisioning tab automatic.](common/provisioning-automatic.png)
+
+1. Under the **Admin Credentials** section, input your Howspace Tenant URL and Secret Token. Click **Test Connection** to ensure Azure AD can connect to Howspace. If the connection fails, ensure your Howspace account has Admin permissions and try again.
+
+ ![Screenshot of Token.](common/provisioning-testconnection-tenanturltoken.png)
+
+1. In the **Notification Email** field, enter the email address of a person or group who should receive the provisioning error notifications and select the **Send an email notification when a failure occurs** check box.
+
+ ![Screenshot of Notification Email.](common/provisioning-notification-email.png)
+
+1. Select **Save**.
+
+1. Under the **Mappings** section, select **Synchronize Azure Active Directory Users to Howspace**.
+
+1. Review the user attributes that are synchronized from Azure AD to Howspace in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in Howspace for update operations. If you choose to change the [matching target attribute](../app-provisioning/customize-application-attributes.md), you need to ensure that the Howspace API supports filtering users based on that attribute. Select the **Save** button to commit any changes.
+
+ |Attribute|Type|Supported for filtering|Required by Howspace|
+ |||||
+ |userName|String|&check;|&check;
+ |active|Boolean||
+ |name.givenName|String||
+ |name.familyName|String||
+ |phoneNumbers[type eq "work"].value|String||
+ |externalId|String||
+
+1. Under the **Mappings** section, select **Synchronize Azure Active Directory Groups to Howspace**.
+
+1. Review the group attributes that are synchronized from Azure AD to Howspace in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the groups in Howspace for update operations. Select the **Save** button to commit any changes.
+
+ |Attribute|Type|Supported for filtering|Required by Howspace|
+ |||||
+ |displayName|String|&check;|&check;
+ |externalId|String||
+ |members|Reference||
+
+1. To configure scoping filters, refer to the following instructions provided in the [Scoping filter tutorial](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+
+1. To enable the Azure AD provisioning service for Howspace, change the **Provisioning Status** to **On** in the **Settings** section.
+
+ ![Screenshot of Provisioning Status Toggled On.](common/provisioning-toggle-on.png)
+
+1. Define the users and/or groups that you would like to provision to Howspace by choosing the desired values in **Scope** in the **Settings** section.
+
+ ![Screenshot of Provisioning Scope.](common/provisioning-scope.png)
+
+1. When you're ready to provision, click **Save**.
+
+ ![Screenshot of Saving Provisioning Configuration.](common/provisioning-configuration-save.png)
+
+This operation starts the initial synchronization cycle of all users and groups defined in **Scope** in the **Settings** section. The initial cycle takes longer to perform than subsequent cycles, which occur approximately every 40 minutes as long as the Azure AD provisioning service is running.
+
+## Step 6. Monitor your deployment
+Once you've configured provisioning, use the following resources to monitor your deployment:
+
+* Use the [provisioning logs](../reports-monitoring/concept-provisioning-logs.md) to determine which users have been provisioned successfully or unsuccessfully
+* Check the [progress bar](../app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user.md) to see the status of the provisioning cycle and how close it's to completion
+* If the provisioning configuration seems to be in an unhealthy state, the application goes into quarantine. Learn more about quarantine states [here](../app-provisioning/application-provisioning-quarantine-status.md).
+
+## More resources
+
+* [Managing user account provisioning for Enterprise Apps](../app-provisioning/configure-automatic-user-provisioning-portal.md)
+* [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+
+## Next steps
+
+* [Learn how to review logs and get reports on provisioning activity](../app-provisioning/check-status-user-account-provisioning.md)
active-directory Salesforce Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/salesforce-provisioning-tutorial.md
For more information on how to read the Azure AD provisioning logs, see [Reporti
* The credentials used have admin access to Salesforce. * The version of Salesforce that you are using supports Web Access (e.g. Developer, Enterprise, Sandbox, and Unlimited editions of Salesforce.) * Web API access is enabled for the user.
-* The Azure AD provisioning service supports provisioning language, locale, and timeZone for a user. These attributes are in the default attribute mappings but do not have a default source attribute. Ensure that you select the default source attribute and that the source attribute is in the format expected by SalesForce. For example, localeSidKey for english(UnitedStates) is en_US. Review the guidance provided [here](https://help.salesforce.com/articleView?id=setting_your_language.htm&type=5) to determine the proper localeSidKey format. The languageLocaleKey formats can be found [here](https://help.salesforce.com/articleView?id=faq_getstart_what_languages_does.htm&type=5). In addition to ensuring that the format is correct, you may need to ensure that the language is enabled for your users as described [here](https://help.salesforce.com/articleView?id=setting_your_language.htm&type=5).
+* The Azure AD provisioning service supports provisioning language, locale, and timeZone for a user. These attributes are in the default attribute mappings but do not have a default source attribute. Ensure that you select the default source attribute and that the source attribute is in the format expected by SalesForce. For example, localeSidKey for english(UnitedStates) is en_US. Review the guidance provided [here](https://help.salesforce.com/articleView?id=faq_getstart_what_languages_does.htm&type=5) to determine the proper localeSidKey format. The languageLocaleKey formats can be found [here](https://help.salesforce.com/articleView?id=faq_getstart_what_languages_does.htm&type=5). In addition to ensuring that the format is correct, you may need to ensure that the language is enabled for your users as described [here](https://help.salesforce.com/articleView?id=faq_getstart_what_languages_does.htm&type=5).
* **SalesforceLicenseLimitExceeded:** The user could not be created in the target application because there are no available licenses for this user. Either procure additional licenses for the target application, or review your user assignments and attribute mapping configuration to ensure that the correct users are assigned with the correct attributes. * **SalesforceDuplicateUserName:** The user cannot be provisioned because it has a Salesforce.com 'Username' that is duplicated in another Salesforce.com tenant.ΓÇ» In Salesforce.com, values for the 'Username' attribute must be unique across all Salesforce.com tenants.ΓÇ» By default, a userΓÇÖs userPrincipalName in Azure Active Directory becomes their 'Username' in Salesforce.com.ΓÇ» You have two options.ΓÇ» One option is to find and rename the user with the duplicate 'Username' in the other Salesforce.com tenant, if you administer that other tenant as well.ΓÇ» The other option is to remove access from the Azure Active Directory user to the Salesforce.com tenant with which your directory is integrated. We will retry this operation on the next synchronization attempt. * **SalesforceRequiredFieldMissing:** Salesforce requires certain attributes to be present on the user to successfully create or update the user. This user is missing one of the required attributes. Ensure that attributes such as email and alias are populated on all users that you would like to be provisioned into Salesforce. You can scope users that don't have these attributes out using [attribute based scoping filters](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
active-directory Vera Suite Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/vera-suite-tutorial.md
Previously updated : 03/31/2023 Last updated : 04/12/2023
Complete the following steps to enable Azure AD single sign-on in the Azure port
1. In the Azure portal, on the **Vera Suite** application integration page, find the **Manage** section and select **single sign-on**. 1. On the **Select a single sign-on method** page, select **SAML**.
-1. On the **Set up single sign-on with SAML** page, select the pencil icon for **Basic SAML Configuration** to edit the settings.
- ![Screenshot shows how to edit Basic SAML Configuration.](common/edit-urls.png "Basic Configuration")
-
-1. On the **Basic SAML Configuration** section, perform the following steps:
-
- a. In the **Identifier** textbox, type the URL:
- `https://logon.mykpa.com/identity/Saml2/`
-
- b. In the **Reply URL** textbox, type the URL:
- `https://logon.mykpa.com/identity/Saml2/Acs`
-
- c. In the **Sign on URL** textbox, type one of the following URLs:
-
- | **Sign on URL** |
- |-|
- | `https://www.verasuite.com` |
- | `https://logon.mykpa.com` |
+1. In the **Basic SAML Configuration** section, you don't have to perform any steps because the app is already pre-integrated with Azure.
1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, select the copy button to copy the **App Federation Metadata Url** and save it on your computer.
active-directory Hipaa Access Controls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/standards/hipaa-access-controls.md
+
+ Title: Configure Azure Active Directory HIPAA access control safeguards
+description: Guidance on how to configure Azure AD HIPAA access control safeguards
+++++++++ Last updated : 04/13/2023++++
+# Access control safeguard guidance
+
+Azure Active Directory (Azure AD) meets identity-related practice requirements for implementing Health Insurance Portability and Accountability Act of 1996 (HIPAA) safeguards. To be HIPAA compliant, implement the safeguards using this guidance. You might need to modify other configurations or processes.
+
+To understand the **User Identification Safeguard**, we recommend you research and set objectives that enable you to:
+
+* Ensure IDs are unique to everyone that needs to connect to the domain.
+
+* Establish a Joiner, Mover, and Leaver (JML) process.
+
+* Enable auditing for identity tracking.
+
+For the **Authorized Access Control Safeguard**, set objectives so that:
+
+* System access is limited to authorized users.
+
+* Authorized users are identified.
+
+* Access to personal data is limited to authorized users.
+
+For the **Emergency Access Procedure Safeguard**:
+
+* Ensure high availability of core services.
+
+* Eliminate single points of failure.
+
+* Establish a disaster recovery plan.
+
+* Ensure backups of high-risk data.
+
+* Establish and maintain emergency access accounts.
+
+For the **Automatic Logoff Safeguard**:
+
+* Establish a procedure that terminates an electronic session after a predetermined time of inactivity.
+
+* Configure and implement an automatic sign out policy.
+
+## Unique user identification
+
+The following table has access control safeguards from the HIPAA guidance for unique user identification. Find Microsoft recommendations to meet safeguard implementation requirements.
+
+**HIPAA safeguard - unique user identification**
+
+```Assign a unique name and/or number for identifying and tracking user identity.```
+
+| Recommendation | Action |
+| - | - |
+| Set up hybrid to utilize Azure AD | [Azure AD Connect](../hybrid/how-to-connect-install-express.md) integrates on-premises directories with Azure AD, supporting the use of single identities to access on-premises applications and cloud services such as Microsoft 365. It orchestrates synchronization between Active Directory (AD) and Azure AD. To get started with Azure AD Connect, review the prerequisites, making note of the server requirements and how to prepare your Azure AD tenant for management.</br>[Azure AD Connect sync](../cloud-sync/tutorial-pilot-aadc-aadccp.md) is a provisioning agent that is managed on the cloud. The provisioning agent supports synchronizing to Azure AD from a multi-forest disconnected AD environment. Lightweight agents are installed and can be used with Azure AD Connect.</br>We recommend you use **Password Hash Sync** to help reduce the number of passwords and protect against leaked credential detection.|
+| Provision user accounts |[Azure AD](../fundamentals/add-users-azure-active-directory.md) is a cloud-based identity and access management service that provides single sign-on, multi-factor authentication, and Conditional Access to guard against security attacks. To create a user account, sign in to the Azure AD portal as a **User Admin** and create a new account by navigating to [All users](../fundamentals/add-users-azure-active-directory.md) in the menu.</br>Azure AD provides support for automated user provisioning for systems and applications. Capabilities include creating, updating, and deleting a user account. Automated provisioning creates new accounts in the right systems for new people when they join a team in an organization, and automated deprovisioning deactivates accounts when people leave the team. Configure provisioning by navigating to the Azure AD portal and selecting [enterprise applications](../app-provisioning/configure-automatic-user-provisioning-portal.md) to add and manage the app settings. |
+|HR-driven provisioning | [Integrating Azure AD account provisioning](../app-provisioning/plan-cloud-hr-provision.md) within a Human Resources (HR) system reduces the risk of excessive access and access no longer required. The HR system becomes the start-of-authority for newly created accounts, extending the capabilities to account deprovisioning. Automation manages the identity lifecycle and reduces the risk of over-provisioning. This approach follows the security best practice of providing least privilege access. |
+| Create lifecycle workflows | [Lifecycle workflows](../governance/understanding-lifecycle-workflows.md) provide identity governance for automating the joiner/mover/leaver (JML) lifecycle. Lifecycle workflows centralize the workflow process by either using the [built-in templates](../governance/lifecycle-workflow-templates.md) or creating your own custom workflows. This practice helps reduce or potentially remove manual tasks for organizational JML strategy requirements. Within the Azure portal, navigate to **Identity Governance** in the Azure AD menu to review or configure tasks that fit within your organizational requirements. |
+| Manage privileged identities | [Azure AD Privileged Identity Management (PIM)](../privileged-identity-management/pim-configure.md) enables management, control, and the ability to monitor access. You provide access when it's needed, on a time-based and approval-based role activation. This approach limits the risk of excessive, unnecessary, or misused access permissions. |
+| Monitoring and alerting | [Identity Protection](../identity-protection/overview-identity-protection.md) provides a consolidated view into risk events and potential vulnerabilities that could affect an organization's identities. Enabling the protection applies the existing Azure AD anomaly detection capabilities and introduces risk event types that detect anomalies in real-time. Through the Azure AD portal, you can review sign-in, audit, and provisioning logs.</br>The logs can be [downloaded, archived, and streamed](../reports-monitoring/howto-download-logs.md) to your security information and event management (SIEM) tool. Azure AD logs can be located in the monitoring section of the Azure AD menu. The logs can also be sent to [Azure Monitor](../reports-monitoring/concept-activity-logs-azure-monitor.md) using an Azure log analytics workspace where you can set up alerting on the connected data.</br>Azure AD uniquely identifies users via the [ID property](/graph/api/resources/user?view=graph-rest-1.0&preserve-view=true) on the respective directory object. This approach enables you to filter for specific identities in the log files. |
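The monitoring row above notes that Azure AD identifies each user by an immutable **id** and exposes sign-in activity through Microsoft Graph. The following Python sketch is illustrative only and not part of the committed article: it resolves a user's object ID and lists their recent sign-in events, assuming an app-only Graph token with User.Read.All and AuditLog.Read.All permissions; the UPN `jane.doe@contoso.com` is a placeholder.

```python
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
# Assumption: TOKEN holds an app-only Microsoft Graph access token (for example,
# acquired with MSAL client credentials) that has User.Read.All and AuditLog.Read.All.
TOKEN = "<access-token>"
HEADERS = {"Authorization": f"Bearer {TOKEN}"}

# Resolve the user's immutable object ID from a placeholder UPN.
user = requests.get(
    f"{GRAPH}/users/jane.doe@contoso.com",
    params={"$select": "id,displayName,userPrincipalName"},
    headers=HEADERS,
).json()
print(user["id"], user["userPrincipalName"])

# List the user's five most recent sign-in events for identity tracking.
signins = requests.get(
    f"{GRAPH}/auditLogs/signIns",
    params={"$filter": f"userId eq '{user['id']}'", "$top": "5"},
    headers=HEADERS,
).json()
for event in signins.get("value", []):
    print(event["createdDateTime"], event["appDisplayName"], event["status"]["errorCode"])
```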
+
+## Authorized access control
+
+The following table has HIPAA guidance for access control safeguards for authorized access control. Find Microsoft recommendations to meet safeguard implementation requirements.
+
+**HIPAA safeguard - authorized access control**
+
+```Person or entity authentication, implement procedures to verify that a person or entity seeking access to electronic protected health information is the one claimed.```
+
+| Recommendation | Action |
+| - | - |
+| Enable multi-factor authentication (MFA) | [MFA in Azure AD](../authentication/concept-mfa-howitworks.md) protects identities by adding another layer of security. The extra layer of authentication is effective in helping prevent unauthorized access. Using an MFA approach enables you to require more validation of sign-in credentials during the authentication process. Examples include setting up the [Authenticator app](https://support.microsoft.com/account-billing/set-up-an-authenticator-app-as-a-two-step-verification-method-2db39828-15e1-4614-b825-6e2b524e7c95) for one-click verification, or enabling [passwordless authentication](../authentication/concept-authentication-passwordless.md). |
+| Enable Conditional Access (CA) policies | [Conditional Access](../conditional-access/concept-conditional-access-policies.md) policies help organizations restrict access to approved applications. Azure AD analyses signals from either the user, device, or the location to automate decisions and enforce organizational policies for access to resources and data. |
+| Enable role-based access control (RBAC) | [RBAC](../roles/custom-overview.md) provides security on an enterprise level with the concept of separation of duties. RBAC enables you to adjust and review permissions to protect confidentiality, privacy and access management to resources and sensitive data along with the systems.</br>Azure AD provides support for [built-in roles](../roles/permissions-reference.md), which is a fixed set of permissions that can't be modified. You can also create your own [custom roles](../roles/custom-create.md) where you can add a preset list. |
+| Enable attribute-based access control (ABAC) | [ABAC](../../role-based-access-control/conditions-overview.md) defines access based on attributes associated with security principals, resources, and environment. It provides fine-grained access control and reduces the number of role assignments. The use of ABAC can be scoped to the content within the dedicated Azure storage. |
+| Configure user groups access in SharePoint | [SharePoint groups](/sharepoint/dev/general-development/authorization-users-groups-and-the-object-model-in-sharepoint) are a collection of users. The permissions are scoped to the site collection level for access to the content. Application of this constraint can be scoped to service accounts that require data flow access between applications. |
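As a companion to the Conditional Access and MFA rows above, this Python sketch (illustrative only, not from the committed article) creates a report-only Conditional Access policy through Microsoft Graph that requires MFA for a hypothetical application. The application ID is a placeholder, and the token is assumed to carry the Policy.ReadWrite.ConditionalAccess permission.

```python
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
TOKEN = "<access-token>"  # assumption: token with Policy.ReadWrite.ConditionalAccess
HEADERS = {"Authorization": f"Bearer {TOKEN}", "Content-Type": "application/json"}

# Report-only policy: require MFA for all users signing in to a placeholder app.
policy = {
    "displayName": "HIPAA - require MFA for EHR application",
    "state": "enabledForReportingButNotEnforced",
    "conditions": {
        "users": {"includeUsers": ["All"]},
        "applications": {"includeApplications": ["00000000-0000-0000-0000-000000000000"]},
    },
    "grantControls": {"operator": "OR", "builtInControls": ["mfa"]},
}
resp = requests.post(f"{GRAPH}/identity/conditionalAccess/policies", json=policy, headers=HEADERS)
resp.raise_for_status()
print("Created policy", resp.json()["id"])
```

Starting the policy in report-only mode lets you review its sign-in impact before switching the state to enforced.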
+
+## Emergency access procedure
+
+The following table has HIPAA guidance access control safeguards for emergency access procedures. Find Microsoft recommendations to meet safeguard implementation requirements.
+
+**HIPAA safeguard - emergency access procedure**
+
+```Establish (and implement as needed) procedures and policies for obtaining necessary electronic protected health information during an emergency or occurrence.```
+
+| Recommendation | Action |
+| - | - |
+| Use Azure Recovery Services | [Azure Backups](../../backup/backup-architecture.md) provide the support required to back up vital and sensitive data. Coverage includes storage/databases and cloud infrastructure, along with on-premises Windows devices to the cloud. Establish [backup policies](../../backup/backup-architecture.md#backup-policy-essentials) to address backup and recovery process risks. Ensure data is safely stored and can be retrieved with minimal downtime. </br>Azure Site Recovery provides near-constant data replication to ensure copies of data are in sync. Initial steps prior to setting up the service are to determine the recovery point objective (RPO) and recovery time objective (RTO) to support your organizational requirements. |
+| Ensure resiliency | [Resiliency](/azure/architecture/framework/resiliency/overview) helps to maintain service levels when there's disruption to business operations and core IT services. The capability spans services, data, Azure AD, and AD considerations. Determine a strategic [resiliency plan](/azure/architecture/checklist/resiliency-per-service) that includes the systems and data that rely on Azure AD and hybrid environments. [Microsoft 365 resiliency](/compliance/assurance/assurance-sharepoint-onedrive-data-resiliency) covers the core services, including Exchange, SharePoint, and OneDrive, to protect against data corruption and applies resiliency data points to protect ePHI content. |
+| Create break glass accounts | Establishing an emergency or a [break glass account](../roles/security-emergency-access.md) ensures that systems and services can still be accessed in unforeseen circumstances, such as network failures or other reasons for administrative access loss. We recommend you don't associate this account with an [individual user](../authentication/concept-authentication-passwordless.md) or account. |
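The break glass row above recommends maintaining emergency access accounts. The sketch below (illustrative only, not part of the committed article) creates a cloud-only account for that purpose through Microsoft Graph; the UPN and display name are placeholders, and the token is assumed to have User.ReadWrite.All. Store the generated password securely offline and exclude the account from Conditional Access and MFA policies, as the linked guidance describes.

```python
import requests
import secrets

GRAPH = "https://graph.microsoft.com/v1.0"
TOKEN = "<access-token>"  # assumption: token with User.ReadWrite.All
HEADERS = {"Authorization": f"Bearer {TOKEN}", "Content-Type": "application/json"}

# Cloud-only emergency access ("break glass") account with a long random password.
account = {
    "accountEnabled": True,
    "displayName": "Emergency access account 01",
    "mailNickname": "emergency-access-01",
    "userPrincipalName": "emergency-access-01@contoso.onmicrosoft.com",
    "passwordProfile": {
        "forceChangePasswordNextSignIn": False,
        "password": secrets.token_urlsafe(32),
    },
}
resp = requests.post(f"{GRAPH}/users", json=account, headers=HEADERS)
resp.raise_for_status()
print("Created break glass account", resp.json()["id"])
```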
+
+## Workstation security - automatic logoff
+
+The following table has HIPAA guidance on the automatic logoff safeguard. Find Microsoft recommendations to meet safeguard implementation requirements.
+
+**HIPAA safeguard - automatic logoff**
+
+```Implement electronic procedures that terminate an electronic session after a predetermined time of inactivity. Create a policy and procedure to determine the length of time that a user is allowed to stay logged on, after a predetermined period of inactivity.```
+
+| Recommendation | Action |
+| - | - |
+| Create group policy | For devices not migrated to Azure AD and managed by Intune, [Group Policy (GPO)](../../active-directory-domain-services/manage-group-policy.md) can enforce sign-out or lock-screen time for devices on AD or in hybrid environments. |
+| Assess device management requirements | [Microsoft Intune](/mem/intune/fundamentals/what-is-intune) provides mobile device management (MDM) and mobile application management (MAM). It provides control over company and personal devices. You can manage device usage and enforce policies to control mobile applications. |
+| Device Conditional Access policy | Implement device lock by using a conditional access policy to restrict access to [compliant](../conditional-access/concept-conditional-access-grant.md) or hybrid Azure AD joined devices. Configure [policy settings](../conditional-access/concept-conditional-access-grant.md#require-hybrid-azure-ad-joined-device).</br>For unmanaged devices, configure the [Sign-In Frequency](../conditional-access/howto-conditional-access-session-lifetime.md) setting to force users to reauthenticate. |
+| Configure session time out for Microsoft 365 | Review the [session timeouts](/microsoft-365/admin/manage/idle-session-timeout-web-apps) for Microsoft 365 applications and services, to amend any prolonged timeouts. |
+| Configure session time out for Azure portal | Review the [session timeouts for the Azure portal](../../azure-portal/set-preferences.md). Implementing a timeout due to inactivity helps protect resources from unauthorized access. |
+| Review application access sessions | [Continuous access evaluation](../conditional-access/concept-continuous-access-evaluation.md) policies can deny or grant access to applications. If the sign-in is successful, the user is given an access token that is valid for one (1) hour. Once the access token expires the client is directed back to Azure AD, conditions are reevaluated, and the token is refreshed for another hour. |
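The session-control rows above can also be expressed programmatically. The following Python sketch (illustrative only, not part of the committed article) creates a report-only Conditional Access policy with a one-hour sign-in frequency and no persistent browser sessions, approximating automatic logoff for browser access. The scope and state values are placeholders to review before enforcing, and the token is assumed to have Policy.ReadWrite.ConditionalAccess.

```python
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
TOKEN = "<access-token>"  # assumption: token with Policy.ReadWrite.ConditionalAccess
HEADERS = {"Authorization": f"Bearer {TOKEN}", "Content-Type": "application/json"}

# Session controls: reauthenticate after one hour and never persist browser sessions.
policy = {
    "displayName": "HIPAA - 1 hour sign-in frequency, no persistent sessions",
    "state": "enabledForReportingButNotEnforced",
    "conditions": {
        "users": {"includeUsers": ["All"]},
        "applications": {"includeApplications": ["All"]},
    },
    "sessionControls": {
        "signInFrequency": {"isEnabled": True, "type": "hours", "value": 1},
        "persistentBrowser": {"isEnabled": True, "mode": "never"},
    },
}
resp = requests.post(f"{GRAPH}/identity/conditionalAccess/policies", json=policy, headers=HEADERS)
resp.raise_for_status()
print("Created session policy", resp.json()["id"])
```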
+
+## Learn more
+
+* [Zero Trust Pillar: Identity, Devices](/security/zero-trust/zero-trust-overview)
+
+* [Zero Trust Pillar: Identity, Data](/security/zero-trust/zero-trust-overview)
+
+* [Zero Trust Pillar: Devices, Identity, Application](/security/zero-trust/zero-trust-overview)
+
+## Next steps
+
+* [Access Controls Safeguard guidance](hipaa-access-controls.md)
+
+* [Audit Controls Safeguard guidance](hipaa-audit-controls.md)
+
+* [Other Safeguard guidance](hipaa-other-controls.md)
active-directory Hipaa Audit Controls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/standards/hipaa-audit-controls.md
+
+ Title: Configure Azure Active Directory HIPAA audit control safeguards
+description: Guidance on how to configure Azure Active Directory HIPAA audit control safeguards
+++++++++ Last updated : 04/13/2023++++
+# Audit controls safeguard guidance
+
+Azure Active Directory (Azure AD) meets identity-related practice requirements for implementing Health Insurance Portability and Accountability Act of 1996 (HIPAA) safeguards. To be HIPAA compliant, implement the safeguards using this guidance, with other needed configurations or processes.
+
+For the audit controls:
+
+* Establish data governance for personal data storage.
+
+* Identify and label sensitive data.
+
+* Configure audit collection and secure log data.
+
+* Configure data loss prevention.
+
+* Enable information protection.
+
+For the safeguard:
+
+* Determine where Protected Health Information (PHI) data is stored.
+
+* Identify and mitigate any risks for data that is stored.
+
+This article provides relevant HIPAA safeguard wording, followed by a table with Microsoft recommendations and guidance to help achieve HIPAA compliance.
+
+## Audit controls
+
+The following content is safeguard guidance from HIPAA. Find Microsoft recommendations to meet safeguard implementation requirements.
+
+**HIPAA safeguard - audit controls**
+
+```Implement hardware, software, and/or procedural mechanisms that record and examine activity in information systems that contain or use electronic protected health information.```
+
+| Recommendation | Action |
+| - | - |
+| Enable Microsoft Purview | [Microsoft Purview](/purview/purview) helps to manage and monitor data by providing data governance. Using Purview helps to minimize compliance risks and meet regulatory requirements.</br>Microsoft Purview in the governance portal provides a [unified data governance](/microsoft-365/compliance/manage-data-governance) service that helps you manage your on-premises, multicloud, and Software-as-a-Service (SaaS) data.</br>Microsoft Purview is a framework, a suite of products that work together to provide visualization of sensitive data, lifecycle protection for data, and data loss prevention. |
+| Enable Microsoft Sentinel | [Microsoft Sentinel](../../sentinel/overview.md) provides security information and event management (SIEM) and security orchestration, automation, and response (SOAR) solutions. Microsoft Sentinel collects audit logs and uses built-in AI to help analyze large volumes of data. </br>SIEM enables an organization to detect incidents that could go undetected. |
+| Configure Azure Monitor | [Azure Monitor Logs](../../azure-monitor/logs/data-security.md) collects and organizes log data, extending to cloud and hybrid environments. The service provides recommendations on key areas for how to protect resources, combined with the Azure trust center. |
+| Enable logging and monitoring | [Logging and monitoring](/security/benchmark/azure/security-control-logging-monitoring) are essential to securing an environment. The data supports investigations and helps detect potential threats by identifying unusual patterns. Enable logging and monitoring of services to reduce the risk of unauthorized access.</br>We recommend you monitor [Azure AD activity logs](../reports-monitoring/howto-access-activity-logs.md). |
+| Scan environment for electronic protected health information (ePHI) data | [Microsoft Purview](../../purview/overview.md) can be enabled in audit mode to scan what ePHI is sitting in the data estate and the resources that are being used to store that data. This capability helps in establishing data classification and labeling based on the sensitivity of the data. |
+| Create a data loss prevention (DLP) policy | DLP policies help establish processes to ensure that sensitive data isn't lost, misused, or accessed by unauthorized users. They help prevent data breaches and exfiltration.</br>[Microsoft Purview DLP](/microsoft-365/compliance/dlp-policy-reference) examines email messages. Navigate to the Microsoft Purview compliance portal to review the policies and customize them for your organization. |
+| Enable monitoring through Azure Policy | [Azure Policy](../../governance/policy/overview.md) helps to enforce organizational standards, and enables the ability to assess the state of compliance across an environment. This approach ensures consistency and regulatory compliance, and provides monitoring and security recommendations through [Microsoft Defender for Cloud](../../defender-for-cloud/defender-for-cloud-introduction.md). |
+| Assess device management requirements | [Microsoft Intune](/mem/intune/) can be used to provide mobile device management (MDM) and mobile application management (MAM). Microsoft Intune provides control over company and personal devices. Capabilities include managing how devices can be used and enforcing policies that give you direct control over mobile applications. |
+| Application protection | Microsoft Intune can help establish a [data protection framework](/mem/intune/apps/app-protection-policy) that covers the Microsoft 365 Office applications and incorporates them across devices. App protection policies ensure that organizational data remains safe and contained in the app on both personal (BYOD) and corporate-owned devices. |
+| Configure insider risk management | Microsoft Purview [Insider Risk Management](/microsoft-365/compliance/insider-risk-management-solution-overview) correlates signals to identify potential malicious or inadvertent insider risks, such as IP theft, data leakage, and security violations. Insider Risk Management enables you to create policies to manage security and compliance. This capability is built upon the principle of privacy by design, users are pseudonymized by default, and role-based access controls and audit logs are in place to help ensure user-level privacy. |
+| Configure communication compliance | Microsoft Purview [Communication Compliance](/microsoft-365/compliance/communication-compliance-solution-overview) provides the tools to help organizations detect regulatory compliance such as compliance for Securities and Exchange Commission (SEC) or Financial Industry Regulatory Authority (FINRA) standards. The tool monitors for business conduct violations such as sensitive or confidential information, harassing or threatening language, and sharing of adult content. This capability is built with privacy by design, usernames are pseudonymized by default, role-based access controls are built in, investigators are opted in by an admin, and audit logs are in place to help ensure user-level privacy. |
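The logging rows above center on collecting Azure AD activity for audit purposes. This Python sketch (illustrative only, not part of the committed article) pulls the last 24 hours of directory audit events from Microsoft Graph, the same data that can be archived or streamed to a SIEM such as Microsoft Sentinel; it assumes a token with AuditLog.Read.All.

```python
import requests
from datetime import datetime, timedelta, timezone

GRAPH = "https://graph.microsoft.com/v1.0"
TOKEN = "<access-token>"  # assumption: token with AuditLog.Read.All
HEADERS = {"Authorization": f"Bearer {TOKEN}"}

# Directory audit events from the last 24 hours.
since = (datetime.now(timezone.utc) - timedelta(days=1)).strftime("%Y-%m-%dT%H:%M:%SZ")
resp = requests.get(
    f"{GRAPH}/auditLogs/directoryAudits",
    params={"$filter": f"activityDateTime ge {since}", "$top": "50"},
    headers=HEADERS,
)
resp.raise_for_status()
for record in resp.json().get("value", []):
    print(record["activityDateTime"], record["activityDisplayName"], record["result"])
```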
+
+## Safeguard controls
+
+The following content provides the safeguard controls guidance from HIPAA. Find Microsoft recommendations to meet HIPAA compliance.
+
+**HIPAA - safeguard**
+
+```Conduct an accurate and thorough safeguard of the potential risks and vulnerabilities to the confidentiality, integrity, and availability of electronic protected health information held by the covered entity.```
+
+| Recommendation | Action |
+| - | - |
+| Scan environment for ePHI data | [Microsoft Purview](../../purview/overview.md) can be enabled in audit mode to scan what ePHI is sitting in the data estate, and the resources that are being used to store that data. This information helps in establishing data classification and labeling based on the sensitivity of the data.</br>In addition, using [Content Explorer](/microsoft-365/compliance/data-classification-content-explorer) provides visibility into where the sensitive data is located. This information helps start the labeling journey, from manually applying labels, to labeling recommendations on the client side, to service-side autolabeling. |
+| Enable Priva to safeguard Microsoft 365 data | [Microsoft Priva](/privacy/priva/priva-overview) evaluates ePHI data stored in Microsoft 365, scanning and evaluating it for sensitive information. |
+|Enable Azure Security benchmark |[Microsoft cloud security benchmark](/security/benchmark/azure/introduction) provides control for data protection across Azure services and provides a baseline for implementation for services that store ePHI. Audit mode provides those recommendations and remediation steps to secure the environment. |
+| Enable Defender Vulnerability Management | [Microsoft Defender Vulnerability Management](../../defender-for-cloud/remediate-vulnerability-findings-vm.md) is a built-in module in **Microsoft Defender for Endpoint**. The module helps you identify and discover vulnerabilities and misconfigurations in real-time. The module also helps you prioritize findings, presenting them in a dashboard and in reports across devices, VMs, and databases. |
+
+## Learn more
+
+* [Zero Trust Pillar: Devices, Data, Application, Visibility, Automation and Orchestration](/security/zero-trust/zero-trust-overview)
+
+* [Zero Trust Pillar: Data, Visibility, Automation and Orchestration](/security/zero-trust/zero-trust-overview)
+
+## Next steps
+
+* [Access Controls Safeguard guidance](hipaa-access-controls.md)
+
+* [Audit Controls Safeguard guidance](hipaa-audit-controls.md)
+
+* [Other Safeguard guidance](hipaa-other-controls.md)
active-directory Hipaa Configure Azure Active Directory For Compliance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/standards/hipaa-configure-azure-active-directory-for-compliance.md
+
+ Title: Configure Azure Active Directory for HIPAA compliance
+description: Introduction for guidance on how to configure Azure Active Directory for HIPAA compliance level.
+++++++++ Last updated : 04/13/2023++++
+# Configuring Azure Active Directory for HIPAA compliance
+
+Microsoft services such as Azure Active Directory (Azure AD) can help you meet identity-related requirements for the Health Insurance Portability and Accountability Act of 1996 (HIPAA).
+
+The HIPAA Security Rule (HSR) establishes national standards to protect individuals' electronic personal health information that is created, received, used, or maintained by a covered entity. The HSR is managed by the U.S. Department of Health and Human Services (HHS) and requires appropriate administrative, physical, and technical safeguards to ensure the confidentiality, integrity, and security of electronic protected health information.
+
+Technical safeguards requirements and objectives are defined in Title 45 of the Code of Federal Regulations (CFRs). Part 160 of Title 45 provides the general administrative requirements, and Part 164's subparts A and C describe the security and privacy requirements.
+
+Subpart § 164.304 defines technical safeguards as the technology and the policies and procedures for its use that protect electronic protected health information and control access to it. The HHS also outlines key areas for healthcare organizations to consider when implementing HIPAA technical safeguards. From [§ 164.312 Technical safeguards](https://www.ecfr.gov/current/title-45/section-164.312):
+
+* **Access controls** - Implement technical policies and procedures for electronic information systems that maintain electronic protected health information to allow access only to those persons or software programs that have been granted access rights as specified in [§ 164.308(a)(4)](https://www.ecfr.gov/current/title-45/section-164.308).
+
+* **Audit controls** - Implement hardware, software, and/or procedural mechanisms that record and examine activity in information systems that contain or use electronic protected health information.
+
+* **Integrity controls** - Implement policies and procedures to protect electronic protected health information from improper alteration or destruction.
+
+* **Person or entity authentication** - Implement procedures to verify that a person or entity seeking access to electronic protected health information is the one claimed.
+
+* **Transmission security** - Implement technical security measures to guard against unauthorized access to electronic protected health information that is being transmitted over an electronic communications network.
+
+The HSR defines subparts as standards, along with required and addressable implementation specifications. All must be implemented. The "addressable" designation denotes that a specification is reasonable and appropriate. Addressable doesn't mean that an implementation specification is optional. Therefore, subparts that are defined as addressable are also required.
+
+The remaining articles in this series provide guidance and links to resources, organized by key areas and technical safeguards. For each key area, there's a table with the relevant safeguards listed, and links to Azure Active Directory (Azure AD) guidance to accomplish the safeguard.
+
+## Learn more
+
+* [HHS Zero Trust in Healthcare pdf](https://www.hhs.gov/sites/default/files/zero-trust.pdf)
+
+* [Combined regulation text](https://www.hhs.gov/ocr/privacy/hipaa/administrative/combined/index.html) of all HIPAA Administrative Simplification Regulations found at 45 CFR 160, 162, and 164
+
+* [Code of Federal Regulations (CFR) Title 45](https://www.ecfr.gov/current/title-45) describing the public welfare portion of the regulation
+
+* [Part 160](https://www.ecfr.gov/current/title-45/subtitle-A/subchapter-C/part-160?toc=1) describing the general administrative requirements of Title 45
+
+* [Part 164](https://www.ecfr.gov/current/title-45/subtitle-A/subchapter-C/part-164) Subparts A and C describing the security and privacy requirements of Title 45
+
+* [HIPAA Security Risk Safeguard Tool](https://www.healthit.gov/providers-professionals/security-risk-assessment-tool)
+
+* [NIST HSR Toolkit](http://scap.nist.gov/hipaa/)
+
+## Next steps
+
+* [Access Controls Safeguard guidance](hipaa-access-controls.md)
+
+* [Audit Controls Safeguard guidance](hipaa-audit-controls.md)
+
+* [Other Safeguard guidance](hipaa-other-controls.md)
active-directory Hipaa Other Controls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/standards/hipaa-other-controls.md
+
+ Title: Configure Azure Active Directory HIPAA additional safeguards
+description: Guidance on how to configure Azure Active Directory HIPAA additional control safeguards
+++++++++ Last updated : 04/13/2023++++
+# Other safeguard guidance
+
+Azure Active Directory (Azure AD) meets identity-related practice requirements for implementing Health Insurance Portability and Accountability Act of 1996 (HIPAA) safeguards. To be HIPAA compliant, it's the responsibility of companies to implement the safeguards using this guidance along with any other configurations or processes needed. This article contains guidance for achieving HIPAA compliance for the following three controls:
+
+* Integrity Safeguard
+* Person or Entity Authentication Safeguard
+* Transmission Security Safeguard
+
+## Integrity safeguard guidance
+
+Azure Active Directory meets identity-related practice requirements for implementing HIPAA safeguards. To be HIPAA compliant, implement the safeguards using this guidance along with any other configurations or processes needed.
+
+For the **Data Modification Safeguard**:
+
+* Protect files and emails, across all devices.
+
+* Discover and classify sensitive data.
+
+* Encrypt documents and emails that contain sensitive or personal data.
+
+The following content provides the guidance from HIPAA followed by a table with Microsoft's recommendations and guidance.
+
+**HIPAA - integrity**
+
+```Implement security measures to ensure that electronically transmitted electronic protected health information isn't improperly modified without detection until disposed of.```
+
+| Recommendation | Action |
+| - | - |
+| Enable Microsoft Purview Information Protection (IP) | Discover, classify, protect, and govern sensitive data, covering storage and data transmitted.</br>Protecting your data through [Microsoft Purview IP](/microsoft-365/compliance/information-protection-solution) helps determine the data landscape, review the framework and take active steps to identify and protect your data. |
+| Configure Exchange In-place hold | Exchange Online provides several settings to support eDiscovery. [In-place hold](/exchange/security-and-compliance/in-place-ediscovery/assign-ediscovery-permissions) uses specific parameters on what items should be held. The decision matrix can be based on keywords, senders, recipients, and dates.</br>[Microsoft Purview eDiscovery solutions](/microsoft-365/compliance/ediscovery) is part of the Microsoft Purview compliance portal and covers all Microsoft 365 data sources. |
+| Configure Secure/Multipurpose Internet Mail extension on Exchange Online | [S/MIME](/microsoft-365/compliance/email-encryption) is a protocol that is used for sending digitally signed and encrypted messages. It's based on asymmetric key pairing, a public and private key.</br>[Exchange Online](/exchange/security-and-compliance/smime-exo/configure-smime-exo) provides encryption and protection of the content of the email and signatures that verify the identity of the sender. |
+| Enable monitoring and logging. | [Logging and monitoring](/security/benchmark/azure/security-control-logging-monitoring) are essential to securing an environment. The information is used to support investigations and help detect potential threats by identifying unusual patterns. Enable logging and monitoring of services to reduce the risk of unauthorized access.</br>[Microsoft Purview](/microsoft-365/compliance/audit-solutions-overview) auditing provides visibility into audited activities across services in Microsoft 365. It helps investigations by increasing audit log retention. |
+
+## Person or entity authentication safeguard guidance
+
+Azure Active Directory meets identity-related practice requirements for implementing HIPAA safeguards. To be HIPAA compliant, implement the safeguards using this guidance along with any other configurations or processes needed.
+
+For the Audit and Person and Entity Safeguard:
+
+* Ensure that the end user claim is valid for data access.
+
+* Identify and mitigate any risks for data that is stored.
+
+The following content provides the guidance from HIPAA followed by a table with Microsoft's recommendations and guidance.
+
+**HIPAA - person or entity authentication**
+
+```Implement procedures to verify that a person or entity seeking access to electronic protected health information is the one claimed.```
+
+Ensure that users and devices that access ePHI data are authorized. You must ensure devices are compliant and actions are audited to flag risks to the data owners.
+
+| Recommendation | Action |
+| - | - |
+| Enable multi-factor authentication (MFA) | [Azure AD Multi-Factor Authentication](../authentication/concept-mfa-howitworks.md) protects identities by adding an extra layer of security. The extra layer provides an effective way to prevent unauthorized access. MFA requires more validation of sign-in credentials during the authentication process. Setting up the [Authenticator app](https://support.microsoft.com/account-billing/set-up-an-authenticator-app-as-a-two-step-verification-method-2db39828-15e1-4614-b825-6e2b524e7c95) provides one-click verification, or you can configure [Azure AD passwordless configuration](../authentication/concept-authentication-passwordless.md). |
+| Enable Conditional Access policies | [Conditional Access](../conditional-access/concept-conditional-access-policies.md) policies help to restrict access to only approved applications. Azure AD analyses signals from either the user, device, or the location to automate decisions and enforce organizational policies for access to resources and data. |
+| Set up device-based Conditional Access policies | [Conditional Access with Microsoft Intune](/mem/intune/protect/conditional-access) for device management and Azure AD policies can use device status to either grant or deny access to your services and data. By deploying device compliance policies, you can determine whether a device meets security requirements and then allow or deny access to resources accordingly. |
+| Use role-based access control (RBAC) | [RBAC in Azure AD](../roles/custom-overview.md) provides security on an enterprise level, with separation of duties. Adjust and review permissions to protect confidentiality, privacy and access management to resources and sensitive data, with the systems.</br>Azure AD provides support for [built-in roles](../roles/permissions-reference.md), which is a fixed set of permissions that can't be modified. You can also create your own [custom roles](../roles/custom-create.md) where you can add a preset list. |
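The MFA row above implies that you can check which authentication methods a user has registered before trusting their claim to ePHI access. The sketch below (illustrative only, not part of the committed article) lists a user's registered authentication methods through Microsoft Graph; the UPN is a placeholder and the token is assumed to have UserAuthenticationMethod.Read.All.

```python
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
TOKEN = "<access-token>"  # assumption: token with UserAuthenticationMethod.Read.All
HEADERS = {"Authorization": f"Bearer {TOKEN}"}

# List the authentication methods registered for a placeholder user, for example
# to confirm they have a method stronger than a password before granting access.
resp = requests.get(
    f"{GRAPH}/users/jane.doe@contoso.com/authentication/methods",
    headers=HEADERS,
)
resp.raise_for_status()
for method in resp.json().get("value", []):
    print(method["@odata.type"])
```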
+
+## Transmission security safeguard guidance
+
+Azure Active Directory meets identity-related practice requirements for implementing HIPAA safeguards. To be HIPAA compliant, implement the safeguards using this guidance along with any other configurations or processes needed.
+
+For encryption:
+
+* Protect data confidentiality.
+
+* Prevent data theft.
+
+* Prevent unauthorized access to PHI.
+
+* Ensure encryption level on data.
+
+To protect transmission of PHI data:
+
+* Protect sharing of PHI data.
+
+* Protect access to PHI data.
+
+* Ensure data transmitted is encrypted.
+
+The following content provides a list of the Audit and Transmission Security Safeguard guidance from the HIPAA guidance and Microsoft's recommendations to enable you to meet the safeguard implementation requirements with Azure AD.
+
+**HIPAA - encryption**
+
+```Implement a mechanism to encrypt and decrypt electronic protected health information.```
+
+Ensure that ePHI data is encrypted and decrypted with the compliant encryption key/process.
+
+| Recommendation | Action |
+| - | - |
+| Review Microsoft 365 encryption points | [Encryption with Microsoft Purview in Microsoft 365](/microsoft-365/compliance/encryption) is a highly secure environment that offers extensive protection in multiple layers: the physical data center, security, network, access, application, and data security. </br>Review the encryption list and amend if more control is required. |
+| Review database encryption | [Transparent data encryption](/sql/relational-databases/security/encryption/transparent-data-encryption?view=sql-server-ver16&preserve-view=true) adds a layer of security to help protect data at rest from unauthorized or offline access. It encrypts the database using AES encryption.</br>[Dynamic data masking for sensitive data](/azure/azure-sql/database/dynamic-data-masking-overview) limits sensitive data exposure by masking the data for nonauthorized users. The masking includes designated fields, which you define in a database schema name, table name, and column name. </br>New databases are encrypted by default, and the database encryption key is protected by a built-in server certificate. We recommend you review databases to ensure encryption is set on the data estate. |
+| Review Azure Encryption points | [Azure encryption capability](../../security/fundamentals/encryption-overview.md) covers major areas from data at rest, encryption models, and key management using Azure Key Vault. Review the different encryption levels and how they match to scenarios within your organization. |
+| Assess data collection and retention governance | [Microsoft Purview Data Lifecycle Management](/microsoft-365/compliance/data-lifecycle-management) enables you to apply retention policies. [Microsoft Purview Records Management](/microsoft-365/compliance/get-started-with-records-management) enables you to apply retention labels. This strategy helps you gain visibility into assets across the entire data estate. This strategy also helps you safeguard and manage sensitive data across clouds, apps, and endpoints.</br>**Important:** As noted in [45 CFR 164.316](https://www.ecfr.gov/current/title-45/subtitle-A/subchapter-C/part-164/subpart-C/section-164.316): **Time limit (Required)**. Retain the documentation required by [paragraph (b)(1)](https://www.ecfr.gov/current/title-45/section-164.316) of this section for six years from the date of creation, or the date when it last was in effect, whichever is later. |
+
+**HIPAA - protect transmission of PHI data**
+
+```Implement technical security measures to guard against unauthorized access to electronic protected health information that is being transmitted over an electronic communications network.```
+
+Establish policies and procedures to protect data exchange that contains PHI data.
+
+| Recommendation | Action |
+| - | - |
+| Assess the state of on-premises applications | [Azure AD Application Proxy](../app-proxy/what-is-application-proxy.md) implementation publishes on-premises web applications externally and in a secure manner.</br>Azure AD Application Proxy enables you to securely publish an external URL endpoint into Azure. |
+| Enable multi-factor authentication (MFA) | [Azure AD MFA](../authentication/concept-mfa-howitworks.md) protects identities by adding a layer of security. Adding more layers of security is an effective way to prevent unauthorized access. MFA enables the requirement of more validation of sign in credentials during the authentication process. You can configure the [Authenticator](https://support.microsoft.com/account-billing/set-up-an-authenticator-app-as-a-two-step-verification-method-2db39828-15e1-4614-b825-6e2b524e7c95) app to provide one-click verification or passwordless authentication. |
+| Enable conditional access policies for application access | [Conditional Access](../conditional-access/concept-conditional-access-policies.md) policies help restrict access to approved applications. Azure AD analyses signals from either the user, device, or the location to automate decisions and enforce organizational policies for access to resources and data. |
+| Review Exchange Online Protection (EOP) policies | [Exchange Online spam and malware protection](/office365/servicedescriptions/exchange-online-protection-service-description/exchange-online-protection-feature-details?tabs=Anti-spam-and-anti-malware-protection) provides built-in malware and spam filtering. EOP protects inbound and outbound messages and is enabled by default. EOP services also provide anti-spoofing, quarantining messages, and the ability to report messages in Outlook.</br>The policies can be customized to fit company-wide settings; these take precedence over the default policies. |
+| Configure sensitivity labels | [Sensitivity labels](/microsoft-365/compliance/sensitivity-labels-teams-groups-sites) from Microsoft Purview enable you to classify and protect your organization's data. The labels provide protection settings from documents to containers. For example, the tool protects documents that are stored in Microsoft Teams and SharePoint sites, to set and enforce privacy settings. Extend labels to files and data assets such as SQL, Azure SQL, Azure Synapse, Azure Cosmos DB, and AWS RDS.</br>Beyond the 200 out-of-the-box sensitive info types, there are advanced classifiers such as named entities, trainable classifiers, and EDM to protect custom sensitive types. |
+| Assess whether a private connection is required to connect to services | [Azure ExpressRoute](../../expressroute/expressroute-introduction.md) creates private connections between cloud-based Azure datacenters and infrastructure that resides on-premises. Data isn't transferred over the public internet. </br>The service uses layer 3 connectivity, connects the edge router, and provides dynamic scalability. |
+| Assess VPN requirements | [VPN Gateway documentation](../../vpn-gateway/vpn-gateway-about-vpngateways.md) connects an on-premises network to Azure through site-to-site, point-to-site, VNet-to-VNet and multisite VPN connection.</br>The service supports hybrid work environments by providing secure data transit. |
+
+## Learn more
+
+* [Zero Trust Pillar: Data](/security/zero-trust/zero-trust-overview)
+
+* [Zero Trust Pillar: Identity, Networks, Infrastructure, Data, Applications](/security/zero-trust/zero-trust-overview)
+
+## Next steps
+
+* [Access Controls Safeguard guidance](hipaa-access-controls.md)
+
+* [Audit Controls Safeguard guidance](hipaa-audit-controls.md)
+
+* [Other Safeguard guidance](hipaa-other-controls.md)
+
active-directory How To Opt Out https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/how-to-opt-out.md
Title: Opt out of the Microsoft Entra Verified ID
+ Title: Opt out of Microsoft Entra Verified ID
description: Learn how to Opt Out of Entra Verified ID documentationCenter: ''
#Customer intent: As an administrator, I am looking for information to help me disable
-# Opt out of the verifiable credentials
+# Opt out of Verified ID service
[!INCLUDE [Verifiable Credentials announcement](../../../includes/verifiable-credentials-brand.md)]
In this article:
## When do you need to opt out?
-Opting out is a one-way operation, after you opt-out your Entra Verified ID environment will be reset. Opting out may be required to:
+Opting out is a one-way operation. After you opt-out, your Entra Verified ID environment is reset. Opting out may be required to:
- Enable new service capabilities. - Reset your service configuration. - Switch between trust systems ION and Web
-## What happens to your data when you opt-out?
+## What happens to your data?
-When you complete opting out of the Microsoft Entra Verified ID service, the following actions will take place:
+When you complete opting out of the Microsoft Entra Verified ID service, the following actions take place:
-- The DID keys in Key Vault will be [soft deleted](../../key-vault/general/soft-delete-overview.md).-- The issuer object will be deleted from our database.-- The tenant identifier will be deleted from our database.-- All of the verifiable credentials contracts will be deleted from our database.
+- The DID keys in Key Vault are [soft deleted](../../key-vault/general/soft-delete-overview.md).
+- The issuer object is deleted from our database.
+- The tenant identifier is deleted from our database.
+- All of the verifiable credentials contracts are deleted from our database.
-Once an opt-out takes place, you won't be able to recover your DID or conduct any operations on your DID. This step is a one-way operation, and you need to opt in again, which results in a new environment being created.
+Once an opt-out takes place, you can't recover your DID or conduct any operations on your DID. This step is a one-way operation and you need to onboard again. Onboarding again results in a new environment being created.
## Effect on existing verifiable credentials
-All verifiable credentials already issued will continue to exist. They won't be cryptographically invalidated as your DID will remain resolvable through ION.
-However, when relying parties call the status API, they will always receive back a failure message.
+All verifiable credentials already issued will continue to exist. For the ION trust system, they will not be cryptographically invalidated as your DID remains resolvable through ION.
+However, when relying parties call the status API, they always receive a failure message.
## How to opt-out from the Microsoft Entra Verified ID service?
active-directory Remote Onboarding New Employees Id Verification https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/remote-onboarding-new-employees-id-verification.md
+
+ Title: Onboard new remote employees using ID verification
+description: A design pattern describing how to onboard new employees remotely
++++++ Last updated : 04/06/2023++++
+# Onboard new remote employees using ID verification
+
+Enterprises face significant challenges when onboarding remote users who are not yet inside the trust boundary. Microsoft Entra Verified ID can help customers facing these scenarios because it can use attestations based on government-issued IDs as a way to establish trust.
+
+## When to use this pattern
+
+- You have a modern Human resources (HR) system with API support.
+- Your HR system allows programmatic integration to query the HR system to do a reliable matching of user profiles.
+- Your organization has already started their passwordless journey.
+
+## Solution
+
+1. A custom portal for new employee onboarding.
+
+2. A backend job provides new hires with a uniquely identifiable link to the employee onboarding portal from (A) that represents the new hire's specific process. For this use case, the account for the new hire should already be provisioned in Azure AD. Consider using [Lifecycle Workflows](../governance/what-are-lifecycle-workflows.md) as the triggering point of this flow.
+
+3. New hires select the link to the portal in (A) above and are guided through a wizard-like experience:
+    1. New Hires are redirected to acquire a verified ID from the identity verification partner (also referred to as an IDV; to learn more about the identity verification partners, see <https://aka.ms/verifiedidisv>)
+ 2. New Hires present the Verified ID acquired in Step 1
+    3. System receives the claims from the identity verification partner, looks up the user account for the new hire, and performs the validation.
+    4. System executes the onboarding logic to locate the Azure AD account of the user, and [generate a temporary access pass using MS Graph](/graph/api/resources/temporaryaccesspassauthenticationmethod?view=graph-rest-1.0&preserve-view=true). A hedged sketch of this Graph call appears after the flow diagram below.
+
+![Diagram showing a high-level flow.](media/remote-onboarding-new-employees-id-verification/high-level-flow-diagram.png)
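The following Python sketch (illustrative only, not part of the committed article) shows the Temporary Access Pass call referenced in step 4 of the solution: after the Verified ID claims are matched to the new hire's pre-provisioned Azure AD account, the backend issues a short-lived, single-use pass that the new hire uses to register passwordless credentials. The UPN is a placeholder, and the token is assumed to have UserAuthenticationMethod.ReadWrite.All.

```python
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
TOKEN = "<access-token>"  # assumption: token with UserAuthenticationMethod.ReadWrite.All
HEADERS = {"Authorization": f"Bearer {TOKEN}", "Content-Type": "application/json"}

# Issue a one-hour, single-use Temporary Access Pass for a placeholder new hire.
tap_request = {"lifetimeInMinutes": 60, "isUsableOnce": True}
resp = requests.post(
    f"{GRAPH}/users/new.hire@contoso.com/authentication/temporaryAccessPassMethods",
    json=tap_request,
    headers=HEADERS,
)
resp.raise_for_status()
print("Temporary Access Pass:", resp.json()["temporaryAccessPass"])
```

Deliver the returned pass to the new hire over a channel tied to the onboarding link, and keep its lifetime short.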
+
+## Issues and considerations
+
+- The link used to initiate the process needs to meet some criteria:
+ - The link should be specific to each remote employee.
+ - The link should be valid for only a short period of time.
+ - It should become invalid after a user finishes going through the flow.
+ - The link should be designed to correlate to a unique HR record identifier
+- An Azure AD account should be pre-created for every user. The account should be used as part of the site's request validation process.
+- Administrators frequently deal with discrepancies between users' information held in a company's IT systems, like human resource applications or identity management solutions, and the information the users provide. For example, an employee might have "James" as their first name but their profile has their name as "Jim". For those scenarios:
+ 1. At the beginning of the HR process, candidates must use their name exactly as it appears in government issued documents. Taking this approach simplifies validation logic.
+ 1. Design validation logic to include attributes that are more likely to have an exact match against the HR system. Common attributes include street address, date of birth, nationality, national identification number (if applicable), in addition to first and last name.
+ 1. As a fallback, plan for human review to work through ambiguous/non-conclusive results. This process might include temporarily storing the attributes presented in the VC, phone call with the user, etc.
+- Multinational organizations may need to work with different identity proofing partners based on the region of the user.
+- Assume that the initial interaction between the user and the onboarding partner is untrusted. The onboarding portal should generate detailed logs for all requests processed that could be used for auditing purposes.
+
+## Additional resources
+
+- Public architecture document for generalized account onboarding: [Plan your Microsoft Entra Verified ID verification solution](plan-verification-solution.md#account-onboarding)
active-directory Using Authenticator https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/using-authenticator.md
+
+ Title: Tutorial - Set up and use Microsoft Authenticator with VerifiedID
+description: In this tutorial, you learn how to install and use Microsoft Authenticator for VerifiedID
++++++ Last updated : 04/06/2022
+# Customer intent: As an enterprise, we want to enable customers to manage information about themselves by using verifiable credentials.
+++
+# Using the Microsoft Authenticator with Verified ID
++
+In this tutorial, you learn how to install the Microsoft Authenticator app and use it for the first time with Verified ID. You use the public end to end demo webapp to issue a verifiable credential to the Authenticator and present verifiable credentials from the Authenticator.
+
+In this article, you learn how to:
+
+> [!div class="checklist"]
+>
+> - Install Microsoft Authenticator on your mobile device
+> - Use the Microsoft Authenticator for the first time
+> - Issue a verifiable credential from the public end to end demo webapp to the Authenticator
+> - Present a verifiable credential from the Authenticator to the public end to end demo webapp
+> - View activity details of when and where you've presented your verifiable credentials
+> - Delete a verifiable credential from your Authenticator
+
+## Install Microsoft Authenticator on your mobile device
+
+If you already have Microsoft Authenticator installed, you can skip this section. If you need to install it, follow these instructions, but make sure you install **Microsoft Authenticator** and not another app with the name Authenticator, as there are multiple apps sharing that name.
+
+- On iPhone, open the [App Store](https://support.apple.com/HT204266) app and search for **Microsoft Authenticator** and install the app.
+ ![Screenshot of Apple App Store.](media/using-authenticator/apple-appstore.png)
+
+- On Android, open the [Google Play](https://play.google.com/about/howplayworks/) app and search for **Microsoft Authenticator** and install the app.
+ ![Screenshot of Google Play.](media/using-authenticator/google-play.png)
+
+## Use the Microsoft Authenticator for the first time
+
+Using the Authenticator for the first time presents a set of screens that you have to navigate through in order to be ready to work with Verified ID.
+
+1. Open the Authenticator app and press **Accept** on the first screen.
+
+ ![Screenshot of Accept screen.](media/using-authenticator/accept-screen.png)
+
+2. Select your choice of sharing app usage data and press **Continue**.
+
+ ![Screenshot of app usage data screen.](media/using-authenticator/app-usage-sharing-screen.png)
+
+3. Press **Skip** in the upper right corner of the screen asking you to **Sign in with Microsoft**.
+
+ ![Screenshot of Skip Sign in with Microsoft screen.](media/using-authenticator/skip-signin-with-microsoft-screen.png)
+
+## Issue a verifiable credential
+
+When the Microsoft Authenticator app is installed and ready, you use the public end to end demo webapp to issue your first verifiable credential onto the Authenticator.
+
+1. Open [end to end demo](http://woodgroveemployee.azurewebsites.net/) in your browser
+ 1. Enter your First Name and Last Name and press **Next**
+ 1. Select **Verify with True Identity**
+ 1. Click **Take a selfie** and **Upload government issued ID**. The demo uses simulated data and you don't need to provide a real selfie or an ID.
+ 1. Click **Next** and **OK**
+2. Open your Microsoft Authenticator app
+3. Select **Verified IDs** in the lower right corner on the start screen
+4. Select **Scan QR code** button. This screen only shows if you have no verifiable credential cards in the app.
+
+ ![Screenshot of scan qr code screen.](media/using-authenticator/scan-qr-code-screen.png)
+
+5. If this is the first time you scan a QR code, the mobile device notifies you that the Authenticator is trying to access the camera. Select **OK** to continue scanning the QR code.
+
+ ![Screenshot of access camera screen.](media/using-authenticator/access-camera-screen.png)
+
+6. Scan the QR code and enter the pin code in the Authenticator and select **Next**. The pin code is shown in the browser page.
+
+ ![Screenshot of entering pin code screen.](media/using-authenticator/enter-pin-code-screen.png)
+
+7. Select **Add** to add the verifiable credential card to the Authenticator wallet.
+
+ ![Screenshot of add VC card screen.](media/using-authenticator/add-card-screen.png)
+
+8. Select **Return to Woodgrove** in the browser
+
+Note the following.
+
+- After you've scanned the QR code, the Authenticator displays who the issuing party is for the verifiable credential. In the above screenshots, you can see that it's **True Identity** and that the issuance request comes from the verified domain **did.woodgrovedemo.com**. As a user, it's your choice whether you trust this issuing party.
+- Not all issuance requests involve a pin code. It's up to the issuing party to decide whether to require one.
+- The purpose of the pin code is to add an extra level of security to the issuance process, so that only you, the intended recipient, can complete the issuance of the verifiable credential.
+- The demo displays the pin code in the browser page next to the QR code. In a real world scenario, the pin code wouldn't be displayed there, but instead be given to you in some alternate way, like in an email or an SMS text message.
+
+## Present a verifiable credential
+
+To learn how to present a verifiable credential, you continue where you left off above. Here, you'll present the True Identity verifiable credential to the demo webapp. Make sure you have a **True Identity** verifiable credential in the Authenticator before continuing.
+
+1. If you're continuing where you left off, select **Access personalized portal** in the end to end demo webapp. If you have the True Identity verifiable credential in the Authenticator but closed the browser, first select **I've been verified** in the [end to end](https://woodgroveemployee.azurewebsites.net/verification) demo webapp and then select **Access personalized portal**. Selecting **Access personalized portal** presents a QR code on the webpage.
+2. Open your Microsoft Authenticator app
+3. Select **Verified IDs** in the lower right corner on the start screen
+4. Press the **QR code symbol** in the top right corner to turn on the camera and scan the QR code.
+5. Select **Share** in the Authenticator to present the verifiable credential to the end to end demo webapp.
+
+ ![Screenshot of sharing a VC card screen.](media/using-authenticator/share-card-screen.png)
+
+6. In the browser, click the **Continue onboarding** button
+
+Note the following.
+
+- After you've scanned the QR code, the Authenticator displays who the verifying party is for the verifiable credential. In the above screenshots, you can see that it's **True Identity** and that the presentation request comes from the verified domain **did.woodgrovedemo.com**. As a user, it's your choice whether you trust this party and want to share your credential with them.
+- If the presentation request doesn't match any of the verifiable credentials you have in the Authenticator, you get a message that you don't have the requested credentials.
+- If the presentation request matches multiple verifiable credentials in the Authenticator, you're asked to pick the one you want to share.
+- If you have an expired verifiable credential that matches the presentation request, you get a message that it's expired and that you can't share the requested credential.
+
+## Continue onboarding in the end to end demo
+
+The end to end demo continues with onboarding you as a new employee to the Woodgrove company. Continuing with the demo repeats the process of issuance and presentation in the Authenticator. Follow these steps to continue the onboarding process.
+
+### Issue yourself a Woodgrove employee verifiable credential
+
+1. Select **Retrieve my Verified ID** in the browser. This displays a QR code in the webpage.
+1. Press the **QR code symbol** in the top right corner of the Authenticator to turn on the camera
+1. Scan the QR code and enter the pin code in the Authenticator and select **Next**. The pin code is shown in the browser page.
+1. Select **Add** to add the verifiable credential card to the Authenticator wallet.
+
+### Use your Woodgrove employee verifiable credential to get a laptop
+
+1. Select **Visit Proseware** in the browser.
+1. Select **Access discounts** in the browser.
+1. Select **Verify my Employee Credential** in the browser.
+1. Press the **QR code symbol** in the top right corner of the Authenticator to turn on the camera and scan the QR code.
+1. Select **Share** in the Authenticator to present the verifiable credential to the **Proseware** webapp.
+1. Notice that Woodgrove employee discounts are applied to the prices once Proseware has verified your credentials.
+
+## View activity details of when and where you have presented your verifiable credentials
+
+The Microsoft Authenticator keeps records of the activity for your verifiable credentials.
+If you select a credential card and then switch to the **Activity** view, you see the activity list for that credential, sorted by most recent use. For your True Identity card, you see two entries: the first shows when the credential was issued, and the second shows when it was shared with Woodgrove.
+
+![Screenshot of VC activity screen.](media/using-authenticator/card-activity-screen.png)
+
+## Delete a verifiable credential from your Authenticator
+
+You can delete a verifiable credential from the Microsoft Authenticator.
+Click on the credential card you want to delete to view its details. Then click on the trash can in the upper right corner and confirm the deletion prompt.
+
+![Screenshot of delete VC screen.](media/using-authenticator/delete-card-screen.png)
+
+Deleting a verifiable credential from the Authenticator is irreversible, and there's no recycle bin to restore it from. If you delete a credential, you must go through the issuance process again.
+
+## How do I see the version number of the Microsoft Authenticator app
+
+1. On iPhone, click the three vertical bars in the top left corner
+1. On Android, click the three vertical dots in the top right corner
+1. Select **Help** to display your version number
+
+## How to provide diagnostics data to a Microsoft Support representative
+
+If you're asked to provide diagnostics data from the Microsoft Authenticator app during a Microsoft support case, follow these steps.
+
+1. On iPhone, click the three vertical bars in the top left corner
+1. On Android, click the three vertical dots in the top right corner
+1. Select **Send Feedback** and then **Having trouble?**
+1. Select **Select an option** and then select **Verified IDs**
+1. Enter some text in the **Describe the issue** textbox
+1. Click **Send** on iPhone or the arrow on Android in the top right corner
+
+## Next steps
+
+Learn how to [configure your tenant for Microsoft Entra Verified ID](verifiable-credentials-configure-tenant.md).
aks Dapr Workflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/dapr-workflow.md
+
+ Title: Deploy and run workflows with the Dapr extension for Azure Kubernetes Service (AKS)
+description: Learn how to deploy and run Dapr Workflow on your Azure Kubernetes Service (AKS) clusters via the Dapr extension.
+++++ Last updated : 04/05/2023+++
+# Deploy and run workflows with the Dapr extension for Azure Kubernetes Service (AKS)
+
+With Dapr Workflow, you can easily orchestrate messaging, state management, and failure-handling logic across various microservices. Dapr Workflow can help you create long-running, fault-tolerant, and stateful applications.
+
+In this guide, you use the [provided order processing workflow example][dapr-workflow-sample] to:
+
+> [!div class="checklist"]
+> - Create an Azure Container Registry and an AKS cluster for this sample.
+> - Install the Dapr extension on your AKS cluster.
+> - Deploy the sample application to AKS.
+> - Start and query workflow instances using HTTP API calls.
+
+The workflow example is an ASP.NET Core project with:
+- A [`Program.cs` file][dapr-program] that contains the setup of the app, including the registration of the workflow and workflow activities.
+- Workflow definitions found in the [`Workflows` directory][dapr-workflow-dir].
+- Workflow activity definitions found in the [`Activities` directory][dapr-activities-dir].
+
+> [!NOTE]
+> Dapr Workflow is currently an [alpha][dapr-workflow-alpha] feature and is on a self-service, opt-in basis. Alpha Dapr APIs and components are provided "as is" and "as available," and are continually evolving as they move toward stable status. Alpha APIs and components are not covered by customer support.
+
+## Prerequisites
+
+- An [Azure subscription](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) with Owner or Admin role.
+- The latest version of the [Azure CLI][install-cli]
+- Latest [Docker][docker]
+- Latest [Helm][helm]
+
+## Set up the environment
+
+### Clone the sample project
+
+Clone the example workflow application.
+
+```sh
+git clone https://github.com/Azure/dapr-workflows-aks-sample.git
+```
+
+Navigate to the sample's root directory.
+
+```sh
+cd dapr-workflows-aks-sample
+```
+
+### Create a Kubernetes cluster
+
+Create a resource group to hold the AKS cluster.
+
+```sh
+az group create --name myResourceGroup --location eastus
+```
+
+Create an AKS cluster.
+
+```sh
+az aks create --resource-group myResourceGroup --name myAKSCluster --node-count 2 --generate-ssh-keys
+```
+
+[Make sure `kubectl` is installed and pointed to your AKS cluster.][kubectl] If you use [the Azure Cloud Shell][az-cloud-shell], `kubectl` is already installed.
+
+For more information, see the [Deploy an AKS cluster][cluster] tutorial.
+
+## Deploy the application to AKS
+
+### Install Dapr on your AKS cluster
+
+Install the Dapr extension on your AKS cluster. Before you start, make sure you've:
+- [Installed or updated the `k8s-extension`][k8s-ext].
+- [Registered the `Microsoft.KubernetesConfiguration` service provider][k8s-sp]
+
+```sh
+az k8s-extension create --cluster-type managedClusters --cluster-name myAKSCluster --resource-group myResourceGroup --name dapr --extension-type Microsoft.Dapr
+```
+
+Verify Dapr has been installed by running the following command:
+
+```sh
+kubectl get pods -A
+```
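+
+As a narrower check, the following sketch assumes the extension placed the Dapr control plane in the `dapr-system` namespace (the default for the Dapr extension); adjust the namespace if your installation differs. You should see pods for Dapr services such as `dapr-operator`, `dapr-sidecar-injector`, `dapr-sentry`, and `dapr-placement-server`.
+
+```sh
+# List only the Dapr control plane pods (assumes the default dapr-system namespace)
+kubectl get pods -n dapr-system
+```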
+
+### Deploy the Redis Actor state store component
+
+Navigate to the `Deploy` directory in your cloned copy of the sample:
+
+```sh
+cd Deploy
+```
+
+Deploy the Redis component:
+
+```sh
+helm repo add bitnami https://charts.bitnami.com/bitnami
+helm install redis bitnami/redis
+kubectl apply -f redis.yaml
+```
+
+### Run the application
+
+Once you've deployed Redis, deploy the application to AKS:
+
+```sh
+kubectl apply -f deployment.yaml
+```
+
+Expose the Dapr sidecar and the sample app:
+
+```sh
+kubectl apply -f service.yaml
+export APP_URL=$(kubectl get svc/workflows-sample -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
+export DAPR_URL=$(kubectl get svc/workflows-sample-dapr -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
+```
+
+Verify that the environment variables were exported:
+
+```sh
+echo $APP_URL
+echo $DAPR_URL
+```
+
+## Start the workflow
+
+Now that the application and Dapr have been deployed to the AKS cluster, you can start and query workflow instances. Begin by making an API call to the sample app to restock items in the inventory:
+
+```sh
+curl -X GET $APP_URL/stock/restock
+```
+
+Start the workflow:
+
+```sh
+curl -X POST $DAPR_URL/v1.0-alpha1/workflows/dapr/OrderProcessingWorkflow/1234/start \
+ -H "Content-Type: application/json" \
+ -d '{ "input" : {"Name": "Paperclips", "TotalCost": 99.95, "Quantity": 1}}'
+```
+
+Expected output:
+
+```json
+{"instance_id":"1234"}
+```
+
+Check the workflow status:
+
+```sh
+curl -X GET $DAPR_URL/v1.0-alpha1/workflows/dapr/OrderProcessingWorkflow/1234
+```
+
+Expected output:
+
+```json
+{
+ "WFInfo":
+ {
+ "instance_id":"1234"
+ },
+ "start_time":"2023-03-03T19:19:16Z",
+ "metadata":
+ {
+ "dapr.workflow.custom_status":"",
+ "dapr.workflow.input":"{\"Name\":\"Paperclips\",\"Quantity\":1,\"TotalCost\":99.95}",
+ "dapr.workflow.last_updated":"2023-03-03T19:19:33Z",
+ "dapr.workflow.name":"OrderProcessingWorkflow",
+ "dapr.workflow.output":"{\"Processed\":true}",
+ "dapr.workflow.runtime_status":"COMPLETED"
+ }
+}
+```
+
+Notice that the workflow status is marked as completed.
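+
+If you need to stop a run before it completes, the alpha workflow API also exposes a terminate operation. The following call is a sketch that assumes the terminate route follows the same path pattern as the start and status calls above; verify the exact route against the Dapr workflow API reference for your Dapr version.
+
+```sh
+# Terminate the running workflow instance with ID 1234 (illustrative; path assumed from the calls above)
+curl -X POST $DAPR_URL/v1.0-alpha1/workflows/dapr/OrderProcessingWorkflow/1234/terminate
+```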
+
+## Next steps
+
+[Learn how to add configuration settings to the Dapr extension on your AKS cluster][dapr-config].
+
+<!-- Links Internal -->
+[deploy-cluster]: ./tutorial-kubernetes-deploy-cluster.md
+[install-cli]: /cli/azure/install-azure-cli
+[k8s-ext]: ./dapr.md#set-up-the-azure-cli-extension-for-cluster-extensions
+[cluster]: ./tutorial-kubernetes-deploy-cluster.md
+[k8s-sp]: ./dapr.md#register-the-kubernetesconfiguration-service-provider
+[dapr-config]: ./dapr-settings.md
+[az-cloud-shell]: ./learn/quick-kubernetes-deploy-powershell.md#azure-cloud-shell
+[kubectl]: ./tutorial-kubernetes-deploy-cluster.md#connect-to-cluster-using-kubectl
+
+<!-- Links External -->
+[dapr-workflow-sample]: https://github.com/Azure/dapr-workflows-aks-sample
+[dapr-program]: https://github.com/Azure/dapr-workflows-aks-sample/blob/main/Program.cs
+[dapr-workflow-dir]: https://github.com/Azure/dapr-workflows-aks-sample/tree/main/Workflows
+[dapr-activities-dir]: https://github.com/Azure/dapr-workflows-aks-sample/tree/main/Activities
+[dapr-workflow-alpha]: https://docs.dapr.io/operations/support/support-preview-features/#current-preview-features
+[deployment-yaml]: https://github.com/Azure/dapr-workflows-aks-sample/blob/main/Deploy/deployment.yaml
+[docker]: https://docs.docker.com/get-docker/
+[helm]: https://helm.sh/docs/intro/install/
aks Dapr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/dapr.md
Previously updated : 01/06/2023 Last updated : 03/06/2023 # Dapr extension for Azure Kubernetes Service (AKS) and Arc-enabled Kubernetes
-[Dapr](https://dapr.io/) is a portable, event-driven runtime that simplifies building resilient, stateless, and stateful applications that run on the cloud and edge and embrace the diversity of languages and developer frameworks. Applying the benefits of a sidecar architecture, Dapr helps you tackle the challenges that come with building microservices and keeps your code platform agnostic. In particular, it helps solve problems around
+As a portable, event-driven runtime, [Dapr](https://dapr.io/) simplifies building resilient, stateless, and stateful applications that run on the cloud and edge and embrace the diversity of languages and developer frameworks. With its sidecar architecture, Dapr helps you tackle the challenges that come with building microservices and keeps your code platform agnostic. In particular, it helps solve problems around
- Calling other services reliably and securely - Building event-driven apps with pub-sub - Building applications that are portable across multiple cloud services and hosts (for example, Kubernetes vs. a VM)
-[By using the Dapr extension to provision Dapr on your AKS or Arc-enabled Kubernetes cluster](../azure-arc/kubernetes/conceptual-extensions.md), you eliminate the overhead of downloading Dapr tooling and manually installing and managing the runtime on your AKS cluster. Additionally, the extension offers support for all [native Dapr configuration capabilities][dapr-configuration-options] through simple command-line arguments.
+[Using the Dapr extension to provision Dapr on your AKS or Arc-enabled Kubernetes cluster](../azure-arc/kubernetes/conceptual-extensions.md) eliminates the overhead of:
+- Downloading Dapr tooling
+- Manually installing and managing the runtime on your AKS cluster
+
+Additionally, the extension offers support for all [native Dapr configuration capabilities][dapr-configuration-options] through simple command-line arguments.
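+
+For example, here's a hedged sketch of what passing one of those capabilities looks like. It uses the `--configuration-settings` flag on `az k8s-extension create`, and `global.ha.enabled` is shown purely as an illustrative setting; see the Dapr extension settings article for the supported keys.
+
+```azurecli
+az k8s-extension create --cluster-type managedClusters \
+--cluster-name myAKSCluster \
+--resource-group myResourceGroup \
+--name dapr \
+--extension-type Microsoft.Dapr \
+--configuration-settings "global.ha.enabled=true"
+```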
> [!NOTE] > If you plan on installing Dapr in a Kubernetes production environment, see the [Dapr guidelines for production usage][kubernetes-production] documentation page. ## How it works
-The Dapr extension uses the Azure CLI to provision the Dapr control plane on your AKS or Arc-enabled Kubernetes cluster. This will create:
+The Dapr extension uses the Azure CLI to provision the Dapr control plane on your AKS or Arc-enabled Kubernetes cluster, creating the following Dapr services:
-- **dapr-operator**: Manages component updates and Kubernetes services endpoints for Dapr (state stores, pub/subs, etc.)-- **dapr-sidecar-injector**: Injects Dapr into annotated deployment pods and adds the environment variables `DAPR_HTTP_PORT` and `DAPR_GRPC_PORT` to enable user-defined applications to easily communicate with Dapr without hard-coding Dapr port values.-- **dapr-placement**: Used for actors only. Creates mapping tables that map actor instances to pods-- **dapr-sentry**: Manages mTLS between services and acts as a certificate authority. For more information, read the [security overview][dapr-security].
+| Dapr service | Description |
+| | -- |
+| `dapr-operator` | Manages component updates and Kubernetes services endpoints for Dapr (state stores, pub/subs, etc.) |
+| `dapr-sidecar-injector` | Injects Dapr into annotated deployment pods and adds the environment variables `DAPR_HTTP_PORT` and `DAPR_GRPC_PORT` to enable user-defined applications to easily communicate with Dapr without hard-coding Dapr port values. |
+| `dapr-placement` | Used for actors only. Creates mapping tables that map actor instances to pods. |
+| `dapr-sentry` | Manages mTLS between services and acts as a certificate authority. For more information, read the [security overview][dapr-security]. |
Once Dapr is installed on your cluster, you can begin to develop using the Dapr building block APIs by [adding a few annotations][dapr-deployment-annotations] to your deployments. For a more in-depth overview of the building block APIs and how to best use them, see the [Dapr building blocks overview][building-blocks-concepts].
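+
+As a minimal sketch of what those annotations look like (the app name, image, and port below are placeholders, not values from this article), a deployment opts in to the Dapr sidecar like this:
+
+```yaml
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+  name: my-app
+spec:
+  replicas: 1
+  selector:
+    matchLabels:
+      app: my-app
+  template:
+    metadata:
+      labels:
+        app: my-app
+      annotations:
+        dapr.io/enabled: "true"   # inject the Dapr sidecar into this pod
+        dapr.io/app-id: "my-app"  # ID Dapr uses to address this app
+        dapr.io/app-port: "80"    # port the app container listens on
+    spec:
+      containers:
+      - name: my-app
+        image: nginx              # placeholder image for illustration
+        ports:
+        - containerPort: 80
+```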
Global Azure cloud is supported with Arc support on the following regions:
### Set up the Azure CLI extension for cluster extensions
-You'll need the `k8s-extension` Azure CLI extension. Install by running the following commands:
+Install the `k8s-extension` Azure CLI extension by running the following commands:
```azurecli-interactive az extension add --name k8s-extension
az extension update --name k8s-extension
### Register the `KubernetesConfiguration` service provider
-If you have not previously used cluster extensions, you may need to register the service provider with your subscription. You can check the status of the provider registration using the [az provider list][az-provider-list] command, as shown in the following example:
+If you haven't previously used cluster extensions, you may need to register the service provider with your subscription. You can check the status of the provider registration using the [az provider list][az-provider-list] command, as shown in the following example:
```azurecli-interactive az provider list --query "[?contains(namespace,'Microsoft.KubernetesConfiguration')]" -o table
For example:
> [!NOTE] > Dapr is supported with a rolling window, including only the current and previous versions. It is your operational responsibility to remain up to date with these supported versions. If you have an older version of Dapr, you may have to do intermediate upgrades to get to a supported version.
-The same command-line argument is used for installing a specific version of Dapr or rolling back to a previous version. Set `--auto-upgrade-minor-version` to `false` and `--version` to the version of Dapr you wish to install. If the `version` parameter is omitted, the extension will install the latest version of Dapr. For example, to use Dapr X.X.X:
+The same command-line argument is used for installing a specific version of Dapr or rolling back to a previous version. Set `--auto-upgrade-minor-version` to `false` and `--version` to the version of Dapr you wish to install. If the `version` parameter is omitted, the extension installs the latest version of Dapr. For example, to use Dapr X.X.X:
```azurecli az k8s-extension create --cluster-type managedClusters \
az k8s-extension delete --resource-group myResourceGroup --cluster-name myAKSClu
## Next Steps -- Learn more about [additional settings and preferences you can set on the Dapr extension][dapr-settings].
+- Learn more about [extra settings and preferences you can set on the Dapr extension][dapr-settings].
- Once you have successfully provisioned Dapr in your AKS cluster, try deploying a [sample application][sample-application].
+- Try out [Dapr Workflow on your Dapr extension for AKS][dapr-workflow].
<!-- LINKS INTERNAL --> [deploy-cluster]: ./tutorial-kubernetes-deploy-cluster.md
az k8s-extension delete --resource-group myResourceGroup --cluster-name myAKSClu
[install-cli]: /cli/azure/install-azure-cli [dapr-migration]: ./dapr-migration.md [dapr-settings]: ./dapr-settings.md
+[dapr-workflow]: ./dapr-workflow.md
<!-- LINKS EXTERNAL --> [kubernetes-production]: https://docs.dapr.io/operations/hosting/kubernetes/kubernetes-production
aks Use Multiple Node Pools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-multiple-node-pools.md
Title: Use multiple node pools in Azure Kubernetes Service (AKS)
description: Learn how to create and manage multiple node pools for a cluster in Azure Kubernetes Service (AKS) Previously updated : 05/16/2022 Last updated : 03/11/2023 # Create and manage multiple node pools for a cluster in Azure Kubernetes Service (AKS)
It takes a few minutes to delete the nodes and the node pool.
## Associate capacity reservation groups to node pools (preview) -
-As your application workloads demands, you may associate node pools to capacity reservation groups created prior. This ensures guaranteed capacity is allocated for your node pools.
+As your application workload demands, you can associate node pools with existing capacity reservation groups. This ensures guaranteed capacity is allocated for your node pools.
For more information on capacity reservation groups, see [Capacity Reservation Groups][capacity-reservation-groups].
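+
+If you don't yet have a capacity reservation group, the following sketch shows one way to create one and add a reservation with the Azure CLI before associating it with a node pool. The group name matches the `myCRG` used in the examples below, while the SKU, capacity, and location are illustrative values you should replace with your own.
+
+```azurecli-interactive
+# Create a capacity reservation group in the same region as the cluster (illustrative values)
+az capacity reservation group create --name myCRG --resource-group MyRG --location eastus
+
+# Add a reservation for the VM size used by the node pool (illustrative SKU and capacity)
+az capacity reservation create --resource-group MyRG --capacity-reservation-group myCRG \
+    --capacity-reservation-name myReservation --sku Standard_DS2_v2 --capacity 3
+```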
-Associating a node pool with an existing capacity reservation group can be done using [`az aks nodepool add`][az-aks-nodepool-add] command and specifying a capacity reservation group with the --capacityReservationGroup flag" The capacity reservation group should already exist, otherwise the node pool will be added to the cluster with a warning and no capacity reservation group gets associated.
+### Register preview feature
++
+To install the aks-preview extension, run the following command:
+
+```azurecli
+az extension add --name aks-preview
+```
+
+Run the following command to update to the latest version of the extension released:
+
+```azurecli
+az extension update --name aks-preview
+```
+
+Register the `CapacityReservationGroupPreview` feature flag by using the [az feature register][az-feature-register] command, as shown in the following example:
+
+```azurecli-interactive
+az feature register --namespace "Microsoft.ContainerService" --name "CapacityReservationGroupPreview"
+```
+
+It takes a few minutes for the status to show *Registered*. Verify the registration status by using the [az feature show][az-feature-show] command:
+
+```azurecli-interactive
+az feature show --namespace "Microsoft.ContainerService" --name "CapacityReservationGroupPreview"
+```
+
+When the status reflects *Registered*, refresh the registration of the *Microsoft.ContainerService* resource provider by using the [az provider register][az-provider-register] command:
+
+```azurecli-interactive
+az provider register --namespace Microsoft.ContainerService
+```
+
+### Manage capacity reservations
+
+Associating a node pool with an existing capacity reservation group can be done using the [`az aks nodepool add`][az-aks-nodepool-add] command and specifying a capacity reservation group with the `--capacityReservationGroup` flag. The capacity reservation group should already exist; otherwise, the node pool is added to the cluster with a warning and no capacity reservation group gets associated.
```azurecli-interactive az aks nodepool add -g MyRG --cluster-name MyMC -n myAP --capacityReservationGroup myCRG ```
-Associating a system node pool with an existing capacity reservation group can be done using [`az aks create`][az-aks-create] command. If the capacity reservation group specified doesn't exist, then a warning is issued and the cluster gets created without any capacity reservation group association.
+Associating a system node pool with an existing capacity reservation group can be done using the [`az aks create`][az-aks-create] command. If the capacity reservation group specified doesn't exist, a warning is issued and the cluster gets created without any capacity reservation group association.
```azurecli-interactive az aks create -g MyRG --cluster-name MyMC --capacityReservationGroup myCRG
aks Web App Routing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/web-app-routing.md
The add-on deploys the following components:
## Prerequisites - An Azure subscription. If you don't have an Azure subscription, you can create a [free account](https://azure.microsoft.com/free).-- [Azure CLI installed](/cli/azure/install-azure-cli).
+- Azure CLI version 2.47.0 or later installed and configured. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][install-azure-cli].
- An Azure Key Vault to store certificates.-- A DNS solution, such as [Azure DNS](../dns/dns-getstarted-portal.md).
+- (Optional) A DNS solution, such as [Azure DNS](../dns/dns-getstarted-portal.md).
### Install the `aks-preview` Azure CLI extension
az aks enable-addons -g <ResourceGroupName> -n <ClusterName> --addons azure-keyv
## Retrieve the add-on's managed identity object ID
-Retrieve user managed identity object ID for the add-on. This identity is used in the next steps to grant permissions to manage the Azure DNS zone and retrieve certificates from the Azure Key Vault. Provide your *`<ResourceGroupName>`*, *`<ClusterName>`*, and *`<Location>`* in the script to retrieve the managed identity's object ID.
+Retrieve the user-assigned managed identity object ID for the add-on. This identity is used in the next steps to grant permissions to manage the Azure DNS zone and retrieve certificates from the Azure Key Vault. Provide your *`<ResourceGroupName>`* and *`<ClusterName>`* in the script to retrieve the managed identity's object ID.
```azurecli-interactive # Provide values for your environment RGNAME=<ResourceGroupName> CLUSTERNAME=<ClusterName>
-LOCATION=<Location>
-
-# Retrieve user managed identity object ID for the add-on
-SUBSCRIPTION_ID=$(az account show --query id --output tsv)
-MANAGEDIDENTITYNAME="webapprouting-${CLUSTERNAME}"
-MCRGNAME=$(az aks show -g ${RGNAME} -n ${CLUSTERNAME} --query nodeResourceGroup -o tsv)
-USERMANAGEDIDENTITY_RESOURCEID="/subscriptions/${SUBSCRIPTION_ID}/resourceGroups/${MCRGNAME}/providers/Microsoft.ManagedIdentity/userAssignedIdentities/${MANAGEDIDENTITYNAME}"
-MANAGEDIDENTITY_OBJECTID=$(az resource show --id $USERMANAGEDIDENTITY_RESOURCEID --query "properties.principalId" -o tsv | tr -d '[:space:]')
+MANAGEDIDENTITY_OBJECTID=$(az aks show -g ${RGNAME} -n ${CLUSTERNAME} --query ingressProfile.webAppRouting.identity.objectId -o tsv)
``` ## Configure the add-on to use Azure DNS to manage creating DNS zones
aks Workload Identity Deploy Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/workload-identity-deploy-cluster.md
Title: Deploy and configure an Azure Kubernetes Service (AKS) cluster with workl
description: In this Azure Kubernetes Service (AKS) article, you deploy an Azure Kubernetes Service cluster and configure it with an Azure AD workload identity (preview). Previously updated : 03/14/2023 Last updated : 04/12/2023 # Deploy and configure workload identity (preview) on an Azure Kubernetes Service (AKS) cluster
This article assumes you have a basic understanding of Kubernetes concepts. For
- This article requires version 2.40.0 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed. -- The identity you're using to create your cluster has the appropriate minimum permissions. For more details on access and identity for AKS, see [Access and identity options for Azure Kubernetes Service (AKS)][aks-identity-concepts].
+- The identity you're using to create your cluster has the appropriate minimum permissions. For more information about access and identity for AKS, see [Access and identity options for Azure Kubernetes Service (AKS)][aks-identity-concepts].
- If you have multiple Azure subscriptions, select the appropriate subscription ID in which the resources should be billed using the [az account][az-account] command.
Copy and paste the following multi-line input in the Azure CLI, and update the v
```bash export SERVICE_ACCOUNT_NAME="workload-identity-sa" export SERVICE_ACCOUNT_NAMESPACE="my-namespace"
+export USER_ASSIGNED_CLIENT_ID="$(az identity show --resource-group "${RESOURCE_GROUP}" --name "${UAID}" --query 'clientId' -otsv)"
cat <<EOF | kubectl apply -f - apiVersion: v1
kind: ServiceAccount
metadata: annotations: azure.workload.identity/client-id: "${USER_ASSIGNED_CLIENT_ID}"
- labels:
- azure.workload.identity/use: "true"
name: "${SERVICE_ACCOUNT_NAME}" namespace: "${SERVICE_ACCOUNT_NAMESPACE}" EOF
az identity federated-credential create --name myfederatedIdentity --identity-na
## Deploy your application
+When you deploy your application pods, the manifest should reference the service account created in the **Create Kubernetes service account** step. The following manifest shows how to reference the account, specifically the *metadata.namespace* and *spec.serviceAccountName* properties:
+
+```yml
+cat <<EOF | kubectl apply -f -
+apiVersion: v1
+kind: Pod
+metadata:
+ name: quick-start
+  namespace: "${SERVICE_ACCOUNT_NAMESPACE}"
+ labels:
+ azure.workload.identity/use: "true"
+spec:
+ serviceAccountName: workload-identity-sa
+EOF
+```
+ > [!IMPORTANT] > Ensure your application pods using workload identity have added the following label [azure.workload.identity/use: "true"] to your running pods/deployments, otherwise the pods will fail once restarted.
az identity federated-credential create --name myfederatedIdentity --identity-na
kubectl apply -f <your application> ```
+To check whether all properties are injected properly by the webhook, use the [kubectl describe][kubectl-describe] command:
+
+```bash
+kubectl describe pod <pod-name>
+```
+
+To verify that the pod can get a token and access the resource, use the `kubectl logs` command:
+
+```bash
+kubectl logs <pod-name>
+```
+ ## Optional - Grant permissions to access Azure Key Vault This step is necessary if you need to access secrets, keys, and certificates that are mounted in Azure Key Vault from a pod. Perform the following steps to configure access with a managed identity. These steps assume you have an Azure Key Vault already created and configured in your subscription. If you don't have one, see [Create an Azure Key Vault using the Azure CLI][create-key-vault-azure-cli].
az aks update --resource-group myResourceGroup --name myAKSCluster --enable-work
In this article, you deployed a Kubernetes cluster and configured it to use a workload identity in preparation for application workloads to authenticate with that credential. Now you're ready to deploy your application and configure it to use the workload identity with the latest version of the [Azure Identity][azure-identity-libraries] client library. If you can't rewrite your application to use the latest client library version, you can [set up your application pod][workload-identity-migration] to authenticate using managed identity with workload identity as a short-term migration solution. <!-- EXTERNAL LINKS -->
+[kubectl-describe]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#describe
<!-- INTERNAL LINKS --> [kubernetes-concepts]: concepts-clusters-workloads.md
api-management Api Management Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-features.md
Each API Management [pricing tier](https://aka.ms/apimpricing) offers a distinct set of features and per unit [capacity](api-management-capacity.md). The following table summarizes the key features available in each of the tiers. Some features might work differently or have different capabilities depending on the tier. In such cases the differences are called out in the documentation articles describing these individual features. > [!IMPORTANT]
-> Please note the Developer tier is for non-production use cases and evaluations. It does not offer SLA.
+> * The Developer tier is for non-production use cases and evaluations. It doesn't offer SLA.
+> * The Consumption tier isn't available in the US Government cloud or the Azure China cloud.
| Feature | Consumption | Developer | Basic | Standard | Premium | | -- | -- | | -- | -- | - |
Each API Management [pricing tier](https://aka.ms/apimpricing) offers a distinct
<sup>2</sup> Including related functionality such as users, groups, issues, applications, and email templates and notifications.<br/> <sup>3</sup> See [Gateway overview](api-management-gateways-overview.md#feature-comparison-managed-versus-self-hosted-gateways) for a feature comparison of managed versus self-hosted gateways. In the Developer tier self-hosted gateways are limited to a single gateway node. <br/> <sup>4</sup> See [Gateway overview](api-management-gateways-overview.md#policies) for differences in policy support in the dedicated, consumption, and self-hosted gateways. <br/>
-<sup>5</sup> GraphQL subscriptions aren't supported in the Consumption tier.
+<sup>5</sup> GraphQL subscriptions aren't supported in the Consumption tier.
app-service Monitor Instances Health Check https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/monitor-instances-health-check.md
If you restart the instance and the restart process fails, you will then be give
Windows applications will also have the option to view processes via the Process Explorer. This gives you further insight on the instance's processes including thread count, private memory, and total CPU time.
+## Diagnostic information collection
+For Windows applications, you have the option to collect diagnostic information in the Health Check tab. Enabling diagnostic collection adds an auto-heal rule that creates memory dumps for unhealthy instances and saves them to a designated storage account. Enabling this option changes auto-heal configurations. If there are existing auto-heal rules, we recommend setting this up through App Service diagnostics.
+
+Once diagnostic collection is enabled, you can create a new storage account for your files or choose an existing one. You can only select storage accounts in the same region as your application. Keep in mind that saving restarts your application. After saving, if your site instances are found to be unhealthy after continuous pings, you can go to your storage account resource and view the memory dumps.
+ ## Monitoring
application-gateway Configure Key Vault Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/configure-key-vault-portal.md
Last updated 10/01/2021-+ # Configure TLS termination with Key Vault certificates using Azure portal
application-gateway Create Multiple Sites Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/create-multiple-sites-portal.md
Last updated 07/14/2022 -+ #Customer intent: As an IT administrator, I want to use the Azure portal to set up an application gateway so I can host multiple sites.
application-gateway Create Ssl Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/create-ssl-portal.md
Last updated 06/30/2022 -+ #Customer intent: As an IT administrator, I want to use the Azure portal to configure Application Gateway with TLS termination so I can secure my application traffic.
application-gateway Create Url Route Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/create-url-route-portal.md
Last updated 07/08/2022 -+ #Customer intent: As an IT administrator, I want to use the Azure portal to set up an application gateway so I can route my app traffic based on path-based routing rules.
application-gateway Rewrite Http Headers Url https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/rewrite-http-headers-url.md
You can use a condition to evaluate whether a specified variable is present, whe
### Pattern Matching
-Application Gateway uses regular expressions for pattern matching in the condition. You can use the [Perl Compatible Regular Expressions (PCRE) library](https://www.pcre.org/) to set up regular expression pattern matching in the conditions. To learn about regular expression syntax, see the [Perl regular expressions main page](https://perldoc.perl.org/perlre.html).
+Application Gateway uses regular expressions for pattern matching in the condition. You should use Regular Expression 2 (RE2) compatible expressions when writing your conditions. If you are running an Application Gateway Web Application Firewall (WAF) with Core Rule Set 3.1 or earlier, you may run into issues when using [Perl Compatible Regular Expressions (PCRE)](https://www.pcre.org/) while doing lookahead and lookbehind (negative or positive) assertions.
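+
+As an illustrative contrast (not taken from the official examples), the first pattern below relies on a lookbehind assertion that RE2 doesn't support, while the second achieves a similar match in an RE2-compatible way by matching the literal prefix and capturing the value with a group:
+
+```
+# PCRE-only: lookbehind assertions such as (?<=...) aren't supported by RE2
+(?<=/products/)[0-9]+
+
+# RE2-compatible: match the literal prefix and capture the ID with a group instead
+/products/([0-9]+)
+```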
+ ### Capturing
application-gateway Tutorial Ingress Controller Add On Existing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/tutorial-ingress-controller-add-on-existing.md
Last updated 07/15/2022 -+ # Tutorial: Enable application gateway ingress controller add-on for an existing AKS cluster with an existing application gateway
application-gateway Tutorial Ingress Controller Add On New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/tutorial-ingress-controller-add-on-new.md
Last updated 07/15/2022 -+ # Tutorial: Enable the ingress controller add-on for a new AKS cluster with a new application gateway instance
automation Automation Connections https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-connections.md
Title: Manage connections in Azure Automation
description: This article tells how to manage Azure Automation connections to external services or applications and how to work with them in runbooks. Previously updated : 12/22/2020 Last updated : 04/12/2023
When you create a connection, you must specify a connection type. The connection
Azure Automation makes the following built-in connection types available: * `Azure` - Represents a connection used to manage classic resources.
-* `AzureServicePrincipal` - Represents a connection used by the Azure Run As account.
-* `AzureClassicCertificate` - Represents a connection used by the classic Azure Run As account.
-
-In most cases, you don't need to create a connection resource because it is created when you create a [Run As account](automation-security-overview.md).
+* `AzureServicePrincipal` - Represents a connection used to manage resources in Azure using a service principal.
+* `AzureClassicCertificate` - This connection type is used to manage resources in Azure that were created using the classic deployment model that doesn't support Service Principal authentication.
## PowerShell cmdlets to access connections
To create a new connection in the Azure portal:
Create a new connection with Windows PowerShell using the `New-AzAutomationConnection` cmdlet. This cmdlet has a `ConnectionFieldValues` parameter that expects a hashtable defining values for each of the properties defined by the connection type.
-You can use the following example commands as an alternative to creating the Run As account from the portal to create a new connection asset.
+You can use the following example commands to create a connection asset that can be used for authentication with an Azure service principal.
```powershell
-$ConnectionAssetName = "AzureRunAsConnection"
+$ConnectionAssetName = "AzureConnection"
$ConnectionFieldValues = @{"ApplicationId" = $Application.ApplicationId; "TenantId" = $TenantID.TenantId; "CertificateThumbprint" = $Cert.Thumbprint; "SubscriptionId" = $SubscriptionId} New-AzAutomationConnection -ResourceGroupName $ResourceGroup -AutomationAccountName $AutomationAccountName -Name $ConnectionAssetName -ConnectionTypeName AzureServicePrincipal -ConnectionFieldValues $ConnectionFieldValues ```
-When you create your Automation account, it includes several global modules by default, along with the connection type `AzureServicePrincipal` to create the `AzureRunAsConnection` connection asset. If you try to create a new connection asset to connect to a service or application with a different authentication method, the operation fails because the connection type is not already defined in your Automation account. For more information on creating your own connection type for a custom module, see [Add a connection type](#add-a-connection-type).
+If you try to create a new connection asset to connect to a service or application with a different authentication method, the operation fails because the connection type is not already defined in your Automation account. For more information on creating your own connection type for a custom module, see [Add a connection type](#add-a-connection-type).
## Add a connection type
Retrieve a connection in a runbook or DSC configuration with the internal `Get-A
# [PowerShell](#tab/azure-powershell)
-The following example shows how to use the Run As account to authenticate with Azure Resource Manager resources in your runbook. It uses a connection asset representing the Run As account, which references the certificate-based service principal.
+The following example shows how to use a connection to authenticate with Azure Resource Manager resources in your runbook. It uses a connection asset, which references the certificate-based service principal.
```powershell
-$Conn = Get-AutomationConnection -Name AzureRunAsConnection
+$Conn = Get-AutomationConnection -Name AzureConnection
Connect-AzAccount -ServicePrincipal -Tenant $Conn.TenantID -ApplicationId $Conn.ApplicationID -CertificateThumbprint $Conn.CertificateThumbprint ``` # [Python](#tab/python2)
-The following example shows how to authenticate using the Run As connection in a Python 2 and 3 runbook.
+The following example shows how to authenticate using a connection in a Python 2 or Python 3 runbook.
```python """ Tutorial to show how to authenticate against Azure resource manager resources """ import azure.mgmt.resource import automationassets
-def get_automation_runas_credential(runas_connection):
+def get_automation_credential(azure_connection):
""" Returns credentials to authenticate against Azure resource manager """ from OpenSSL import crypto from msrestazure import azure_active_directory import adal
- # Get the Azure Automation Run As service principal certificate
- cert = automationassets.get_automation_certificate("AzureRunAsCertificate")
+ # Get the Azure Automation service principal certificate
+ cert = automationassets.get_automation_certificate("MyCertificate")
pks12_cert = crypto.load_pkcs12(cert) pem_pkey = crypto.dump_privatekey( crypto.FILETYPE_PEM, pks12_cert.get_privatekey())
- # Get Run As connection information for the Azure Automation service principal
- application_id = runas_connection["ApplicationId"]
- thumbprint = runas_connection["CertificateThumbprint"]
- tenant_id = runas_connection["TenantId"]
+ # Get information for the Azure Automation service principal
+ application_id = azure_connection["ApplicationId"]
+ thumbprint = azure_connection["CertificateThumbprint"]
+ tenant_id = azure_connection["TenantId"]
# Authenticate with service principal certificate resource = "https://management.core.windows.net/"
def get_automation_runas_credential(runas_connection):
)
-# Authenticate to Azure using the Azure Automation Run As service principal
-runas_connection = automationassets.get_automation_connection(
- "AzureRunAsConnection")
-azure_credential = get_automation_runas_credential(runas_connection)
+# Authenticate to Azure using the Azure Automation service principal
+azure_connection = automationassets.get_automation_connection(
+ "AzureConnection")
+azure_credential = get_automation_credential(azure_connection)
```
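+
+As a usage sketch (not part of the original sample), the credential returned above can be passed to the resource management client along with the subscription ID stored in the same connection asset; this assumes the connection's field values include `SubscriptionId`, as in the `New-AzAutomationConnection` example earlier in this article.
+
+```python
+# Illustrative follow-up: use the credential with the resource management client
+resource_client = azure.mgmt.resource.ResourceManagementClient(
+    azure_credential,
+    str(azure_connection["SubscriptionId"]))
+
+# For example, list the resource groups in the subscription
+for group in resource_client.resource_groups.list():
+    print(group.name)
+```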
You can add an activity for the internal `Get-AutomationConnection` cmdlet to a
![add to canvas](media/automation-connections/connection-add-canvas.png)
-The following image shows an example of using a connection object in a graphical runbook. This example uses the `Constant value` data set for the `Get RunAs Connection` activity, which uses a connection object for authentication. A [pipeline link](automation-graphical-authoring-intro.md#use-links-for-workflow) is used here since the `ServicePrincipalCertificate` parameter set is expecting a single object.
+The following image shows an example of using a connection object in a graphical runbook.
![get connections](media/automation-connections/automation-get-connection-object.png)
automation Automation Linux Hrw Install https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-linux-hrw-install.md
Title: Deploy an agent-based Linux Hybrid Runbook Worker in Automation
description: This article tells how to install an agent-based Hybrid Runbook Worker to run runbooks on Linux-based machines in your local datacenter or cloud environment. Previously updated : 03/30/2023 Last updated : 04/12/2023
To install and configure a Linux Hybrid Runbook Worker, perform the following st
## Turn off signature validation
-By default, Linux Hybrid Runbook Workers require signature validation. If you run an unsigned runbook against a worker, you see a `Signature validation failed` error. To turn off signature validation, run the following command. Replace the second parameter with your Log Analytics workspace ID.
+By default, Linux Hybrid Runbook Workers require signature validation. If you run an unsigned runbook against a worker, you see a `Signature validation failed` error. To turn off signature validation, run the following command as root. Replace the second parameter with your Log Analytics workspace ID.
```bash sudo python /opt/microsoft/omsconfig/modules/nxOMSAutomationWorker/DSCResources/MSFT_nxOMSAutomationWorkerResource/automationworker/scripts/require_runbook_signature.py --false <logAnalyticsworkspaceId>
sudo python /opt/microsoft/omsconfig/modules/nxOMSAutomationWorker/DSCResources/
## <a name="remove-linux-hybrid-runbook-worker"></a>Remove the Hybrid Runbook Worker
-Run the following commands on agent-based Linux Hybrid Worker:
+Run the following commands as root on the agent-based Linux Hybrid Worker:
1. ```python sudo bash
automation Automation Powershell Workflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-powershell-workflow.md
Title: Learn PowerShell Workflow for Azure Automation
description: This article teaches you the differences between PowerShell Workflow and PowerShell and concepts applicable to Automation runbooks. Previously updated : 10/16/2022 Last updated : 04/12/2023
For more information on using InlineScript, see [Running Windows PowerShell Comm
One advantage of Windows PowerShell Workflows is the ability to perform a set of commands in parallel instead of sequentially as with a typical script.
-You can use the `Parallel` keyword to create a script block with multiple commands that run concurrently. This uses the following syntax shown below. In this case, Activity1 and Activity2 starts at the same time. Activity3 starts only after both Activity1 and Activity2 have completed.
+You can use the `Parallel` keyword to create a script block with multiple commands that run concurrently, using the following syntax. In this case, Activity1 and Activity2 start at the same time. Activity3 starts only after both Activity1 and Activity2 have completed.
```powershell Parallel
workflow CreateTestVms
``` > [!NOTE]
-> For non-graphical PowerShell runbooks, `Add-AzAccount` and `Add-AzureRMAccount` are aliases for [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount). You can use these cmdlets or you can [update your modules](automation-update-azure-modules.md) in your Automation account to the latest versions. You might need to update your modules even if you have just created a new Automation account. Use of these cmdlets is not required if you are authenticating using a Run As account configured with a service principal.
+> For non-graphical PowerShell runbooks, `Add-AzAccount` and `Add-AzureRMAccount` are aliases for [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount). You can use these cmdlets or you can [update your modules](automation-update-azure-modules.md) in your Automation account to the latest versions. You might need to update your modules even if you have just created a new Automation account.
For more information about checkpoints, see [Adding Checkpoints to a Script Workflow](/previous-versions/windows/it-pro/windows-server-2012-R2-and-2012/jj574114(v=ws.11)).
automation Automation Runbook Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-runbook-types.md
Title: Azure Automation runbook types
description: This article describes the types of runbooks that you can use in Azure Automation and considerations for determining which type to use. Previously updated : 03/29/2023 Last updated : 04/04/2023
The following are the current limitations and known issues with PowerShell runbo
* PowerShell runbooks can't retrieve a variable asset with `*~*` in the name. * A [Get-Process](/powershell/module/microsoft.powershell.management/get-process) operation in a loop in a PowerShell runbook can crash after about 80 iterations. * A PowerShell runbook can fail if it tries to write a large amount of data to the output stream at once. You can typically work around this issue by having the runbook output just the information needed to work with large objects. For example, instead of using `Get-Process` with no limitations, you can have the cmdlet output just the required parameters as in `Get-Process | Select ProcessName, CPU`.
+* When you use [ExchangeOnlineManagement](https://learn.microsoft.com/powershell/exchange/exchange-online-powershell?view=exchange-ps) module version: 3.0.0 or higher, you may experience errors. To resolve the issue, ensure that you explicitly upload [PowerShellGet](https://learn.microsoft.com/powershell/module/powershellget/?view=powershell-5.1) and [PackageManagement](https://learn.microsoft.com/powershell/module/packagemanagement/?view=powershell-5.1) modules as well.
+* When you use [New-item cmdlet](https://learn.microsoft.com/powershell/module/microsoft.powershell.management/new-item?view=powershell-5.1), jobs might be suspended. To resolve the issue, follow the mitigation steps:
+ 1. Consume the output of `new-item` cmdlet in a variable and **do not** write it to the output stream using `write-output` command.
+ - You can use debug or progress stream after you enable it from **Logging and Tracing** setting of the runbook.
+ ```powershell-interactive
+ $item = New-Item -Path ".\message.txt" -Force -ErrorAction SilentlyContinue
+ write-debug $item # or use write-progress $item
+ ```
+ - Alternatively, you can check if variable is nonempty if required to do so in the script.
+ ```powershell-interactive
+ $item = New-Item -Path ".\message.txt" -Force -ErrorAction SilentlyContinue
+ if($item) { write-output "File Created" }
+ ```
+ 1. You can also upgrade your runbooks to PowerShell 7.1 or PowerShell 7.2 where the same runbook will work as expected.
# [PowerShell 7.1 (preview)](#tab/lps71)
The following are the current limitations and known issues with PowerShell runbo
- You might encounter formatting problems with error output streams for the job running in PowerShell 7 runtime. - When you import a PowerShell 7.1 module that's dependent on other modules, you may find that the import button is gray even when PowerShell 7.1 version of the dependent module is installed. For example, Az.Compute version 4.20.0, has a dependency on Az.Accounts being >= 2.6.0. This issue occurs when an equivalent dependent module in PowerShell 5.1 doesn't meet the version requirements. For example, 5.1 version of Az.Accounts were < 2.6.0. - When you start PowerShell 7 runbook using the webhook, it auto-converts the webhook input parameter to an invalid JSON.
+- We recommend that you use [ExchangeOnlineManagement](https://learn.microsoft.com/powershell/exchange/exchange-online-powershell?view=exchange-ps) module version: 3.0.0 or lower because version: 3.0.0 or higher may lead to job failures.
# [PowerShell 7.2 (preview)](#tab/lps72)
The following are the current limitations and known issues with PowerShell runbo
$ProgressPreference = "Continue" ```
+- When you use [ExchangeOnlineManagement](https://learn.microsoft.com/powershell/exchange/exchange-online-powershell?view=exchange-ps) module version: 3.0.0 or higher, you can experience errors. To resolve the issue, ensure that you explicitly upload [PowerShellGet](https://learn.microsoft.com/powershell/module/powershellget/?view=powershell-7.3) and [PackageManagement](https://learn.microsoft.com/powershell/module/packagemanagement/?view=powershell-7.3) modules.
## PowerShell Workflow runbooks
automation Automation Security Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-security-overview.md
description: This article provides an overview of Azure Automation account authe
keywords: automation security, secure automation; automation authentication Previously updated : 03/07/2023 Last updated : 04/12/2023
A managed identity from Azure Active Directory (Azure AD) allows your runbook to
Managed identities are the recommended way to authenticate in your runbooks, and is the default authentication method for your Automation account.
-> [!NOTE]
-> When you create an Automation account, the option to create a Run As account is no longer available. However, we continue to support a RunAs account for existing and new Automation accounts. You can [create a Run As account](create-run-as-account.md) in your Automation account from the Azure portal or by using PowerShell.
- Here are some of the benefits of using managed identities: - Using a managed identity instead of the Automation Run As account simplifies management. You don't have to renew the certificate used by a Run As account.
Run As accounts in Azure Automation provide authentication for managing Azure Re
- Azure Run As Account - Azure Classic Run As Account
-To create or renew a Run As account, permissions are needed at three levels:
+To renew a Run As account, permissions are needed at three levels:
- Subscription, - Azure Active Directory (Azure AD), and
You need the `Microsoft.Authorization/*/Write` permission. This permission is ob
- [Owner](../role-based-access-control/built-in-roles.md#owner) - [User Access Administrator](../role-based-access-control/built-in-roles.md#user-access-administrator)
-To configure or renew Classic Run As accounts, you must have the Co-administrator role at the subscription level. To learn more about classic subscription permissions, see [Azure classic subscription administrators](../role-based-access-control/classic-administrators.md#add-a-co-administrator).
+To renew Classic Run As accounts, you must have the Co-administrator role at the subscription level. To learn more about classic subscription permissions, see [Azure classic subscription administrators](../role-based-access-control/classic-administrators.md#add-a-co-administrator).
### Azure AD permissions
-To be able to create or renew the service principal, you need to be a member of one of the following Azure AD built-in roles:
+To renew the service principal, you need to be a member of one of the following Azure AD built-in roles:
- [Application Administrator](../active-directory/roles/permissions-reference.md#application-administrator) - [Application Developer](../active-directory/roles/permissions-reference.md#application-developer)
Membership can be assigned to **ALL** users in the tenant at the directory level
### Automation account permissions
-To be able to create or update the Automation account, you need to be a member of one of the following Automation account roles:
+To update the Automation account, you need to be a member of one of the following Automation account roles:
- [Owner](./automation-role-based-access-control.md#owner) - [Contributor](./automation-role-based-access-control.md#contributor)
To learn more about the Azure Resource Manager and Classic deployment models, se
>[!NOTE] >Azure Cloud Solution Provider (CSP) subscriptions support only the Azure Resource Manager model. Non-Azure Resource Manager services are not available in the program. When you are using a CSP subscription, the Azure Classic Run As account is not created, but the Azure Run As account is created. To learn more about CSP subscriptions, see [Available services in CSP subscriptions](/azure/cloud-solution-provider/overview/azure-csp-available-services).
-When you create an Automation account, the Run As account is created by default at the same time with a self-signed certificate. If you chose not to create it along with the Automation account, it can be created individually at a later time. An Azure Classic Run As Account is optional, and is created separately if you need to manage classic resources.
-
-> [!NOTE]
-> Azure Automation does not automatically create the Run As account. It has been replaced by using managed identities.
-
-If you want to use a certificate issued by your enterprise or third-party certification authority (CA) instead of the default self-signed certificate, can use the [PowerShell script to create a Run As account](create-run-as-account.md#powershell-script-to-create-a-run-as-account) option for your Run As and Classic Run As accounts.
- > [!VIDEO https://www.microsoft.com/videoplayer/embed/RWwtF3] ### Run As account
-When you create a Run As account, it performs the following tasks:
-
-* Creates an Azure AD application with a self-signed certificate, creates a service principal account for the application in Azure AD, and assigns the [Contributor](../role-based-access-control/built-in-roles.md#contributor) role for the account in your current subscription. You can change the certificate setting to [Reader](../role-based-access-control/built-in-roles.md#reader) or any other role. For more information, see [Role-based access control in Azure Automation](automation-role-based-access-control.md).
-
-* Creates an Automation certificate asset named `AzureRunAsCertificate` in the specified Automation account. The certificate asset holds the certificate private key that the Azure AD application uses.
-
-* Creates an Automation connection asset named `AzureRunAsConnection` in the specified Automation account. The connection asset holds the application ID, tenant ID, subscription ID, and certificate thumbprint.
+A Run As account consists of the following components:
+- An Azure AD application with a self-signed certificate, and a service principal account for the application in Azure AD, which is assigned the [Contributor](../role-based-access-control/built-in-roles.md#contributor) role for the account in your current subscription. You can change the certificate setting to [Reader](../role-based-access-control/built-in-roles.md#reader) or any other role. For more information, see [Role-based access control in Azure Automation](automation-role-based-access-control.md).
+- An Automation certificate asset named `AzureRunAsCertificate` in the specified Automation account. The certificate asset holds the certificate private key that the Azure AD application uses.
+- An Automation connection asset named `AzureRunAsConnection` in the specified Automation account. The connection asset holds the application ID, tenant ID, subscription ID, and certificate thumbprint.
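
For reference, existing runbooks typically consume these assets as shown in the following sketch of the legacy pattern; new runbooks should use managed identities instead:

```powershell
# Retrieve the connection asset created with the Run As account
$connection = Get-AutomationConnection -Name 'AzureRunAsConnection'

# Authenticate as the Run As service principal by using its certificate
Connect-AzAccount -ServicePrincipal `
    -Tenant $connection.TenantId `
    -ApplicationId $connection.ApplicationId `
    -CertificateThumbprint $connection.CertificateThumbprint
```
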
### Azure Classic Run As account
-> [!IMPORTANT]
-> Azure Automation Run As Account will retire on September 30, 2023 and will be replaced with Managed Identities. Before that date, you'll need to start migrating your runbooks to use [managed identities](automation-security-overview.md#managed-identities). For more information, see [migrating from an existing Run As accounts to managed identity](https://learn.microsoft.com/azure/automation/migrate-run-as-accounts-managed-identity?tabs=run-as-account#sample-scripts) to start migrating the runbooks from Run As account to managed identities before 30 September 2023.
-
-When you create an Azure Classic Run As account, it performs the following tasks:
+An Azure Classic Run As account consists of the following components:
+- A management certificate in the subscription.
+- An Automation certificate asset named `AzureClassicRunAsCertificate` in the specified Automation account. The certificate asset holds the certificate private key used by the management certificate.
+- An Automation connection asset named `AzureClassicRunAsConnection` in the specified Automation account. The connection asset holds the subscription name, subscription ID, and certificate asset name.
> [!NOTE]
-> You must be a co-administrator on the subscription to create or renew this type of Run As account.
-
-* Creates a management certificate in the subscription.
-
-* Creates an Automation certificate asset named `AzureClassicRunAsCertificate` in the specified Automation account. The certificate asset holds the certificate private key used by the management certificate.
-
-* Creates an Automation connection asset named `AzureClassicRunAsConnection` in the specified Automation account. The connection asset holds the subscription name, subscription ID, and certificate asset name.
+> You must be a co-administrator on the subscription to renew this type of Run As account.
## Service principal for Run As account
automation Delete Run As Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/delete-run-as-account.md
Title: Delete an Azure Automation Run As account
description: This article tells how to delete a Run As account with PowerShell or from the Azure portal. Previously updated : 01/06/2021 Last updated : 04/12/2023
Run As accounts in Azure Automation provide authentication for managing resource
![Delete Run As account](media/delete-run-as-account/automation-account-delete-run-as.png)
-5. While the account is being deleted, you can track the progress under **Notifications** from the menu.
+5. While the account is being deleted, you can track the progress under **Notifications** from the menu. Run As accounts can't be restored after deletion.
## Next steps
-To recreate your Run As or Classic Run As account, see [Create Run As accounts](create-run-as-account.md).
+- [Use system-assigned managed identity](enable-managed-identity-for-automation.md).
+- [Use user-assigned managed identity](add-user-assigned-identity.md).
automation Manage Run As Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/manage-run-as-account.md
Title: Manage an Azure Automation Run As account description: This article tells how to manage your Azure Automation Run As account with PowerShell or from the Azure portal. Previously updated : 08/02/2021 Last updated : 04/12/2023
You can allow Azure Automation to verify if Key Vault and your Run As account se
You can use the [Extend-AutomationRunAsAccountRoleAssignmentToKeyVault.ps1](https://aka.ms/AA5hugb) script in the PowerShell Gallery to grant your Run As account permissions to Key Vault. See [Assign a Key Vault access policy](../key-vault/general/assign-access-policy-powershell.md) for more details on setting permissions on Key Vault.
-## Resolve misconfiguration issues for Run As accounts
-
-Some configuration items necessary for a Run As or Classic Run As account might have been deleted or created improperly during initial setup. Possible instances of misconfiguration include:
-
-* Certificate asset
-* Connection asset
-* Run As account removed from the Contributor role
-* Service principal or application in Azure AD
-
-For such misconfiguration instances, the Automation account detects the changes and displays a status of *Incomplete* on the Run As Accounts properties pane for the account.
--
-When you select the Run As account, the account properties pane displays the following error message:
-
-```text
-The Run As account is incomplete. Either one of these was deleted or not created - Azure Active Directory Application, Service Principal, Role, Automation Certificate asset, Automation Connect asset - or the Thumbprint is not identical between Certificate and Connection. Please delete and then re-create the Run As Account.
-```
-
-You can quickly resolve these Run As account issues by [deleting](delete-run-as-account.md) and [re-creating](create-run-as-account.md) the Run As account.
## Next steps * [Application Objects and Service Principal Objects](../active-directory/develop/app-objects-and-service-principals.md). * [Certificates overview for Azure Cloud Services](../cloud-services/cloud-services-certs-create.md).
-* To create or re-create a Run As account, see [Create a Run As account](create-run-as-account.md).
* If you no longer need to use a Run As account, see [Delete a Run As account](delete-run-as-account.md).
automation Quickstart Create Automation Account Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/quickstart-create-automation-account-template.md
Title: Create an Azure Automation account using a Resource Manager template
description: This article shows how to create an Automation account by using the Azure Resource Manager template. Previously updated : 08/27/2021 Last updated : 04/12/2023
The sample template does the following steps:
* Links the Automation account to the Log Analytics workspace. * Adds sample Automation runbooks to the account.
-> [!NOTE]
-> Creation of the Automation Run As account is not supported when you're using an ARM template. To create a Run As account manually from the portal or with PowerShell, see [Create Run As account](create-run-as-account.md).
- If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. ## Prerequisites
automation Create Azure Automation Account Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/quickstarts/create-azure-automation-account-portal.md
Title: Quickstart - Create an Azure Automation account using the portal description: This quickstart helps you to create a new Automation account using Azure portal. Previously updated : 10/26/2021 Last updated : 04/12/2023
automation Dsc Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/quickstarts/dsc-configuration.md
description: This article helps you get started configuring an Azure VM with Des
keywords: dsc, configuration, automation Previously updated : 09/01/2021 Last updated : 04/12/2023
By enabling Azure Automation State Configuration, you can manage and monitor the
To complete this quickstart, you need: * An Azure subscription. If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/).
-* An Azure Automation account. For instructions on creating an Azure Automation Run As account, see [Azure Run As Account](../manage-runas-account.md).
* An Azure Resource Manager virtual machine running Red Hat Enterprise Linux, CentOS, or Oracle Linux. For instructions on creating a VM, see [Create your first Linux virtual machine in the Azure portal](../../virtual-machines/linux/quick-create-portal.md) ## Sign in to Azure
There are many different methods to enable a machine for Automation State Config
1. From the left pane of the Automation account, select **State configuration (DSC)**. 2. Click **Add** to open the **VM select** page. 3. Find the virtual machine for which to enable DSC. You can use the search field and filter options to find a specific virtual machine.
-4. Click on the virtual machine, and then click **Connect**
+4. Click on the virtual machine, and then click **Connect**.
5. Select the DSC settings appropriate for the virtual machine. If you have already prepared a configuration, you can specify it as `Node Configuration Name`. You can set the [configuration mode](/powershell/dsc/managing-nodes/metaConfig) to control the configuration behavior for the machine. 6. Click **OK**. While the DSC extension is deployed to the virtual machine, the status reported is `Connecting`.
automation Source Control Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/source-control-integration.md
Title: Use source control integration in Azure Automation
description: This article tells you how to synchronize Azure Automation source control with other repositories. Previously updated : 11/22/2021 Last updated : 04/12/2023
Azure Automation supports three types of source control:
> > :::image type="content" source="./media/source-control-integration/user-assigned-managed-identity.png" alt-text="Screenshot that displays the user-assigned Managed Identity."::: >
-> If you have both a Run As account and managed identity enabled, then managed identity is given preference. If you want to use a Run As account instead, you can [create an Automation variable](./shared-resources/variables.md) of BOOLEAN type named `AUTOMATION_SC_USE_RUNAS` with a value of `true`.
+> If you have both a Run As account and managed identity enabled, then managed identity is given preference.
+
+> [!Important]
+> Azure Automation Run As Account will retire on **September 30, 2023** and will be replaced with Managed Identities. Before that date, you need to [migrate from a Run As account to Managed identities](migrate-run-as-accounts-managed-identity.md).
> [!NOTE] > According to [this](/azure/devops/organizations/accounts/change-application-access-policies?view=azure-devops#application-connection-policies) Azure DevOps documentation, **Third-party application access via OAuth** policy is defaulted to **off** for all new organizations. So if you try to configure source control in Azure Automation with **Azure Devops (Git)** as source control type without enabling **Third-party application access via OAuth** under Policies tile of Organization Settings in Azure DevOps then you might get **SourceControl securityToken is invalid** error. Hence to avoid this error, make sure you first enable **Third-party application access via OAuth** under Policies tile of Organization Settings in Azure DevOps.
azure-arc Create Data Controller Using Kubernetes Native Tools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/create-data-controller-using-kubernetes-native-tools.md
Edit the data controller configuration as needed:
**OPTIONAL** - **name**: The default name of the data controller is `arc`, but you can change it if you want. - **displayName**: Set this to the same value as the name attribute at the top of the file.-- **registry**: The Microsoft Container Registry is the default. If you are pulling the images from the Microsoft Container Registry and [pushing them to a private container registry](offline-deployment.md), enter the IP address or DNS name of your registry here.-- **dockerRegistry**: The secret to use to pull the images from a private container registry if required.-- **repository**: The default repository on the Microsoft Container Registry is `arcdata`. If you are using a private container registry, enter the path the folder/repository containing the Azure Arc-enabled data services container images.-- **imageTag**: The current latest version tag is defaulted in the template, but you can change it if you want to use an older version. - **logsui-certificate-secret**: The name of the secret created on the Kubernetes cluster for the logs UI certificate. - **metricsui-certificate-secret**: The name of the secret created on the Kubernetes cluster for the metrics UI certificate.
If you encounter any troubles with creation, please see the [troubleshooting gui
- [Create a SQL managed instance using Kubernetes-native tools](./create-sql-managed-instance-using-kubernetes-native-tools.md) - [Create a PostgreSQL server using Kubernetes-native tools](./create-postgresql-server-kubernetes-native-tools.md)+
azure-arc Preview Testing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/preview-testing.md
Previously updated : 09/07/2022 Last updated : 04/14/2023 #Customer intent: As a data professional, I want to validate upcoming releases.
Normally, pre-release version binaries are available around 10:00 AM Pacific Tim
Pre-release versions simultaneously release with artifacts, which are designed to work together: - Container images hosted on the Microsoft Container Registry (MCR)
- - `mcr.microsoft.com/arcdata/preview` is the repository that hosts the **preview** pre-release builds
- `mcr.microsoft.com/arcdata/test` is the repository that hosts the **test** pre-release builds
+ - `mcr.microsoft.com/arcdata/preview` is the repository that hosts the **preview** pre-release builds
> [!NOTE] > `mcr.microsoft.com/arcdata/` will continue to be the repository that hosts the final release builds.
To install a pre-release version, follow these pre-requisite instructions:
If you use the Azure CLI extension: -- Uninstall the Azure CLI extension (`az extension remove -n arcdata`).-- Download the latest pre-release Azure CLI extension `.whl` file from the link in the [Current preview release information](#Current preview release information)-- Install the latest pre-release Azure CLI extension (`az extension add -s <location of downloaded .whl file>`).
+1. Uninstall the Azure CLI extension (`az extension remove -n arcdata`).
+1. Download the latest pre-release Azure CLI extension `.whl` file from the link in the [Current preview release information](#current-preview-release-information).
+1. Install the latest pre-release Azure CLI extension (`az extension add -s <location of downloaded .whl file>`).
If you use the Azure Data Studio extension to install: -- Uninstall the Azure Data Studio extension. Select the Extensions panel and select on the **Azure Arc** extension, select **Uninstall**.-- Download the latest pre-release Azure Data Studio extension .vsix files from the links in the [Current preview release information](#Current preview release information)-- Install the extensions by choosing File -> Install Extension from VSIX package and then browsing to the download location of the .vsix files. Install the `azcli` extension first and then `arc`.
+1. Uninstall the Azure Data Studio extension. Select the Extensions panel and select on the **Azure Arc** extension, select **Uninstall**.
+1. Download the latest pre-release Azure Data Studio extension .vsix files from the links in the [Current preview release information](#current-preview-release-information).
+1. Install the extensions. Choose **File** > **Install Extension from VSIX package**. Locate the download location of the .vsix files. Install the `azcli` extension first and then `arc`.
### Install using Azure CLI
-> [!NOTE]
-> Deploying pre-release builds using direct connectivity mode from Azure CLI is not supported.
+To install with the Azure CLI, follow the steps for your connectivity mode:
+
+- [Indirect connectivity mode](#indirect-connectivity-mode)
+- [Direct connectivity mode](#direct-connectivity-mode)
#### Indirect connectivity mode
-If you install using the Azure CLI:
+1. Set environment variables. Set variables for:
+ - Docker registry
+ - Docker repository
+ - Docker image tag
+ - Docker image policy
-1. Follow the instructions to [create a custom configuration profile](create-custom-configuration-template.md).
-1. Edit this custom configuration profile file. Enter the `docker` property values as required based on the information provided in the version history table on this page.
+ Use the example script below to set environment variables for your respective platform.
+
+ # [Linux](#tab/linux)
- For example:
+ ```console
+ ## variables for the docker registry, repository, and image
+ export DOCKER_REGISTRY=<Docker registry>
+ export DOCKER_REPOSITORY=<Docker repository>
+ export DOCKER_IMAGE_TAG=<Docker image tag>
+ export DOCKER_IMAGE_POLICY=<Docker image policy>
+ ```
- ```json
+ # [Windows (PowerShell)](#tab/windows)
- "docker": {
- "registry": "mcr.microsoft.com",
- "repository": "arcdata/test",
- "imageTag": "v1.8.0_2022-06-07_5ba6b837",
- "imagePullPolicy": "Always"
- },
+ ```PowerShell
+ ## variables for the docker registry, repository, and image
+ $ENV:DOCKER_REGISTRY="<Docker registry>"
+ $ENV:DOCKER_REPOSITORY="<Docker repository>"
+ $ENV:DOCKER_IMAGE_TAG="<Docker image tag>"
+ $ENV:DOCKER_IMAGE_POLICY="<Docker image policy>"
```
+
+1. Follow the instructions to [create a custom configuration profile](create-custom-configuration-template.md).
1. Use the command `az arcdata dc create` as explained in [create a custom configuration profile](create-custom-configuration-template.md). #### Direct connectivity mode If you install using the Azure CLI:
-1. Follow the instructions to [create a custom configuration profile](create-custom-configuration-template.md).
-1. Edit this custom configuration profile file. Enter the `docker` property values as required based on the information provided in the version history table on this page.
-
- For example:
+1. Set environment variables. Set variables for:
+ - Docker registry
+ - Docker repository
+ - Docker image tag
+ - Docker image policy
+ - Arc data services extension version tag (`ARC_DATASERVICES_EXTENSION_VERSION_TAG`): Use the **Arc enabled Kubernetes helm chart extension version** from the release details under [Current preview release information](#current-preview-release-information).
+ - Arc data services release train (`ARC_DATASERVICES_EXTENSION_RELEASE_TRAIN`): `{ test | preview }`.
- ```json
-
- "docker": {
- "registry": "mcr.microsoft.com",
- "repository": "arcdata/test",
- "imageTag": "v1.8.0_2022-06-07_5ba6b837",
- "imagePullPolicy": "Always"
- },
- ```
-1. Set environment variables for:
+ Use the example script below to set environment variables for your respective platform.
- - `ARC_DATASERVICES_EXTENSION_VERSION_TAG`: Use the version of the **Arc enabled Kubernetes helm chart extension version** from the release details under [Current preview release information](#current-preview-release-information).
- - `ARC_DATASERVICES_EXTENSION_RELEASE_TRAIN`: `preview`
-
- For example, the following command sets the environment variables on Linux.
+ # [Linux](#tab/linux)
```console
- export ARC_DATASERVICES_EXTENSION_VERSION_TAG='1.2.20031002'
+ ## variables for the docker registry, repository, and image
+ export DOCKER_REGISTRY=<Docker registry>
+ export DOCKER_REPOSITORY=<Docker repository>
+ export DOCKER_IMAGE_TAG=<Docker image tag>
+ export DOCKER_IMAGE_POLICY=<Docker image policy>
+ export ARC_DATASERVICES_EXTENSION_VERSION_TAG=<Version tag>
export ARC_DATASERVICES_EXTENSION_RELEASE_TRAIN='preview' ```
- The following command sets the environment variables on PowerShell
+ # [Windows (PowerShell)](#tab/windows)
- ```console
- $ENV:ARC_DATASERVICES_EXTENSION_VERSION_TAG="1.2.20031002"
+ ```PowerShell
+ ## variables for the docker registry, repository, image, and extension
+ $ENV:DOCKER_REGISTRY="<Docker registry>"
+ $ENV:DOCKER_REPOSITORY="<Docker repository>"
+ $ENV:DOCKER_IMAGE_TAG="<Docker image tag>"
+ $ENV:DOCKER_IMAGE_POLICY="<Docker image policy>"
+ $ENV:ARC_DATASERVICES_EXTENSION_VERSION_TAG="<Version tag>"
$ENV:ARC_DATASERVICES_EXTENSION_RELEASE_TRAIN="preview" ```
+
1. Run `az arcdata dc create` as normal for the direct mode to:
If you install using the Azure CLI:
> [!NOTE] > Deploying pre-release builds using direct connectivity mode from Azure Data Studio is not supported.
-#### Indirect connectivity mode
+You can install with Azure Data Studio (ADS) in indirect connectivity mode. To install by using Azure Data Studio:
-If you use Azure Data Studio to install, complete the data controller deployment wizard as normal except click on **Script to notebook** at the end instead of **Deploy**. In the generated notebook, edit the `Set variables` cell to *add* the following lines:
+1. Complete the data controller deployment wizard as normal, except click **Script to notebook** at the end instead of **Deploy**.
+1. In the generated notebook, edit the `Set variables` cell to *add* the following lines, replacing `{ test | preview }` with the appropriate repository and `{ Current preview tag }` with the desired image tag:
-```python
-# choose between arcdata/test or arcdata/preview as appropriate
-os.environ["AZDATA_DOCKER_REPOSITORY"] = "arcdata/test"
-os.environ["AZDATA_DOCKER_TAG"] = "v1.8.0_2022-06-07_5ba6b837"
-```
+ ```python
+ # choose between arcdata/test or arcdata/preview as appropriate
+ os.environ["AZDATA_DOCKER_REPOSITORY"] = "arcdata/{ test | preview }"
+ os.environ["AZDATA_DOCKER_TAG"] = "{ Current preview tag }"
+ ```
-Run the notebook by clicking **Run All**.
+1. Run the notebook by clicking **Run All**.
### Install using Azure portal
-Follow the instructions to [Arc-enabled the Kubernetes cluster](create-data-controller-direct-prerequisites.md) as normal.
+1. Follow the instructions to [Arc-enable the Kubernetes cluster](create-data-controller-direct-prerequisites.md) as normal.
+1. Open the Azure portal for the appropriate preview version:
-Open the Azure portal by using this special URL: [https://portal.azure.com/?feature.canmodifystamps=true&Microsoft_Azure_HybridData_Platform=preview#home](https://portal.azure.com/?feature.canmodifystamps=true&Microsoft_Azure_HybridData_Platform=preview#home).
+ - **Test**: [https://portal.azure.com/?feature.canmodifystamps=true&Microsoft_Azure_HybridData_Platform=test#home](https://portal.azure.com/?feature.canmodifystamps=true&Microsoft_Azure_HybridData_Platform=test#home)
+ - **Preview**: [https://portal.azure.com/?feature.canmodifystamps=true&Microsoft_Azure_HybridData_Platform=preview#home](https://portal.azure.com/?feature.canmodifystamps=true&Microsoft_Azure_HybridData_Platform=preview#home).
-Follow the instructions to [Create the Azure Arc data controller from Azure portal - Direct connectivity mode](create-data-controller-direct-azure-portal.md) except that when choosing a deployment profile, select **Custom template** in the **Kubernetes configuration template** drop-down. Set the repository to either `arcdata/test` or `arcdata/preview` as appropriate and enter the desired tag in the **Image tag** field. Fill out the rest of the custom cluster configuration template fields as normal.
+1. Follow the instructions to [Create the Azure Arc data controller from Azure portal - Direct connectivity mode](create-data-controller-direct-azure-portal.md) except that when choosing a deployment profile, select **Custom template** in the **Kubernetes configuration template** drop-down.
+1. Set the repository to either `arcdata/test` or `arcdata/preview` as appropriate. Enter the desired tag in the **Image tag** field.
+1. Fill out the rest of the custom cluster configuration template fields as normal.
Complete the rest of the wizard as normal.
azure-arc Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/release-notes.md
New for this release:
- Azure Arc-enabled SQL Managed Instance - Direct mode for failover groups is generally available az CLI
+ - Schedule the HA orchestrator replicas on different nodes when available
- Arc PostgreSQL - Ensure postgres extensions work per database/role - Arc PostgreSQL | Upload metrics/logs to Azure Monitor
+- Bug fixes and optimizations in the following areas:
+ - The individual create experience for deploying the Arc data controller has been removed because it set the auto upgrade parameter incorrectly. Use the all-in-one create experience instead, which creates the extension, custom location, and data controller, and sets all the parameters correctly. For details, see [Create Azure Arc data controller in direct connectivity mode using CLI](create-data-controller-direct-cli.md).
+ ## March 14, 2023 ### Image tag
azure-arc Ssh Arc Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/ssh-arc-overview.md
Title: (Preview) SSH access to Azure Arc-enabled servers description: Leverage SSH remoting to access and manage Azure Arc-enabled servers. Previously updated : 03/25/2022 Last updated : 04/12/2023
Authenticating with Azure AD credentials has additional requirements:
> The Virtual Machine Administrator Login and Virtual Machine User Login roles use `dataActions` and can be assigned at the management group, subscription, resource group, or resource scope. We recommend that you assign the roles at the management group, subscription, or resource level and not at the individual VM level. This practice avoids the risk of reaching the [Azure role assignments limit](../../role-based-access-control/troubleshooting.md#limits) per subscription. ### Availability
-SSH access to Arc-enabled servers is currently supported in the following regions:
-- eastus2euap, eastus, eastus2, westus2, southeastasia, westeurope, northeurope, westcentralus, southcentralus, uksouth, australiaeast, francecentral, japaneast, eastasia, koreacentral, westus3, westus, centralus, northcentralus.-
-### Supported operating systems
- - CentOS: CentOS 7, CentOS 8
- - RedHat Enterprise Linux (RHEL): RHEL 7.4 to RHEL 7.10, RHEL 8.3+
- - SUSE Linux Enterprise Server (SLES): SLES 12, SLES 15.1+
- - Ubuntu Server: Ubuntu Server 16.04 to Ubuntu Server 20.04
+SSH access to Arc-enabled servers is currently supported in all regions supported by Arc-enabled servers, with the following exceptions:
+ - Germany West Central
## Getting started
azure-cache-for-redis Quickstart Create Redis Enterprise https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/quickstart-create-redis-enterprise.md
Azure Cache for Redis is continually expanding into new regions. To check the av
| | - | -- | | **Subscription** | Drop down and select your subscription. | The subscription under which to create this new Azure Cache for Redis instance. | | **Resource group** | Drop down and select a resource group, or select **Create new** and enter a new resource group name. | Name for the resource group in which to create your cache and other resources. By putting all your app resources in one resource group, you can easily manage or delete them together. |
- | **DNS name** | Enter a name that is unique in the region. | The cache name must be a string between 1 and 63 characters that contain only numbers, letters, or hyphens. The name must start and end with a number or letter, and can't contain consecutive hyphens. Your cache instance's *host name* is *\<DNS name\>.\<Azure region\>.redisenterprise.cache.azure.net*. |
+ | **DNS name** | Enter a name that is unique in the region. | The cache name must be a string of 1 to 63 characters when _combined with the cache's region name_, and it can contain only numbers, letters, or hyphens. (If the cache name is fewer than 45 characters long, it should work in all currently available regions.) The name must start and end with a number or letter, and can't contain consecutive hyphens. Your cache instance's *host name* is *\<DNS name\>.\<Azure region\>.redisenterprise.cache.azure.net*. |
| **Location** | Drop down and select a location. | Enterprise tiers are available in selected Azure regions. | | **Cache type** | Drop down and select an *Enterprise* or *Enterprise Flash* tier and a size. | The tier determines the size, performance, and features that are available for the cache. |
azure-functions Create First Function Vs Code Csharp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/create-first-function-vs-code-csharp.md
In this section, you use Visual Studio Code to create a local Azure Functions pr
|**Provide a function name**|Type `HttpExample`.| |**Provide a namespace** | Type `My.Functions`. | |**Authorization level**|Choose `Anonymous`, which enables anyone to call your function endpoint. To learn about authorization level, see [Authorization keys](functions-bindings-http-webhook-trigger.md#authorization-keys).|
- |**Select how you would like to open your project**|Select `Add to workspace`.|
+ |**Select how you would like to open your project**|Select `Open in current window`.|
1. Visual Studio Code uses the provided information and generates an Azure Functions project with an HTTP trigger. You can view the local project files in the Explorer. For more information about the files that are created, see [Generated project files](functions-develop-vs-code.md?tabs=csharp#generated-project-files).
The next article depends on your chosen process model.
> [!div class="nextstepaction"] > [Connect to Azure Cosmos DB](functions-add-output-binding-cosmos-db-vs-code.md?pivots=programming-language-csharp&tabs=in-process) > [Connect to Azure Queue Storage](functions-add-output-binding-storage-queue-vs-code.md?pivots=programming-language-csharp&tabs=in-process)
+> [Connect to Azure SQL](functions-add-output-binding-azure-sql-vs-code.md?pivots=programming-language-csharp&tabs=in-process)
# [Isolated process](#tab/isolated-process)
azure-functions Create First Function Vs Code Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/create-first-function-vs-code-java.md
In this section, you use Visual Studio Code to create a local Azure Functions pr
| **Select the build tool for Java project** | Choose `Maven`. | |**Provide a function name**| Enter `HttpExample`.| |**Authorization level**| Choose `Anonymous`, which lets anyone call your function endpoint. For more information about the authorization level, see [Authorization keys](functions-bindings-http-webhook-trigger.md#authorization-keys).|
- |**Select how you would like to open your project**| Choose `Add to workspace`.|
+ |**Select how you would like to open your project**| Choose `Open in current window`.|
1. Visual Studio Code uses the provided information and generates an Azure Functions project with an HTTP trigger. You can view the local project files in the Explorer. For more information about the files that are created, see [Generated project files](functions-develop-vs-code.md?tabs=java#generated-project-files).
azure-functions Create First Function Vs Code Node https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/create-first-function-vs-code-node.md
In this section, you use Visual Studio Code to create a local Azure Functions pr
|**Select a template for your project's first function**|Choose `HTTP trigger`.| |**Provide a function name**|Type `HttpExample`.| |**Authorization level**|Choose `Anonymous`, which enables anyone to call your function endpoint. To learn about authorization level, see [Authorization keys](functions-bindings-http-webhook-trigger.md#authorization-keys).|
- |**Select how you would like to open your project**|Choose `Add to workspace`.|
+ |**Select how you would like to open your project**|Choose `Open in current window`.|
Using this information, Visual Studio Code generates an Azure Functions project with an HTTP trigger. You can view the local project files in the Explorer. To learn more about files that are created, see [Generated project files](functions-develop-vs-code.md?tabs=javascript#generated-project-files). ::: zone-end
In this section, you use Visual Studio Code to create a local Azure Functions pr
|**Select a JavaScript programming model**|Choose `Model V4 (Preview)`| |**Select a template for your project's first function**|Choose `HTTP trigger`.| |**Provide a function name**|Type `HttpExample`.|
- |**Select how you would like to open your project**|Choose `Add to workspace`|
+ |**Select how you would like to open your project**|Choose `Open in current window`|
Using this information, Visual Studio Code generates an Azure Functions project with an HTTP trigger. You can view the local project files in the Explorer. To learn more about files that are created, see [Azure Functions JavaScript developer guide](functions-reference-node.md). ::: zone-end
You have used [Visual Studio Code](functions-develop-vs-code.md?tabs=javascript)
> [!div class="nextstepaction"] > [Connect to Azure Cosmos DB](functions-add-output-binding-cosmos-db-vs-code.md?pivots=programming-language-javascript) > [Connect to Azure Queue Storage](functions-add-output-binding-storage-queue-vs-code.md?pivots=programming-language-javascript)
+> [Connect to Azure SQL](functions-add-output-binding-azure-sql-vs-code.md?pivots=programming-language-javascript)
[Azure Functions Core Tools]: functions-run-local.md [Azure Functions extension for Visual Studio Code]: https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurefunctions
azure-functions Create First Function Vs Code Other https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/create-first-function-vs-code-other.md
In this section, you use Visual Studio Code to create a local Azure Functions cu
|**Select a template for your project's first function**|Choose `HTTP trigger`.| |**Provide a function name**|Type `HttpExample`.| |**Authorization level**|Choose `Anonymous`, which enables anyone to call your function endpoint. To learn about authorization level, see [Authorization keys](functions-bindings-http-webhook-trigger.md#authorization-keys).|
- |**Select how you would like to open your project**|Choose `Add to workspace`.|
+ |**Select how you would like to open your project**|Choose `Open in current window`.|
Using this information, Visual Studio Code generates an Azure Functions project with an HTTP trigger. You can view the local project files in the Explorer.
azure-functions Create First Function Vs Code Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/create-first-function-vs-code-powershell.md
In this section, you use Visual Studio Code to create a local Azure Functions pr
|**Select a template for your project's first function**|Choose `HTTP trigger`.| |**Provide a function name**|Type `HttpExample`.| |**Authorization level**|Choose `Anonymous`, which enables anyone to call your function endpoint. To learn about authorization level, see [Authorization keys](functions-bindings-http-webhook-trigger.md#authorization-keys).|
- |**Select how you would like to open your project**|Choose `Add to workspace`.|
+ |**Select how you would like to open your project**|Choose `Open in current window`.|
Using this information, Visual Studio Code generates an Azure Functions project with an HTTP trigger. You can view the local project files in the Explorer. To learn more about files that are created, see [Generated project files](functions-develop-vs-code.md?tabs=powershell#generated-project-files).
azure-functions Create First Function Vs Code Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/create-first-function-vs-code-python.md
In this section, you use Visual Studio Code to create a local Azure Functions pr
|**Select a template for your project's first function**| Choose `HTTP trigger`.| |**Provide a function name**| Enter `HttpExample`.| |**Authorization level**| Choose `Anonymous`, which lets anyone call your function endpoint. For more information about the authorization level, see [Authorization keys](functions-bindings-http-webhook-trigger.md#authorization-keys).|
- |**Select how you would like to open your project**| Choose `Add to workspace`.|
+ |**Select how you would like to open your project**| Choose `Open in current window`.|
4. Visual Studio Code uses the provided information and generates an Azure Functions project with an HTTP trigger. You can view the local project files in the Explorer. For more information about the files that are created, see [Generated project files](functions-develop-vs-code.md?tabs=python#generated-project-files). ::: zone-end
In this section, you use Visual Studio Code to create a local Azure Functions pr
|--|--| |**Select a language**| Choose `Python (Programming Model V2)`.| |**Select a Python interpreter to create a virtual environment**| Choose your preferred Python interpreter. If an option isn't shown, type in the full path to your Python binary.|
- |**Select how you would like to open your project**| Choose `Add to workspace`.|
+ |**Select how you would like to open your project**| Choose `Open in current window`.|
4. Visual Studio Code uses the provided information and generates an Azure Functions project.
You have used [Visual Studio Code](functions-develop-vs-code.md?tabs=python) to
> [!div class="nextstepaction"] > [Connect to an Azure Storage queue](functions-add-output-binding-storage-queue-vs-code.md?pivots=programming-language-python)
+> [Connect to Azure SQL](functions-add-output-binding-azure-sql-vs-code.md?pivots=programming-language-python)
[Having issues? Let us know.](https://aka.ms/python-functions-qs-survey)
azure-functions Create First Function Vs Code Typescript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/create-first-function-vs-code-typescript.md
In this section, you use Visual Studio Code to create a local Azure Functions pr
|**Select a template for your project's first function**|Choose `HTTP trigger`.| |**Provide a function name**|Type `HttpExample`.| |**Authorization level**|Choose `Anonymous`, which enables anyone to call your function endpoint. To learn about authorization level, see [Authorization keys](functions-bindings-http-webhook-trigger.md#authorization-keys).|
- |**Select how you would like to open your project**|Choose `Add to workspace`.|
+ |**Select how you would like to open your project**|Choose `Open in current window`.|
Using this information, Visual Studio Code generates an Azure Functions project with an HTTP trigger. You can view the local project files in the Explorer. To learn more about files that are created, see [Generated project files](functions-develop-vs-code.md?tabs=typescript#generated-project-files). ::: zone-end
In this section, you use Visual Studio Code to create a local Azure Functions pr
|**Select a TypeScript programming model**|Choose `Model V4 (Preview)`| |**Select a template for your project's first function**|Choose `HTTP trigger`.| |**Provide a function name**|Type `HttpExample`.|
- |**Select how you would like to open your project**|Choose `Add to workspace`|
+ |**Select how you would like to open your project**|Choose `Open in current window`|
Using this information, Visual Studio Code generates an Azure Functions project with an HTTP trigger. You can view the local project files in the Explorer. To learn more about files that are created, see [Azure Functions TypeScript developer guide](functions-reference-node.md). ::: zone-end
azure-functions Durable Functions Best Practice Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/durable-functions-best-practice-reference.md
+
+ Title: Durable Functions best practices and diagnostic tools
+description: Learn about the best practices when using Durable Functions and the various tools available for diagnosing problems.
++ Last updated : 02/15/2023+++
+# Durable Functions best practices and diagnostic tools
+
+This article details some best practices when using Durable Functions. It also describes various tools to help diagnose problems during development, testing, and production use.
+
+## Best practices
+
+### Use the latest version of the Durable Functions extension and SDK
+
+There are two components that a function app uses to execute Durable Functions. One is the *Durable Functions SDK* that allows you to write orchestrator, activity, and entity functions using your target programming language. The other is the *Durable extension*, which is the runtime component that actually executes the code. With the exception of .NET in-process apps, the SDK and the extension are versioned independently.
+
+Staying up to date with the latest extension and SDK ensures your application benefits from the latest performance improvements, features, and bug fixes. Upgrading to the latest versions also ensures that Microsoft can collect the latest diagnostic telemetry to help accelerate the investigation process when you open a support case with Azure.
+
+* See [Upgrade durable functions extension version](durable-functions-extension-upgrade.md) for instructions on getting the latest extension version.
+* To ensure you're using the latest version of the SDK, check the package manager of the language you're using.
+
+### Adhere to Durable Functions [code constraints](durable-functions-code-constraints.md)
+
+The [replay](durable-functions-orchestrations.md#reliability) behavior of orchestrator code creates constraints on the type of code that you can write in an orchestrator function. An example of a constraint is that your orchestrator function must use deterministic APIs so that each time it's replayed, it produces the same result.
+
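+For example, a common way to stay within these constraints is to delegate non-deterministic work (GUIDs, clock reads, outbound I/O) to activity functions. The following PowerShell orchestrator is a minimal sketch; `NewOrderId` and `ProcessOrder` are hypothetical activity functions:
+
+```powershell
+# Orchestrator function (run.ps1)
+param($Context)
+
+# Non-deterministic work happens inside activities, so every replay of the
+# orchestrator sees the same values that were recorded in the history.
+$orderId = Invoke-DurableActivity -FunctionName 'NewOrderId'
+$result  = Invoke-DurableActivity -FunctionName 'ProcessOrder' -Input $orderId
+$result
+```
+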
+> [!NOTE]
+> The Durable Functions Roslyn Analyzer is a live code analyzer that guides C# users to adhere to Durable Functions specific code constraints. See [Durable Functions Roslyn Analyzer](durable-functions-roslyn-analyzer.md) for instructions on how to enable it on Visual Studio and Visual Studio Code.
+
+### Familiarize yourself with your programming language's Azure Functions performance settings
+
+_Using default settings_, the language runtime you select may impose strict concurrency restrictions on your functions. For example, it might allow only one function to execute at a time on a given VM. These restrictions can usually be relaxed by _fine tuning_ the concurrency and performance settings of your language. If you're looking to optimize the performance of your Durable Functions application, you'll need to familiarize yourself with these settings.
+
+Below is a non-exhaustive list of some of the languages that often benefit from fine tuning their performance and concurrency settings, and their guidelines for doing so.
+
+* [JavaScript](../functions-reference-node.md#scaling-and-concurrency)
+* [PowerShell](../functions-reference-powershell.md#concurrency)
+* [Python](../python-scale-performance-reference.md)
+
+### Guarantee unique Task Hub names per app
+
+Multiple Durable Functions apps can share the same storage account. By default, the name of the app is used as the task hub name, which ensures that accidental sharing of task hubs won't happen. If you need to explicitly configure task hub names for your apps in host.json, you must ensure that the names are [*unique*](durable-functions-task-hubs.md#multiple-function-apps). Otherwise, the apps compete for messages, which could result in undefined behavior, including orchestrations getting unexpectedly "stuck" in the Pending or Running state.
+
+The only exception is if you deploy *copies* of the same app in [multiple regions](durable-functions-disaster-recovery-geo-distribution.md); in this case, you can use the same task hub for the copies.
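+
+For example, you can set an explicit task hub name in the `durableTask` section of host.json. The following is a minimal sketch; `MyUniqueTaskHub` is a placeholder value:
+
+```json
+{
+  "version": "2.0",
+  "extensions": {
+    "durableTask": {
+      "hubName": "MyUniqueTaskHub"
+    }
+  }
+}
+```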
+
+### Follow guidance when deploying code changes to running orchestrators
+
+It's inevitable that functions will be added, removed, and changed over the lifetime of an application. Examples of [common breaking changes](durable-functions-versioning.md) include changing activity or entity function signatures and changing orchestrator logic. These changes are a problem when they affect orchestrations that are still running. If deployed incorrectly, code changes could lead to orchestrations failing with a non-deterministic error, getting stuck indefinitely, performance degradation, etc. Refer to recommended [mitigation strategies](durable-functions-versioning.md#mitigation-strategies) when making code changes that may impact running orchestrations.
+
+### Keep function inputs and outputs as small as possible
+
+You can run into memory issues if you provide large inputs and outputs to and from Durable Functions APIs.
+
+Inputs and outputs to Durable Functions APIs are serialized into the orchestration history. This means that large inputs and outputs can, over time, greatly contribute to an orchestrator history growing unbounded, which risks causing memory exceptions during [replay](durable-functions-orchestrations.md#reliability).
+
+To mitigate the impact of large inputs and outputs to APIs, you may choose to delegate some work to sub-orchestrators. This helps load balance the history memory burden from a single orchestrator to multiple ones, therefore keeping the memory footprint of individual histories small.
+
+That said, the best practice for dealing with _large_ data is to keep it in external storage and to materialize it inside Activities only when needed. With this approach, instead of passing the data itself as inputs and outputs of Durable Functions APIs, you can pass a lightweight identifier that lets you retrieve the data from external storage inside your Activities when needed.
+
+### Fine tune your Durable Functions concurrency settings
+
+A single worker instance can execute multiple work items concurrently to increase efficiency. However, processing too many work items concurrently risks exhausting resources like CPU capacity and network connections. In many cases, this shouldn't be a concern because scaling and limiting work items are handled automatically for you. That said, if you're experiencing performance issues (such as orchestrators taking too long to finish or getting stuck in the Pending state) or are doing performance testing, you can [configure concurrency limits](durable-functions-perf-and-scale.md#configuration-of-throttles) in the host.json file.
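+
+The following host.json fragment is a sketch of what such throttle settings look like; the values shown are illustrative and should be tuned based on your own testing:
+
+```json
+{
+  "version": "2.0",
+  "extensions": {
+    "durableTask": {
+      "maxConcurrentActivityFunctions": 10,
+      "maxConcurrentOrchestratorFunctions": 5
+    }
+  }
+}
+```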
+
+> [!NOTE]
+> This is not a replacement for fine-tuning the performance and concurrency settings of your language runtime in Azure Functions. The Durable Functions concurrency settings only determine how much work can be assigned to a given VM at a time, but it does not determine the degree of parallelism in processing that work inside the VM. The latter requires fine-tuning the language runtime performance settings.
+
+
+## Diagnostic tools
+
+There are several tools available to help you diagnose problems.
+
+### Durable Functions and Durable Task Framework Logs
+
+#### Durable Functions Extension
+The Durable extension emits tracking events that allow you to trace the end-to-end execution of an orchestration. These tracking events can be found and queried using the [Application Insights Analytics](../../azure-monitor/logs/log-query-overview.md) tool in the Azure portal. The verbosity of tracking data emitted can be configured in the `logger` (Functions 1.x) or `logging` (Functions 2.0) section of the host.json file. See [configuration details](durable-functions-diagnostics.md#functions-10).
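+
+For example, in Functions 2.0 and later you can adjust the verbosity of the Durable Task tracking events with a `logging` entry in host.json similar to the following sketch (the log level shown is illustrative):
+
+```json
+{
+  "version": "2.0",
+  "logging": {
+    "logLevel": {
+      "Host.Triggers.DurableTask": "Information"
+    }
+  }
+}
+```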
+
+#### Durable Task Framework
+Starting in v2.3.0 of the Durable extension, logs emitted by the underlying Durable Task Framework (DTFx) are also available for collection. See [details on how to enable these logs](durable-functions-diagnostics.md#durable-task-framework-logging).
+
+### Azure portal
+
+#### Diagnose and solve problems
+Azure Functions App Diagnostics is a useful resource in the Azure portal for monitoring and diagnosing potential issues in your application. It also provides suggestions to help resolve problems based on the diagnosis. See [Azure Functions app diagnostics](function-app-diagnostics.md).
+
+#### Durable Functions Orchestration traces
+The Azure portal provides orchestration trace details to help you understand the status of each orchestration instance and trace the end-to-end execution. When you look at the list of functions inside your Azure Functions app, you see a **Monitor** column that contains links to the traces. You need to have Application Insights enabled for your app to get this information.
+
+### Durable Functions Monitor Extension
+
+This is a [Visual Studio Code extension](https://github.com/microsoft/DurableFunctionsMonitor) that provides a UI for monitoring, managing, and debugging your orchestration instances.
+
+### Roslyn Analyzer
+
+The Durable Functions Roslyn Analyzer is a live code analyzer that guides C# users to adhere to Durable Functions specific [code constraints](durable-functions-code-constraints.md). See [Durable Functions Roslyn Analyzer](durable-functions-roslyn-analyzer.md) for instructions on how to enable it on Visual Studio and Visual Studio Code.
++
+## Support
+
+For questions and support, you may open an issue in one of the GitHub repos below. When reporting a bug in Azure, include information such as affected instance IDs, time ranges in UTC showing the problem, the application name (if possible), and the deployment region; this information greatly speeds up investigations.
+- [Durable Functions extension and .NET in-process SDK](https://github.com/Azure/azure-functions-durable-extension/issues)
+- [.NET isolated SDK](https://github.com/microsoft/durabletask-dotnet/issues)
+- [Durable Functions for Java](https://github.com/microsoft/durabletask-java/issues)
+- [Durable Functions for JavaScript](https://github.com/Azure/azure-functions-durable-js/issues)
+- [Durable Functions for Python](https://github.com/Azure/azure-functions-durable-python/issues)
azure-functions Durable Functions Extension Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/durable-functions-extension-upgrade.md
+
+ Title: Upgrade Durable Functions extension version
+description: Learn why it's important to use the latest version of the Durable Functions extension and how to upgrade to the latest.
++ Last updated : 02/15/2023+++
+# Upgrade Durable Functions extension version
++
+Many issues users experience with Durable Functions can be resolved simply by upgrading to the latest version of the extension, which often contains important bug fixes and performance improvements. You can follow the instructions in this article to get the latest version of the Durable Functions extension.
+
+Changes to the extension can be found in the [Release page](https://github.com/Azure/azure-functions-durable-extension/releases) of the `Azure/azure-functions-durable-extension` repo. You can also subscribe to notifications for new extension releases by going to the **Releases** page, selecting **Watch**, then **Custom**, and finally selecting the **Releases** filter:
+++
+## Reference the latest NuGet packages (.NET apps only)
+.NET apps can get the latest version of the Durable Functions extension by referencing the latest NuGet package:
+
+* [.NET in-process worker](https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.DurableTask)
+* [.NET isolated worker](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker.Extensions.DurableTask)
+
+If you're using the Netherite or MSSQL [storage providers](durable-functions-storage-providers.md) (instead of Azure Storage), you need to reference one of the following:
+
+* [Netherite, in-process worker](https://www.nuget.org/packages/Microsoft.Azure.DurableTask.Netherite.AzureFunctions)
+* [Netherite, isolated worker](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker.Extensions.DurableTask.Netherite)
+* [MSSQL, in-process worker](https://www.nuget.org/packages/Microsoft.DurableTask.SqlServer.AzureFunctions)
+* [MSSQL, isolated worker](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker.Extensions.DurableTask.SqlServer)
+
+## Upgrade the extension bundle
+[Extension bundles](../functions-bindings-register.md#extension-bundles) provide an easy and convenient way for non-.NET function apps to reference and use various Azure Function triggers and bindings. For example, if you need to send a message to Event Hubs every time your function is triggered, you can use the Event Hubs extension to gain access to Event Hubs bindings. The Durable Functions extension is also included in each version of extension bundles. Extension bundles are automatically configured in host.json when creating a function app using any of the supported development tools.
+
+Most non-.NET applications rely on extension bundles to gain access to various triggers and bindings. The [latest bundle release](https://github.com/Azure/azure-functions-extension-bundles) often contains the latest version of the Durable Functions extension with critical bug fixes and performance improvements. Therefore, it's important that your app uses the latest version of extension bundles. You can check your host.json file to see whether the version range you're using includes the latest extension bundle version.
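+
+For example, a host.json entry similar to the following sketch pins the app to a bundle version range; the range shown is illustrative, so use one that includes the latest release:
+
+```json
+{
+  "version": "2.0",
+  "extensionBundle": {
+    "id": "Microsoft.Azure.Functions.ExtensionBundle",
+    "version": "[4.*, 5.0.0)"
+  }
+}
+```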
+
+## Manually upgrade the Durable Functions extension
+If upgrading the extension bundle didn't resolve your problem, and you noticed a newer release of the Durable Functions extension containing a potential fix to your problem, then you can try to manually upgrade the extension itself. This is intended only for advanced scenarios or when time-sensitive fixes are necessary, because there are many drawbacks to manually managing extensions. For example, you may have to deal with .NET errors when the extensions you use are incompatible with each other. You also need to manually upgrade extensions to get the latest fixes and patches instead of getting them automatically through the extension bundle.
+
+First, remove the `extensionBundle` section from your host.json file.
+
+Install the `dotnet` CLI if you don't already have it. You can get it from this [page](https://www.microsoft.com/net/download/).
+
+Because applications normally use more than one extension, it's recommended that you run the following command to manually install the latest versions of all extensions supported by extension bundles:
+
+```console
+func extensions install
+```
+
+However, if you **only** wish to install the latest Durable Functions extension release, you would run the following command:
+
+```console
+func extensions install Microsoft.Azure.WebJobs.Extensions.DurableTask -v <version>
+```
+
+For example:
+
+```console
+func extensions install Microsoft.Azure.WebJobs.Extensions.DurableTask -v 2.9.1
+```
++++
azure-functions Durable Functions Roslyn Analyzer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/durable-functions-roslyn-analyzer.md
+
+ Title: Durable Functions Roslyn Analyzer (C# only)
+description: Learn about how to use the Roslyn Analyzer to help adhere to Durable Functions specific code constraints.
++ Last updated : 02/15/2023+++
+# Durable Functions Roslyn Analyzer (C# only)
+
+The Durable Functions Roslyn Analyzer is a live code analyzer that guides C# users to adhere to Durable Functions specific [code constraints](./durable-functions-code-constraints.md). This analyzer is enabled by default to check your Durable Functions code and generates warnings and errors when violations are found. Currently, the analyzer is only supported in the .NET in-process worker.
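+
+As a sketch of the kind of issue the analyzer flags, consider an orchestrator function that calls a non-deterministic API. The function below is an illustrative example, not an exhaustive list of analyzer checks, and the `SayHello` activity is a hypothetical placeholder:
+
+```cs
+using System;
+using System.Threading.Tasks;
+using Microsoft.Azure.WebJobs;
+using Microsoft.Azure.WebJobs.Extensions.DurableTask;
+
+public static class OrchestratorExample
+{
+    [FunctionName("OrchestratorExample")]
+    public static async Task Run(
+        [OrchestrationTrigger] IDurableOrchestrationContext context)
+    {
+        // The analyzer warns here: orchestrator code must be deterministic,
+        // so DateTime.Now shouldn't be used inside an orchestrator.
+        DateTime unsafeNow = DateTime.Now;
+
+        // Replay-safe alternative provided by the orchestration context.
+        DateTime safeNow = context.CurrentUtcDateTime;
+
+        await context.CallActivityAsync("SayHello", safeNow);
+    }
+}
+```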
+
+For more detailed information on the analyzer (improvements, releases, bug fixes, etc.), see its [release notes page](https://github.com/Azure/azure-functions-durable-extension/releases/tag/Analyzer-v0.2.0).
++
+## Configuration
+
+### Visual Studio
+
+For the best experience, you'll want to enable full solution analysis in your Visual Studio settings. This can be done by going to **Tools** -> **Options** -> **Text Editor** -> **C#** -> **Advanced** -> **"Entire solution"**:
++
+Depending on the version of Visual Studio, you may also see "Enable full solution analysis":
++
+To disable the analyzer, refer to these [instructions](/visualstudio/code-quality/in-source-suppression-overview).
+
+### Visual Studio Code
+
+Open **Settings** by selecting the wheel icon in the lower left corner, then search for "Roslyn". "Enable Roslyn Analyzers" should show up as one of the results. Check the enable support box.
+
azure-functions Function App Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/function-app-diagnostics.md
+
+ Title: Azure Functions app diagnostics
+description: Learn how to use Azure Functions diagnostic feature on Azure portal to diagnose problems with Durable Functions.
++ Last updated : 02/15/2023+++
+# Azure Functions app diagnostics
+
+Azure Functions App Diagnostics is a useful resource in the Azure portal for monitoring and diagnosing potential issues in your Durable Functions application. Not only does it help diagnose problems, but it also provides potential solutions and/or relevant documentation to help you resolve issues faster.
+
+## How to use Azure Functions app diagnostics
+
+1. Go to your Function App resource. In the left menu, select **Diagnose and solve problems**.
+
+2. Search for "Durable Functions" and select the result.
+
+ :::image type="content" source="media/durable-functions-best-practice/search-for-detector.png" alt-text="Screenshot showing how to search for Durable Functions detector.":::
+
+3. You're now inside the Durable Functions detector, which checks for common problems Durable Functions apps tend to have. The detector also gives you links to tools and documentation you might find helpful. Go through the various insights in the detector to learn about the application's health. Some examples of what the detector tells you include the Durable Functions extension version your app is using, performance issues, and any errors or warnings. If there are issues, you'll see suggestions on how to mitigate and resolve them.
+
+ :::image type="content" source="media/durable-functions-best-practice/durable-functions-detector.png" alt-text="Screenshot of Durable Functions detector.":::
+
+## Other useful detectors
+On the left side of the window, there's a list of detectors designed to check for different problems. This section highlights a few.
+
+The *Functions App Down or Report Errors* detector pulls results from different detectors checking key areas of your application that may be the cause of your application being down or reporting errors. The screenshot below shows the checks performed (not all 15 are captured in the screenshot) and the two issues requiring attention.
+++
+Maximizing *High CPU Analysis* shows that one app is causing high CPU usage.
++
+The following is suggested when you select **View Solutions**. If you decide to follow the second option, you can restart your site by selecting the button.
++
+
+Maximizing *Memory Analysis* shows the following warning and graph. (Note that there's more content not captured in the screenshot.)
++
+The following is suggested when you select **View Solutions**. You can scale up by selecting a button.
+
azure-functions Functions Add Output Binding Azure Sql Vs Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-add-output-binding-azure-sql-vs-code.md
+
+ Title: Connect Azure Functions to Azure SQL Database using Visual Studio Code
+description: Learn how to connect Azure Functions to Azure SQL Database by adding an output binding to your Visual Studio Code project.
Last updated : 4/7/2023++++
+zone_pivot_groups: programming-languages-set-functions-temp
+ms.devlang: csharp, javascript
++
+# Connect Azure Functions to Azure SQL Database using Visual Studio Code
++
+This article shows you how to use Visual Studio Code to connect [Azure SQL Database](/azure/azure-sql/database/sql-database-paas-overview) to the function you created in the previous quickstart article. The output binding that you add to this function writes data from the HTTP request to a table in Azure SQL Database.
+
+Before you begin, you must complete the [quickstart: Create a C# function in Azure using Visual Studio Code](create-first-function-vs-code-csharp.md). If you already cleaned up resources at the end of that article, go through the steps again to recreate the function app and related resources in Azure.
+Before you begin, you must complete the [quickstart: Create a JavaScript function in Azure using Visual Studio Code](create-first-function-vs-code-node.md). If you already cleaned up resources at the end of that article, go through the steps again to recreate the function app and related resources in Azure.
+Before you begin, you must complete the [quickstart: Create a Python function in Azure using Visual Studio Code](create-first-function-vs-code-python.md). If you already cleaned up resources at the end of that article, go through the steps again to recreate the function app and related resources in Azure.
+
+More details on the settings for [Azure SQL bindings and trigger for Azure Functions](functions-bindings-azure-sql.md) are available in the Azure Functions documentation.
++
+## Create your Azure SQL Database
+
+1. Follow the [Azure SQL Database create quickstart](/azure/azure-sql/database/single-database-create-quickstart) to create a serverless Azure SQL Database. The database can be empty or created from the sample dataset AdventureWorksLT.
+
+1. Provide the following information at the prompts:
+
+ |Prompt| Selection|
+ |--|--|
+ |**Resource group**|Choose the resource group where you created your function app in the [previous article](./create-first-function-vs-code-csharp.md). |
+ |**Database name**|Enter `mySampleDatabase`.|
+ |**Server name**|Enter a unique name for your server. We can't provide an exact server name to use because server names must be globally unique for all servers in Azure, not just unique within a subscription. |
+ |**Authentication method**|Select **SQL Server authentication**.|
+ |**Server admin login**|Enter `azureuser`.|
+ |**Password**|Enter a password that meets the complexity requirements.|
+ |**Allow Azure services and resources to access this server**|Select **Yes**.|
+
+1. Once the creation has completed, navigate to the database blade in the Azure portal, and, under **Settings**, select **Connection strings**. Copy the **ADO.NET** connection string for **SQL authentication**. Paste the connection string into a temporary document for later use.
+
+ :::image type="content" source="./media/functions-add-output-binding-azure-sql-vs-code/adonet-connection-string.png" alt-text="Screenshot of copying the Azure SQL Database connection string in the Azure portal." border="true":::
+
+1. Create a table to store the data from the HTTP request. In the Azure portal, navigate to the database blade and select **Query editor**. Enter the following query to create a table named `dbo.ToDo`:
+
+ :::code language="sql" source="~/functions-sql-todo-sample/sql/create.sql" range="1-7":::
+
+1. Verify that your Azure Function will be able to access the Azure SQL Database by checking the [server's firewall settings](/azure/azure-sql/database/network-access-controls-overview#allow-azure-services). Navigate to the **server blade** on the Azure portal, and under **Security**, select **Networking**. The exception for **Allow Azure services and resources to access this server** should be checked.
+
+ :::image type="content" source="./media/functions-add-output-binding-azure-sql-vs-code/manage-server-firewall.png" alt-text="Screenshot of checking the Azure SQL Database firewall settings in the Azure portal." border="true":::
+
+## Update your function app settings
+
+In the [previous quickstart article](./create-first-function-vs-code-csharp.md), you created a function app in Azure. In this article, you update your app to write data to the Azure SQL Database you've just created. To connect to your Azure SQL Database, you must add its connection string to your app settings. You then download the new setting to your local.settings.json file so you can connect to your Azure SQL Database when running locally.
+
+1. Edit the connection string in the temporary document you created earlier. Replace the value of `Password` with the password you used when creating the Azure SQL Database. Copy the updated connection string.
+
+1. Press <kbd>Ctrl/Cmd+Shift+P</kbd> to open the command palette, then search for and run the command `Azure Functions: Add New Setting...`.
+
+1. Choose the function app you created in the previous article. Provide the following information at the prompts:
+
+ |Prompt| Selection|
+ |--|--|
+ |**Enter new app setting name**| Type `SqlConnectionString`.|
+ |**Enter value for "SqlConnectionString"**| Paste the connection string of your Azure SQL Database you just copied.|
+
+ This creates an application setting named `SqlConnectionString` in your function app in Azure. Now, you can download this setting to your local.settings.json file.
+
+1. Press <kbd>Ctrl/Cmd+Shift+P</kbd> again to open the command palette, then search for and run the command `Azure Functions: Download Remote Settings...`.
+
+1. Choose the function app you created in the previous article. Select **Yes to all** to overwrite the existing local settings.
+
+This downloads all of the settings from Azure to your local project, including the new connection string setting. Most of the downloaded settings aren't used when running locally.
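+
+After the download, your local.settings.json file contains the new connection string alongside your other app settings. A trimmed sketch with placeholder values (the exact entries depend on your project and language) might look like this:
+
+```json
+{
+  "IsEncrypted": false,
+  "Values": {
+    "AzureWebJobsStorage": "<storage-connection-string>",
+    "FUNCTIONS_WORKER_RUNTIME": "<language-worker-runtime>",
+    "SqlConnectionString": "<azure-sql-connection-string>"
+  }
+}
+```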
+
+## Register binding extensions
+
+Because you're using an Azure SQL output binding, you must have the corresponding bindings extension installed before you run the project.
++
+With the exception of HTTP and timer triggers, bindings are implemented as extension packages. Run the following [dotnet add package](/dotnet/core/tools/dotnet-add-package) command in the Terminal window to add the Azure SQL extension package to your project.
+
+# [In-process](#tab/in-process)
+```bash
+dotnet add package Microsoft.Azure.WebJobs.Extensions.Sql
+```
+# [Isolated process](#tab/isolated-process)
+```bash
+dotnet add package Microsoft.Azure.Functions.Worker.Extensions.Sql
+```
+++
+Your project has been configured to use [extension bundles](functions-bindings-register.md#extension-bundles), which automatically installs a predefined set of extension packages.
+
+Extension bundles usage is enabled in the host.json file at the root of the project, which appears as follows:
+
+```json
+{
+ "version": "2.0",
+ "extensionBundle": {
+ "id": "Microsoft.Azure.Functions.ExtensionBundle",
+ "version": "[4.*, 5.0.0)"
+ }
+}
+```
+
+Now, you can add the Azure SQL output binding to your project.
+
+## Add an output binding
+
+In Functions, each type of binding requires a `direction`, `type`, and a unique `name` to be defined in the function.json file. The way you define these attributes depends on the language of your function app.
++
+Open the *HttpExample.cs* project file and add the following `ToDoItem` class, which defines the object that is written to the database:
++
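+
+For reference, a minimal version of such a class might look like the following sketch. The property names here are an assumption based on the `dbo.ToDo` column names used elsewhere in this article:
+
+```cs
+public class ToDoItem
+{
+    public string id { get; set; }
+    public string title { get; set; }
+    public bool completed { get; set; }
+    public string url { get; set; }
+}
+```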
+In a C# class library project, the bindings are defined as binding attributes on the function method. The *function.json* file required by Functions is then auto-generated based on these attributes.
+
+# [In-process](#tab/in-process)
+Open the *HttpExample.cs* project file and add the following parameter to the `Run` method definition:
++
+The `toDoItems` parameter is an `IAsyncCollector<ToDoItem>` type, which represents a collection of ToDo items that are written to your Azure SQL Database when the function completes. Specific attributes indicate the names of the database table (`dbo.ToDo`) and the connection string for your Azure SQL Database (`SqlConnectionString`).
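+
+Based on the complete function shown later in this section, the added parameter looks like this:
+
+```cs
+[Sql(commandText: "dbo.ToDo", connectionStringSetting: "SqlConnectionString")] IAsyncCollector<ToDoItem> toDoItems,
+```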
+
+# [Isolated process](#tab/isolated-process)
+
+Open the *HttpExample.cs* project file and add the following output type class, which defines the combined objects that will be output from our function for both the HTTP response and the SQL output:
+
+```cs
+public static class OutputType
+{
+ [SqlOutput("dbo.ToDo", connectionStringSetting: "SqlConnectionString")]
+ public ToDoItem ToDoItem { get; set; }
+ public HttpResponseData HttpResponse { get; set; }
+}
+```
+
+Add a using statement for the `Microsoft.Azure.Functions.Worker.Extensions.Sql` library at the top of the file:
+
+```cs
+using Microsoft.Azure.Functions.Worker.Extensions.Sql;
+```
+++++
+Binding attributes are defined directly in the function.json file. Depending on the binding type, additional properties may be required. The [Azure SQL output configuration](./functions-bindings-azure-sql-output.md#configuration) describes the fields required for an Azure SQL output binding.
+
+<!--The extension makes it easy to add bindings to the function.json file.
+
+To create a binding, right-click (Ctrl+click on macOS) the `function.json` file in your HttpTrigger folder and choose **Add binding...**. Follow the prompts to define the following binding properties for the new binding:
+
+| Prompt | Value | Description |
+| -- | -- | -- |
+| **Select binding direction** | `out` | The binding is an output binding. |
+| **Select binding with direction "out"** | `Azure SQL` | The binding is an Azure SQL binding. |
+| **The name used to identify this binding in your code** | `toDoItems` | Name that identifies the binding parameter referenced in your code. |
+| **The Azure SQL table where data will be written** | `dbo.ToDo` | The name of the Azure SQL table. |
+| **Select setting from "local.setting.json"** | `SqlConnectionString` | The name of an application setting that contains the connection string for the Azure SQL database. |
+
+A binding is added to the `bindings` array in your function.json, which should look like the following after removing any `undefined` values present. -->
+
+Add the following to the `bindings` array in your function.json.
+
+```json
+{
+ "type": "sql",
+ "direction": "out",
+ "name": "toDoItems",
+ "commandText": "dbo.ToDo",
+ "connectionStringSetting": "SqlConnectionString"
+}
+```
+++
+The way that you define the new binding depends on your Python programming model.
+
+# [v1](#tab/v1)
+
+Binding attributes are defined directly in the function.json file. Depending on the binding type, additional properties may be required. The [Azure SQL output configuration](./functions-bindings-azure-sql-output.md#configuration) describes the fields required for an Azure SQL output binding.
+
+<!--The extension makes it easy to add bindings to the function.json file.
+
+To create a binding, right-click (Ctrl+click on macOS) the `function.json` file in your HttpTrigger folder and choose **Add binding...**. Follow the prompts to define the following binding properties for the new binding:
+
+| Prompt | Value | Description |
+| -- | -- | -- |
+| **Select binding direction** | `out` | The binding is an output binding. |
+| **Select binding with direction "out"** | `Azure SQL` | The binding is an Azure SQL binding. |
+| **The name used to identify this binding in your code** | `toDoItems` | Name that identifies the binding parameter referenced in your code. |
+| **The Azure SQL table where data will be written** | `dbo.ToDo` | The name of the Azure SQL table. |
+| **Select setting from "local.setting.json"** | `SqlConnectionString` | The name of an application setting that contains the connection string for the Azure SQL database. |
+
+A binding is added to the `bindings` array in your function.json, which should look like the following after removing any `undefined` values present. -->
+
+Add the following to the `bindings` array in your function.json.
+
+```json
+{
+ "type": "sql",
+ "direction": "out",
+ "name": "toDoItems",
+ "commandText": "dbo.ToDo",
+ "connectionStringSetting": "SqlConnectionString"
+}
+```
+
+# [v2](#tab/v2)
+
+Binding attributes are defined directly in the *function_app.py* file. You use the `generic_output_binding` decorator to add an [Azure SQL output binding](./functions-reference-python.md#outputs):
+
+```python
+@app.generic_output_binding(arg_name="toDoItems", type="sql", CommandText="dbo.ToDo", ConnectionStringSetting="SqlConnectionString",
+                                        data_type=DataType.STRING)
+```
+
+In this code, `arg_name` identifies the binding parameter referenced in your code, `type` denotes the output binding is a SQL output binding, `CommandText` is the table that the binding writes to, and `ConnectionStringSetting` is the name of an application setting that contains the Azure SQL connection string. The connection string is in the SqlConnectionString setting in the *local.settings.json* file.
+++++
+## Add code that uses the output binding
++
+# [In-process](#tab/in-process)
+
+Add code that uses the `toDoItems` output binding object to create a new `ToDoItem`. Add this code before the method returns.
+
+```csharp
+if (!string.IsNullOrEmpty(name))
+{
+ // Add a JSON document to the output container.
+ await toDoItems.AddAsync(new
+ {
+ // create a random ID
+ id = System.Guid.NewGuid().ToString(),
+ title = name,
+ completed = false,
+ url = ""
+ });
+}
+```
+
+At this point, your function should look as follows:
+
+```csharp
+[FunctionName("HttpExample")]
+public static async Task<IActionResult> Run(
+ [HttpTrigger(AuthorizationLevel.Anonymous, "get", "post", Route = null)] HttpRequest req,
+ [Sql(commandText: "dbo.ToDo", connectionStringSetting: "SqlConnectionString")] IAsyncCollector<ToDoItem> toDoItems,
+ ILogger log)
+{
+ log.LogInformation("C# HTTP trigger function processed a request.");
+
+ string name = req.Query["name"];
+
+ string requestBody = await new StreamReader(req.Body).ReadToEndAsync();
+ dynamic data = JsonConvert.DeserializeObject(requestBody);
+ name = name ?? data?.name;
+
+ if (!string.IsNullOrEmpty(name))
+ {
+ // Add a JSON document to the output container.
+ await toDoItems.AddAsync(new
+ {
+ // create a random ID
+ id = System.Guid.NewGuid().ToString(),
+ title = name,
+ completed = false,
+ url = ""
+ });
+ }
+
+ string responseMessage = string.IsNullOrEmpty(name)
+ ? "This HTTP triggered function executed successfully. Pass a name in the query string or in the request body for a personalized response."
+ : $"Hello, {name}. This HTTP triggered function executed successfully.";
+
+ return new OkObjectResult(responseMessage);
+}
+```
+
+# [Isolated process](#tab/isolated-process)
+
+Replace the existing Run method with the following code:
+
+```cs
+[Function("HttpExample")]
+public static OutputType Run([HttpTrigger(AuthorizationLevel.Anonymous, "get", "post")] HttpRequestData req,
+ FunctionContext executionContext)
+{
+ var logger = executionContext.GetLogger("HttpExample");
+ logger.LogInformation("C# HTTP trigger function processed a request.");
+
+ var message = "Welcome to Azure Functions!";
+
+ var response = req.CreateResponse(HttpStatusCode.OK);
+ response.Headers.Add("Content-Type", "text/plain; charset=utf-8");
+ response.WriteString(message);
+
+ // Return a response to both HTTP trigger and Azure SQL output binding.
+ return new OutputType()
+ {
+ ToDoItem = new ToDoItem
+ {
+ id = System.Guid.NewGuid().ToString(),
+ title = message,
+ completed = false,
+ url = ""
+ },
+ HttpResponse = response
+ };
+}
+```
+++++
+Add code that uses the `toDoItems` output binding object on `context.bindings` to create a new item in the `dbo.ToDo` table. Add this code before the `context.res` statement.
+
+```javascript
+if (name) {
+ context.bindings.toDoItems = JSON.stringify([{
+ // create a random ID
+ id: crypto.randomUUID(),
+ Title: name,
+ completed: false,
+ url: ""
+ }]);
+}
+```
+
+To utilize the `crypto` module, add the following line to the top of the file:
+
+```javascript
+const crypto = require("crypto");
+```
+
+At this point, your function should look as follows:
+
+```javascript
+const crypto = require("crypto");
+
+module.exports = async function (context, req) {
+ context.log('JavaScript HTTP trigger function processed a request.');
+
+ const name = (req.query.name || (req.body && req.body.name));
+ const responseMessage = name
+ ? "Hello, " + name + ". This HTTP triggered function executed successfully."
+ : "This HTTP triggered function executed successfully. Pass a name in the query string or in the request body for a personalized response.";
+
+ if (name) {
+ context.bindings.toDoItems = JSON.stringify([{
+ // create a random ID
+ id: crypto.randomUUID(),
+ Title: name,
+ completed: false,
+ url: ""
+ }]);
+ }
+
+ context.res = {
+ // status: 200, /* Defaults to 200 */
+ body: responseMessage
+ };
+}
+```
++++
+# [v1](#tab/v1)
+
+Update *HttpExample\\\_\_init\_\_.py* to match the following code. Add an `import uuid` to the top of the file and add the `toDoItems` parameter to the function definition with `toDoItems.set()` under the `if name:` statement:
+
+```python
+import azure.functions as func
+import logging
+import uuid
+
+def main(req: func.HttpRequest, toDoItems: func.Out[func.SqlRow]) -> func.HttpResponse:
+
+ name = req.params.get('name')
+ if not name:
+ try:
+ req_body = req.get_json()
+ except ValueError:
+ pass
+ else:
+ name = req_body.get('name')
+
+ if name:
+        toDoItems.set(func.SqlRow({"id": str(uuid.uuid4()), "title": name, "completed": False, "url": ""}))
+ return func.HttpResponse(f"Hello {name}!")
+ else:
+ return func.HttpResponse(
+ "Please pass a name on the query string or in the request body",
+ status_code=400
+ )
+```
++
+# [v2](#tab/v2)
+
+Update *HttpExample\\function_app.py* to match the following code. Add the `toDoItems` parameter to the function definition and `toDoItems.set()` under the `if name:` statement:
+
+```python
+import azure.functions as func
+import logging
+import uuid
+from azure.functions.decorators.core import DataType
+
+app = func.FunctionApp()
+
+@app.function_name(name="HttpTrigger1")
+@app.route(route="hello", auth_level=func.AuthLevel.ANONYMOUS)
+@app.generic_output_binding(arg_name="toDoItems", type="sql", CommandText="dbo.ToDo", ConnectionStringSetting="SqlConnectionString",
+        data_type=DataType.STRING)
+def test_function(req: func.HttpRequest, toDoItems: func.Out[func.SqlRow]) -> func.HttpResponse:
+ logging.info('Python HTTP trigger function processed a request.')
+ name = req.params.get('name')
+ if not name:
+ try:
+ req_body = req.get_json()
+ except ValueError:
+ pass
+ else:
+ name = req_body.get('name')
+
+ if name:
+        toDoItems.set(func.SqlRow({"id": str(uuid.uuid4()), "title": name, "completed": False, "url": ""}))
+ return func.HttpResponse(f"Hello {name}!")
+ else:
+ return func.HttpResponse(
+ "Please pass a name on the query string or in the request body",
+ status_code=400
+ )
+```
+++++++
+## Run the function locally
+
+1. As in the previous article, press <kbd>F5</kbd> to start the function app project and Core Tools.
+
+1. With Core Tools running, go to the **Azure: Functions** area. Under **Functions**, expand **Local Project** > **Functions**. Right-click (Ctrl-click on Mac) the `HttpExample` function and choose **Execute Function Now...**.
+
+ :::image type="content" source="../../includes/media/functions-run-function-test-local-vs-code/execute-function-now.png" alt-text="Screenshot of execute function now menu item from Visual Studio Code.":::
+
+1. In **Enter request body** you see the request message body value of `{ "name": "Azure" }`. Press Enter to send this request message to your function.
+
+1. After a response is returned, press <kbd>Ctrl + C</kbd> to stop Core Tools.
+
+### Verify that information has been written to the database
+
+1. On the Azure portal, go back to your Azure SQL Database and select **Query editor**.
+
+ :::image type="content" source="./media/functions-add-output-binding-azure-sql-vs-code/query-editor-login.png" alt-text="Screenshot of logging in to query editor on the Azure portal." border="true":::
+
+1. Connect to your database and expand the **Tables** node in object explorer on the left. Right-click on the `dbo.ToDo` table and select **Select Top 1000 Rows**.
+
+1. Verify that the new information has been written to the database by the output binding.
++
+## Redeploy and verify the updated app
+
+1. In Visual Studio Code, press F1 to open the command palette. In the command palette, search for and select `Azure Functions: Deploy to function app...`.
+
+1. Choose the function app that you created in the first article. Because you're redeploying your project to the same app, select **Deploy** to dismiss the warning about overwriting files.
+
+1. After deployment completes, you can again use the **Execute Function Now...** feature to trigger the function in Azure.
+
+1. Again [check the data written to your Azure SQL Database](#verify-that-information-has-been-written-to-the-database) to verify that the output binding again generates a new JSON document.
+
+## Clean up resources
+
+In Azure, *resources* refer to function apps, functions, storage accounts, and so forth. They're grouped into *resource groups*, and you can delete everything in a group by deleting the group.
+
+You created resources to complete these quickstarts. You may be billed for these resources, depending on your [account status](https://azure.microsoft.com/account/) and [service pricing](https://azure.microsoft.com/pricing/). If you don't need the resources anymore, here's how to delete them:
++
+## Next steps
+
+You've updated your HTTP triggered function to write data to Azure SQL Database. Now you can learn more about developing Functions using Visual Studio Code:
++ [Develop Azure Functions using Visual Studio Code](functions-develop-vs-code.md)
++ [Azure SQL bindings and trigger for Azure Functions](functions-bindings-azure-sql.md)
++ [Azure Functions triggers and bindings](functions-triggers-bindings.md)
++ [Examples of complete Function projects in C#](/samples/browse/?products=azure-functions&languages=csharp)
++ [Azure Functions C# developer reference](functions-dotnet-class-library.md)
++ [Examples of complete Function projects in JavaScript](/samples/browse/?products=azure-functions&languages=javascript)
++ [Azure Functions JavaScript developer guide](functions-reference-node.md)
++ [Examples of complete Function projects in Python](/samples/browse/?products=azure-functions&languages=python)
++ [Azure Functions Python developer guide](functions-reference-python.md)
azure-functions Functions Bindings Azure Sql Input https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-azure-sql-input.md
description: Learn to use the Azure SQL input binding in Azure Functions.
Previously updated : 11/10/2022 Last updated : 4/7/2023 zone_pivot_groups: programming-languages-set-functions-lang-workers
namespace AzureSQLSamples
[HttpTrigger(AuthorizationLevel.Anonymous, "get", Route = "gettodoitem")] HttpRequest req, [Sql(commandText: "select [Id], [order], [title], [url], [completed] from dbo.ToDo where Id = @Id",
- commandText: System.Data.CommandType.Text,
+ commandType: System.Data.CommandType.Text,
parameters: "@Id={Query.id}", connectionStringSetting: "SqlConnectionString")] IEnumerable<ToDoItem> toDoItem)
The attribute's constructor takes the SQL command text, the command type, parame
Queries executed by the input binding are [parameterized](/dotnet/api/microsoft.data.sqlclient.sqlparameter) in Microsoft.Data.SqlClient to reduce the risk of [SQL injection](/sql/relational-databases/security/sql-injection) from the parameter values passed into the binding.
+If an exception occurs when a SQL input binding is executed, then the function code doesn't execute. This may result in an error code being returned, such as an HTTP trigger returning a 500 error code.
+ ::: zone-end
azure-functions Functions Bindings Azure Sql Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-azure-sql-output.md
description: Learn to use the Azure SQL output binding in Azure Functions.
Previously updated : 11/10/2022 Last updated : 4/7/2023 zone_pivot_groups: programming-languages-set-functions-lang-workers
This section contains the following examples:
* [HTTP trigger, write one record](#http-trigger-write-one-record-c-oop) * [HTTP trigger, write to two tables](#http-trigger-write-to-two-tables-c-oop)
-* [HTTP trigger, write records using IAsyncCollector](#http-trigger-write-records-using-iasynccollector-c-oop)
The examples refer to a `ToDoItem` class and a corresponding database table:
The examples refer to a `ToDoItem` class and a corresponding database table:
:::code language="sql" source="~/functions-sql-todo-sample/sql/create.sql" range="1-7":::
+To return [multiple output bindings](./dotnet-isolated-process-guide.md#multiple-output-bindings) in our samples, we create a custom return type:
+
+```cs
+public static class OutputType
+{
+ [SqlOutput("dbo.ToDo", connectionStringSetting: "SqlConnectionString")]
+ public ToDoItem ToDoItem { get; set; }
+ public HttpResponseData HttpResponse { get; set; }
+}
+```
<a id="http-trigger-write-one-record-c-oop"></a> ### HTTP trigger, write one record
-The following example shows a [C# function](functions-dotnet-class-library.md) that adds a record to a database, using data provided in an HTTP POST request as a JSON body.
+The following example shows a [C# function](functions-dotnet-class-library.md) that adds a record to a database, using data provided in an HTTP POST request as a JSON body. The return object is the `OutputType` class we created to handle both an HTTP response and the SQL output binding.
```cs using System;
namespace AzureSQL.ToDo
// create a new ToDoItem from body object // uses output binding to insert new item into ToDo table [FunctionName("PostToDo")]
- public static async Task<IActionResult> Run(
- [HttpTrigger(AuthorizationLevel.Anonymous, "post", Route = "PostFunction")] HttpRequest req,
- ILogger log,
- [SqlOutput(commandText: "dbo.ToDo", connectionStringSetting: "SqlConnectionString")] IAsyncCollector<ToDoItem> toDoItems)
+ public static async Task<OutputType> Run(
+ [HttpTrigger(AuthorizationLevel.Anonymous, "post", Route = "PostFunction")] HttpRequestData req,
+ FunctionContext executionContext)
{
+ var logger = executionContext.GetLogger("HttpExample");
+ logger.LogInformation("C# HTTP trigger function processed a request.");
+ string requestBody = await new StreamReader(req.Body).ReadToEndAsync(); ToDoItem toDoItem = JsonConvert.DeserializeObject<ToDoItem>(requestBody);
namespace AzureSQL.ToDo
toDoItem.completed = false; }
- await toDoItems.AddAsync(toDoItem);
- await toDoItems.FlushAsync();
- List<ToDoItem> toDoItemList = new List<ToDoItem> { toDoItem };
-
- return new OkObjectResult(toDoItemList);
+ return new OutputType()
+ {
+ ToDoItem = toDoItem,
+ HttpResponse = req.CreateResponse(System.Net.HttpStatusCode.Created)
+ }
} }+
+ public static class OutputType
+ {
+ [SqlOutput("dbo.ToDo", connectionStringSetting: "SqlConnectionString")]
+ public ToDoItem ToDoItem { get; set; }
+
+ public HttpResponseData HttpResponse { get; set; }
+ }
} ```
CREATE TABLE dbo.RequestLog (
) ```
+To use an additional output binding, we add a class for `RequestLog` and modify our `OutputType` class:
```cs using System;
namespace AzureSQL.ToDo
// create a new ToDoItem from body object // uses output binding to insert new item into ToDo table [FunctionName("PostToDo")]
- public static async Task<IActionResult> Run(
- [HttpTrigger(AuthorizationLevel.Anonymous, "post", Route = "PostFunction")] HttpRequest req,
- ILogger log,
- [SqlOutput(commandText: "dbo.ToDo", connectionStringSetting: "SqlConnectionString")] IAsyncCollector<ToDoItem> toDoItems,
- [SqlOutput(commandText: "dbo.RequestLog", connectionStringSetting: "SqlConnectionString")] IAsyncCollector<RequestLog> requestLogs)
+ public static async Task<OutputType> Run(
+ [HttpTrigger(AuthorizationLevel.Anonymous, "post", Route = "PostFunction")] HttpRequestData req,
+ FunctionContext executionContext)
{ string requestBody = await new StreamReader(req.Body).ReadToEndAsync(); ToDoItem toDoItem = JsonConvert.DeserializeObject<ToDoItem>(requestBody);
namespace AzureSQL.ToDo
toDoItem.completed = false; }
- await toDoItems.AddAsync(toDoItem);
- await toDoItems.FlushAsync();
- List<ToDoItem> toDoItemList = new List<ToDoItem> { toDoItem };
- requestLog = new RequestLog(); requestLog.RequestTimeStamp = DateTime.Now; requestLog.ItemCount = 1;
- await requestLogs.AddAsync(requestLog);
- await requestLogs.FlushAsync();
- return new OkObjectResult(toDoItemList);
+ return new OutputType()
+ {
+ ToDoItem = toDoItem,
+ RequestLog = requestLog,
+ HttpResponse = req.CreateResponse(System.Net.HttpStatusCode.Created)
+ }
} }
namespace AzureSQL.ToDo
public DateTime RequestTimeStamp { get; set; } public int ItemCount { get; set; } }
-}
-```
-
-<a id="http-trigger-write-records-using-iasynccollector-c-oop"></a>
-
-### HTTP trigger, write records using IAsyncCollector
-
-The following example shows a [C# function](functions-dotnet-class-library.md) that adds a collection of records to a database, using data provided in an HTTP POST body JSON array.
-
-```cs
-using Microsoft.AspNetCore.Http;
-using Microsoft.AspNetCore.Mvc;
-using Microsoft.Azure.Functions.Worker;
-using Microsoft.Azure.Functions.Worker.Extensions.Sql;
-using Microsoft.Azure.Functions.Worker.Http;
-using Newtonsoft.Json;
-using System.IO;
-using System.Threading.Tasks;
-
-namespace AzureSQLSamples
-{
- public static class WriteRecordsAsync
+
+ public static class OutputType
{
- [FunctionName("WriteRecordsAsync")]
- public static async Task<IActionResult> Run(
- [HttpTrigger(AuthorizationLevel.Anonymous, "post", Route = "addtodo-asynccollector")]
- HttpRequest req,
- [SqlOutput(commandText: "dbo.ToDo", connectionStringSetting: "SqlConnectionString")] IAsyncCollector<ToDoItem> newItems)
- {
- string requestBody = await new StreamReader(req.Body).ReadToEndAsync();
- var incomingItems = JsonConvert.DeserializeObject<ToDoItem[]>(requestBody);
- foreach (ToDoItem newItem in incomingItems)
- {
- await newItems.AddAsync(newItem);
- }
- // Rows are upserted here
- await newItems.FlushAsync();
+ [SqlOutput("dbo.ToDo", connectionStringSetting: "SqlConnectionString")]
+ public ToDoItem ToDoItem { get; set; }
- return new CreatedResult($"/api/addtodo-asynccollector", "done");
- }
+ [SqlOutput("dbo.RequestLog", connectionStringSetting: "SqlConnectionString")]
+ public RequestLog RequestLog { get; set; }
+
+ public HttpResponseData HttpResponse { get; set; }
}+ } ``` ++ # [C# Script](#tab/csharp-script)
The following table explains the binding configuration properties that you set i
::: zone pivot="programming-language-csharp,programming-language-javascript,programming-language-powershell,programming-language-python,programming-language-java" The `CommandText` property is the name of the table where the data is to be stored. The connection string setting name corresponds to the application setting that contains the [connection string](/dotnet/api/microsoft.data.sqlclient.sqlconnection.connectionstring?view=sqlclient-dotnet-core-5.0&preserve-view=true#Microsoft_Data_SqlClient_SqlConnection_ConnectionString) to the Azure SQL or SQL Server instance.
-The output bindings use the T-SQL [MERGE](/sql/t-sql/statements/merge-transact-sql) statement which requires [SELECT](/sql/t-sql/statements/merge-transact-sql#permissions) permissions on the target database.
+The output bindings use the T-SQL [MERGE](/sql/t-sql/statements/merge-transact-sql) statement which requires [SELECT](/sql/t-sql/statements/merge-transact-sql#permissions) permissions on the target database.
+
+If an exception occurs when a SQL output binding is executed, then the function code stops executing. This may result in an error code being returned, such as an HTTP trigger returning a 500 error code. If `IAsyncCollector` is used in a .NET function, then the function code can handle exceptions thrown by the call to `FlushAsync()`.
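+
+The following in-process sketch shows one way to handle such exceptions around `FlushAsync()`. It assumes the `ToDoItem` class shown earlier; the function name, attribute style (matching the in-process quickstart samples), and response handling are illustrative rather than prescriptive:
+
+```cs
+using System;
+using System.IO;
+using System.Threading.Tasks;
+using Microsoft.AspNetCore.Http;
+using Microsoft.AspNetCore.Mvc;
+using Microsoft.Azure.WebJobs;
+using Microsoft.Azure.WebJobs.Extensions.Http;
+using Microsoft.Azure.WebJobs.Extensions.Sql;
+using Microsoft.Extensions.Logging;
+using Newtonsoft.Json;
+
+public static class PostToDoSafely
+{
+    [FunctionName("PostToDoSafely")]
+    public static async Task<IActionResult> Run(
+        [HttpTrigger(AuthorizationLevel.Anonymous, "post")] HttpRequest req,
+        [Sql(commandText: "dbo.ToDo", connectionStringSetting: "SqlConnectionString")] IAsyncCollector<ToDoItem> toDoItems,
+        ILogger log)
+    {
+        string requestBody = await new StreamReader(req.Body).ReadToEndAsync();
+        ToDoItem toDoItem = JsonConvert.DeserializeObject<ToDoItem>(requestBody);
+
+        await toDoItems.AddAsync(toDoItem);
+
+        try
+        {
+            // Rows queued with AddAsync are written to the database here;
+            // a SQL failure surfaces as an exception thrown by FlushAsync.
+            await toDoItems.FlushAsync();
+        }
+        catch (Exception ex)
+        {
+            log.LogError(ex, "Writing to dbo.ToDo failed.");
+            return new StatusCodeResult(StatusCodes.Status500InternalServerError);
+        }
+
+        return new OkObjectResult(toDoItem);
+    }
+}
+```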
+ ::: zone-end
azure-functions Functions Bindings Azure Sql Trigger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-azure-sql-trigger.md
For configuration details for change tracking for use with the Azure SQL trigger
## Functionality Overview
-The Azure SQL Trigger binding uses a polling loop to check for changes, triggering the user function when changes are detected. At a high level the loop looks like this:
+The Azure SQL Trigger binding uses a polling loop to check for changes, triggering the user function when changes are detected. At a high level, the loop looks like this:
``` while (true) {
while (true) {
} ```
-Changes will always be processed in the order that their changes were made, with the oldest changes being processed first. A couple notes about this:
+Changes are processed in the order that their changes were made, with the oldest changes being processed first. A couple notes about change processing:
-1. If changes to multiple rows are made at once the exact order that they'll be sent to the function is based on the order returned by the CHANGETABLE function
-2. Changes are "batched" together for a row - if multiple changes are made to a row between each iteration of the loop then only a single change entry will exist for that row that shows the difference between the last processed state and the current state
-3. If changes are made to a set of rows, and then another set of changes are made to half of those same rows then the half that wasn't changed a second time will be processed first. This is due to the above note with the changes being batched - the trigger will only see the "last" change made and use that for the order it processes them in
+1. If changes to multiple rows are made at once, the exact order that they're sent to the function is based on the order returned by the CHANGETABLE function
+2. Changes are "batched" together for a row. If multiple changes are made to a row between each iteration of the loop, then only a single change entry exists for that row, which shows the difference between the last processed state and the current state
+3. If changes are made to a set of rows, and then another set of changes are made to half of those same rows, then the half of the rows that weren't changed a second time are processed first. This processing logic is a consequence of changes being batched - the trigger only sees the "last" change made to a row and uses that to determine the order in which it processes them
-See [Work with change tracking](/sql/relational-databases/track-changes/work-with-change-tracking-sql-server) for more information on change tracking and how it's used by applications such as Azure SQL triggers.
+For more information on change tracking and how it's used by applications such as Azure SQL triggers, see [Work with change tracking](/sql/relational-databases/track-changes/work-with-change-tracking-sql-server).
## Example usage
ALTER TABLE [dbo].[ToDo]
ENABLE CHANGE_TRACKING; ```
-The SQL trigger binds to a `IReadOnlyList<SqlChange<T>>`, a list of `SqlChange` objects each with 2 properties:
+The SQL trigger binds to a `IReadOnlyList<SqlChange<T>>`, a list of `SqlChange` objects each with two properties:
- **Item:** the item that was changed. The type of the item should follow the table schema as seen in the `ToDoItem` class. - **Operation:** a value from `SqlChangeOperation` enum. The possible values are `Insert`, `Update`, and `Delete`.
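+
+As a minimal sketch (assuming a `ToDoItem` class and a `SqlConnectionString` app setting as described in this article), a trigger function that logs each change could look like this:
+
+```cs
+using System.Collections.Generic;
+using Microsoft.Azure.WebJobs;
+using Microsoft.Azure.WebJobs.Extensions.Sql;
+using Microsoft.Extensions.Logging;
+
+public static class ToDoTrigger
+{
+    [FunctionName("ToDoTrigger")]
+    public static void Run(
+        [SqlTrigger("[dbo].[ToDo]", "SqlConnectionString")] IReadOnlyList<SqlChange<ToDoItem>> changes,
+        ILogger logger)
+    {
+        foreach (SqlChange<ToDoItem> change in changes)
+        {
+            // Operation is Insert, Update, or Delete; Item holds the changed row values.
+            logger.LogInformation($"Change operation: {change.Operation}");
+        }
+    }
+}
+```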
The [C# library](functions-dotnet-class-library.md) uses the [SqlTrigger](https:
| Attribute property |Description| |||
-| **TableName** | Required. The name of the table being monitored by the trigger. |
-| **ConnectionStringSetting** | Required. The name of an app setting that contains the connection string for the database which contains the table being monitored for changes. The connection string setting name corresponds to the application setting (in `local.settings.json` for local development) that contains the [connection string](/dotnet/api/microsoft.data.sqlclient.sqlconnection.connectionstring?view=sqlclient-dotnet-core-5.&preserve-view=true#Microsoft_Data_SqlClient_SqlConnection_ConnectionString) to the Azure SQL or SQL Server instance.|
+| **TableName** | Required. The name of the table monitored by the trigger. |
+| **ConnectionStringSetting** | Required. The name of an app setting that contains the connection string for the database containing the table monitored for changes. The connection string setting name corresponds to the application setting (in `local.settings.json` for local development) that contains the [connection string](/dotnet/api/microsoft.data.sqlclient.sqlconnection.connectionstring?view=sqlclient-dotnet-core-5.&preserve-view=true#Microsoft_Data_SqlClient_SqlConnection_ConnectionString) to the Azure SQL or SQL Server instance.|
## Configuration
In addition to the required ConnectionStringSetting [application setting](./func
| App Setting | Description| |||
-|**Sql_Trigger_BatchSize** |This controls the maximum number of changes processed with each iteration of the trigger loop before being sent to the triggered function. The default value is 100.|
-|**Sql_Trigger_PollingIntervalMs**|This controls the delay in milliseconds between processing each batch of changes. The default value is 1000 (1 second).|
-|**Sql_Trigger_MaxChangesPerWorker**|This controls the upper limit on the number of pending changes in the user table that are allowed per application-worker. If the count of changes exceeds this limit, it may result in a scale out. The setting only applies for Azure Function Apps with [runtime driven scaling enabled](#enable-runtime-driven-scaling). The default value is 1000.|
+|**Sql_Trigger_BatchSize** |The maximum number of changes processed with each iteration of the trigger loop before being sent to the triggered function. The default value is 100.|
+|**Sql_Trigger_PollingIntervalMs**|The delay in milliseconds between processing each batch of changes. The default value is 1000 (1 second).|
+|**Sql_Trigger_MaxChangesPerWorker**|The upper limit on the number of pending changes in the user table that are allowed per application-worker. If the count of changes exceeds this limit, it may result in a scale-out. The setting only applies for Azure Function Apps with [runtime driven scaling enabled](#enable-runtime-driven-scaling). The default value is 1000.|
[!INCLUDE [app settings to local.settings.json](../../includes/functions-app-settings-local.md)]
Setting up change tracking for use with the Azure SQL trigger requires two steps
(CHANGE_RETENTION = 2 DAYS, AUTO_CLEANUP = ON); ```
- The `CHANGE_RETENTION` option specifies the time period for which change tracking information (change history) is kept. The retention of change history by the SQL database may affect the trigger functionality. For example, if the Azure Function is turned off for several days and then resumed, it will only be able to catch the changes that occurred in past two days with the above query.
+ The `CHANGE_RETENTION` option specifies the time period for which change tracking information (change history) is kept. The retention of change history by the SQL database may affect the trigger functionality. For example, if the Azure Function is turned off for several days and then resumed, the database only contains the changes that occurred in the past two days with the above setup example.
The `AUTO_CLEANUP` option is used to enable or disable the clean-up task that removes old change tracking information. If a temporary problem that prevents the trigger from running, turning off auto cleanup can be useful to pause the removal of information older than the retention period until the problem is resolved.
Setting up change tracking for use with the Azure SQL trigger requires two steps
ENABLE CHANGE_TRACKING; ```
- The trigger needs to have read access on the table being monitored for changes and to the change tracking system tables. Each function trigger will have associated change tracking table and leases table in a schema `az_func`, which are created by the trigger if they don't yet exist. More information on these data structures is available in the Azure SQL binding library [documentation](https://github.com/Azure/azure-functions-sql-extension/blob/main/docs/BindingsOverview.md#internal-state-tables).
+ The trigger needs to have read access on the table being monitored for changes and to the change tracking system tables. Each function trigger has an associated change tracking table and leases table in a schema `az_func`. These tables are created by the trigger if they don't yet exist. More information on these data structures is available in the Azure SQL binding library [documentation](https://github.com/Azure/azure-functions-sql-extension/blob/main/docs/BindingsOverview.md#internal-state-tables).
## Enable runtime-driven scaling
Optionally, your functions can scale automatically based on the number of change
[!INCLUDE [functions-runtime-scaling](../../includes/functions-runtime-scaling.md)]
+## Retry support
+
+Further information on the SQL trigger [retry support](https://github.com/Azure/azure-functions-sql-extension/blob/release/trigger/docs/BindingsOverview.md#retry-support-for-trigger-bindings) and [leases tables](https://github.com/Azure/azure-functions-sql-extension/blob/release/trigger/docs/TriggerBinding.md#internal-state-tables) is available in the GitHub repository.
+
+### Startup retries
+If an exception occurs during startup then the host runtime automatically attempts to restart the trigger listener with an exponential backoff strategy. These retries continue until either the listener is successfully started or the startup is canceled.
+
+### Broken connection retries
+If the function successfully starts but an error later causes the connection to break (such as the server going offline), then the function continues trying to reopen the connection until the function is either stopped or the connection succeeds. If the connection is successfully re-established, then the trigger picks up processing changes where it left off.
+
+These retries are separate from the built-in idle connection retry logic in SqlClient, which can be configured with the `ConnectRetryCount` and `ConnectRetryInterval` [connection string options](/dotnet/api/microsoft.data.sqlclient.sqlconnection.connectionstring?view=sqlclient-dotnet-core-5.0&preserve-view=true#Microsoft_Data_SqlClient_SqlConnection_ConnectionString). The built-in idle connection retries are attempted first; if those fail to reconnect, then the trigger binding attempts to re-establish the connection itself.
+
+### Function exception retries
+If an exception occurs in the user function when processing changes, then the batch of rows currently being processed is retried again in 60 seconds. Other changes are processed as normal during this time, but the rows in the batch that caused the exception are ignored until the timeout period has elapsed.
+
+If the function execution fails five times in a row for a given row, then that row is completely ignored for all future changes. Because the rows in a batch aren't deterministic, rows in a failed batch may end up in different batches in subsequent invocations. This means that not all rows in the failed batch are necessarily ignored; if other rows in the batch were the ones causing the exception, the "good" rows may end up in a different batch that doesn't fail in future invocations.
## Next steps
azure-functions Functions Bindings Azure Sql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-azure-sql.md
description: Understand how to use Azure SQL bindings in Azure Functions.
Previously updated : 11/10/2022 Last updated : 4/7/2023 zone_pivot_groups: programming-languages-set-functions-lang-workers
zone_pivot_groups: programming-languages-set-functions-lang-workers
# Azure SQL bindings for Azure Functions overview (preview)
-> [!NOTE]
-> The Azure SQL trigger is only supported on **Premium and Dedicated** plans. Consumption is not supported. Azure SQL input/output bindings are supported for all plans.
- This set of articles explains how to work with [Azure SQL](/azure/azure-sql/index) bindings in Azure Functions. Azure Functions supports input bindings, output bindings, and a function trigger for the Azure SQL and SQL Server products. | Action | Type |
Azure SQL bindings for Azure Functions have a required property for the connecti
- `Authentication` allows a function to connect to Azure SQL with Azure Active Directory, including [Active Directory Managed Identity](./functions-identity-access-azure-sql-with-managed-identity.md) - `Command Timeout` allows a function to wait for specified amount of time in seconds before terminating a query (default 30 seconds) - `ConnectRetryCount` allows a function to automatically make additional reconnection attempts, especially applicable to Azure SQL Database serverless tier (default 1)-
+- `Pooling` allows a function to reuse connections to the database, which can improve performance (default `true`). Additional settings for connection pooling include `Connection Lifetime`, `Max Pool Size`, and `Min Pool Size`. Learn more about connection pooling in the [ADO.NET documentation](/sql/connect/ado-net/sql-server-connection-pooling)
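+
+For example, a connection string that tunes retry and pooling behavior might look like the following sketch (all values are placeholders):
+
+```
+Server=tcp:<your-server>.database.windows.net,1433;Database=<your-database>;User ID=<user>;Password=<password>;ConnectRetryCount=3;ConnectRetryInterval=10;Pooling=true;Max Pool Size=100;
+```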
## Considerations
azure-health-insights Deploy Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-health-insights/deploy-portal.md
+
+ Title: Deploy Project Health Insights using the Azure portal
+
+description: This article describes how to deploy Project Health Insights in the Azure portal.
+++++ Last updated : 01/26/2023++++
+# Quickstart: Deploy Project Health Insights using the Azure portal
+
+In this quickstart, you learn how to deploy Project Health Insights using the Azure portal.
+
+Once deployment is complete, you can use the Azure portal to navigate to the newly created Project Health Insights resource, retrieve the details you need such as your service URL and keys, and manage your access controls.
+
+## Deploy Project Health Insights
+
+1. Sign in to the [Azure portal](https://portal.azure.com/).
+2. Create a new **Resource group**.
+3. Add a new Cognitive Services account to your Resource group and search for **Health Insights**.
+
+ ![Screenshot of how to create the new Project Health Insights service.](media/create-service.png)
+
+   Alternatively, use this [link](https://portal.azure.com/#create/Microsoft.CognitiveServicesHealthInsights) to create a new Cognitive Services account.
+
+4. Enter the following values:
+ - **Resource group**: Select or create your Resource group name.
+ - **Region**: Select an Azure location, such as West Europe.
+ - **Name**: Enter a Cognitive Services account name.
+ - **Pricing tier**: Select your pricing tier.
+
+ ![Screenshot of how to create new Cognitive Services account.](media/create-health-insights.png)
+
+5. Navigate to your newly created service.
+
+ ![Screenshot of the Overview of Cognitive Services account.](media/created-health-insights.png)
+
+## Configure private endpoints
+
+With private endpoints, the network traffic between the clients on the VNet and the Cognitive Services account runs over the VNet and a private link on the Microsoft backbone network. This eliminates exposure from the public internet.
+
+Once the Cognitive Services account is successfully created, configure private endpoints from the Networking page under Resource Management.
+
+![Screenshot of Private Endpoint.](media/private-endpoints.png)
+
+## Next steps
+
+To get started using Project Health Insights, try one of the following models:
+
+>[!div class="nextstepaction"]
+> [Onco Phenotype](oncophenotype/index.yml)
+
+>[!div class="nextstepaction"]
+> [Trial Matcher](trial-matcher/index.yml)
azure-health-insights Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-health-insights/oncophenotype/faq.md
+
+ Title: Onco Phenotype frequently asked questions
+
+description: Onco Phenotype frequently asked questions
+++++ Last updated : 02/02/2023++++
+# Onco Phenotype Frequently Asked Questions
+
+- What does inference value `None` mean?
+
+ `None` implies that the model couldn't find enough relevant information to make a meaningful prediction.
+
+- How is the `description` property populated for tumor site inference?
+
+  It's populated based on the [ICD-O-3 SEER Site/Histology Validation List](https://seer.cancer.gov/icd-o-3/).
+
+- Do you support behavior code along with histology code?
+
+  No, only four-digit histology codes are supported.
+
+- What does inference value `N+` mean for clinical/pathologic N category? Why don't you have `N1, N2, N3` inference values?
+
+ `N+` means there's involvement of regional lymph nodes without explicitly mentioning the extent of spread. Microsoft has trained the models to classify whether or not there's regional lymph node involvement but not the extent of spread and hence `N1, N2, N3` inference values aren't supported.
+
+- Do you support subcategories for clinical/pathologic TNM categories?
+
+ No, subcategories or isolated tumor cell modifiers aren't supported. For instance, T3a would be predicted as T3, and N0(i+) would be predicted as N0.
+
+- Do you have plans to support I-IV stage grouping?
+
+ No, Microsoft doesn't have any plans to support I-IV stage grouping at this time.
+
+- Do you check if the tumor site and histology inference values are a valid combination?
+
+ No, the OncoPhenotype API doesn't validate if the tumor site and histology inference values are a valid combination.
+
+- Are the inference values exhaustive for tumor site and histology?
+
+ No, the inference values are only as exhaustive as the training data set labels.
azure-health-insights Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-health-insights/oncophenotype/get-started.md
+
+ Title: Use Onco Phenotype
+
+description: This article describes how to use the Onco Phenotype
+++++ Last updated : 01/26/2023++++
+# Quickstart: Use the Onco Phenotype model
+
+This quickstart provides an overview on how to use the Onco Phenotype.
+
+## Prerequisites
+To use the Onco Phenotype model, you must have a Cognitive Services account created. If you haven't already created a Cognitive Services account, see [Deploy Project Health Insights using the Azure portal.](../deploy-portal.md)
+
+Once deployment is complete, you use the Azure portal to navigate to the newly created Cognitive Services account to see the details, including your Service URL. The Service URL to access your service is: https://```YOUR-NAME```.cognitiveservices.azure.com/.
++
+## Example request and results
+
+To send an API request, you need your Cognitive Services account endpoint and key. You can also find a full view of the [request parameters here](../request-info.md).
+
+![Screenshot of the Keys and Endpoints for the Onco Phenotype.](../media/keys-and-endpoints.png)
+
+> [!IMPORTANT]
+> Prediction is performed upon receipt of the API request and the results are returned asynchronously. The API results are available for 24 hours from the time the request was ingested, as indicated in the response. After this time period, the results are purged and are no longer available for retrieval.
+
+## Example request
+
+### Starting with a request that contains a case
+
+You can use the data from this example to test your first request to the Onco Phenotype model.
+
+```url
+POST https://{cognitive-services-account-endpoint}/healthinsights/oncophenotype/jobs?api-version=2023-03-01-preview
+Content-Type: application/json
+Ocp-Apim-Subscription-Key: {cognitive-services-account-key}
+```
+```json
+{
+ "configuration": {
+ "checkForCancerCase": true,
+ "includeEvidence": false
+ },
+ "patients": [
+ {
+ "id": "patient1",
+ "data": [
+ {
+ "kind": "note",
+ "clinicalType": "pathology",
+ "id": "document1",
+ "language": "en",
+ "createdDateTime": "2022-01-01T00:00:00",
+ "content": {
+ "sourceType": "inline",
+ "value": "Laterality: Left \n Tumor type present: Invasive duct carcinoma; duct carcinoma in situ \n Tumor site: Upper inner quadrant \n Invasive carcinoma \n Histologic type: Ductal \n Size of invasive component: 0.9 cm \n Histologic Grade - Nottingham combined histologic score: 1 out of 3 \n In situ carcinoma (DCIS) \n Histologic type of DCIS: Cribriform and solid \n Necrosis in DCIS: Yes \n DCIS component of invasive carcinoma: Extensive \n"
+ }
+ }
+ ]
+ }
+ ]
+}
+```
+### Evaluating a response that contains a case
+
+You get the status of the job by sending a GET request to the Onco Phenotype model, adding the job ID from the initial request to the URL, as shown in the following code snippet:
+
+```url
+GET https://{cognitive-services-account-endpoint}/healthinsights/oncophenotype/jobs/385903b2-ab21-4f9e-a011-43b01f78f04e?api-version=2023-03-01-preview
+```
+
+```json
+{
+ "results": {
+ "patients": [
+ {
+ "id": "patient1",
+ "inferences": [
+ {
+ "kind": "tumorSite",
+ "value": "C50.2",
+ "description": "BREAST",
+ "confidenceScore": 0.9214
+ },
+ {
+ "kind": "histology",
+ "value": "8500",
+ "confidenceScore": 0.9973
+ },
+ {
+ "kind": "clinicalStageT",
+ "value": "T1",
+ "confidenceScore": 0.9956
+ },
+ {
+ "kind": "clinicalStageN",
+ "value": "N0",
+ "confidenceScore": 0.9931
+ },
+ {
+ "kind": "clinicalStageM",
+ "value": "None",
+ "confidenceScore": 0.5217
+ },
+ {
+ "kind": "pathologicStageT",
+ "value": "T1",
+ "confidenceScore": 0.9477
+ },
+ {
+ "kind": "pathologicStageN",
+ "value": "N0",
+ "confidenceScore": 0.7927
+ },
+ {
+ "kind": "pathologicStageM",
+ "value": "M0",
+ "confidenceScore": 0.9208
+ }
+ ]
+ }
+ ],
+ "modelVersion": "2023-03-01-preview"
+ },
+ "jobId": "385903b2-ab21-4f9e-a011-43b01f78f04e",
+ "createdDateTime": "2023-03-08T17:02:46Z",
+ "expirationDateTime": "2023-03-08T17:19:26Z",
+ "lastUpdateDateTime": "2023-03-08T17:02:53Z",
+ "status": "succeeded"
+}
+```
+
+More information can be found on the [response information page](../response-info.md).
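+
+If you script this flow, a minimal Python sketch of the submit-and-poll pattern might look like the following. It's a sketch, not an official sample: it assumes the `requests` package, placeholder endpoint and key values, and that the job URL can be taken from the `Operation-Location` response header; adjust these assumptions to match your environment and API version.
+
+```python
+import time
+import requests
+
+# Placeholder values (assumptions); replace with your Cognitive Services account details.
+ENDPOINT = "https://YOUR-NAME.cognitiveservices.azure.com"
+KEY = "your-cognitive-services-account-key"
+HEADERS = {"Ocp-Apim-Subscription-Key": KEY, "Content-Type": "application/json"}
+
+# Minimal request body; see the example request above for a fuller pathology note.
+body = {
+    "configuration": {"checkForCancerCase": True, "includeEvidence": False},
+    "patients": [{
+        "id": "patient1",
+        "data": [{
+            "kind": "note",
+            "clinicalType": "pathology",
+            "id": "document1",
+            "language": "en",
+            "createdDateTime": "2022-01-01T00:00:00",
+            "content": {"sourceType": "inline", "value": "Tumor site: Upper inner quadrant ..."},
+        }],
+    }],
+}
+
+# Submit the job. The job URL is assumed here to come back in the Operation-Location header;
+# alternatively, build it from the job ID as shown in the GET example above.
+response = requests.post(
+    f"{ENDPOINT}/healthinsights/oncophenotype/jobs?api-version=2023-03-01-preview",
+    headers=HEADERS, json=body)
+response.raise_for_status()
+job_url = response.headers["Operation-Location"]
+
+# Poll until the job finishes; results are purged 24 hours after ingestion.
+while True:
+    job = requests.get(job_url, headers=HEADERS).json()
+    if job["status"] in ("succeeded", "failed", "partiallyCompleted"):
+        break
+    time.sleep(2)
+
+for patient in job.get("results", {}).get("patients", []):
+    for inference in patient.get("inferences", []):
+        print(inference["kind"], inference["value"], inference.get("confidenceScore"))
+```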
+
+## Request validation
+
+Every request has required and optional fields that should be provided to the Onco Phenotype model.
+When you send data to the model, make sure that you take the following properties into account (a short illustrative validation sketch follows these lists):
+
+Within a request:
+- ```patients``` should be set
+- ```patients``` should contain at least one entry
+- ```id``` in patients entries should be unique
+
+For each patient:
+- ```data``` should be set
+- ```data``` should contain at least one document of clinical type ```pathology```
+- ```id``` in data entries should be unique
+
+For each clinical document within a patient:
+- ```createdDateTime``` should be set
+- if set, ```language``` should be ```en``` (default is ```en``` if not set)
+- ```documentType``` should be set to ```Note```
+- ```clinicalType``` should be set to one of ```imaging```, ```pathology```, ```procedure```, ```progress```
+- content ```sourceType``` should be set to ```inline```
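+
+The following Python sketch illustrates these checks. It's an informal pre-flight helper based on the lists above, not an official validator, and the function name is hypothetical.
+
+```python
+def validate_oncophenotype_request(body: dict) -> list[str]:
+    """Return a list of problems found, based on the validation rules listed above."""
+    problems = []
+    patients = body.get("patients") or []
+    if not patients:
+        problems.append("patients must be set and contain at least one entry")
+    patient_ids = [p.get("id") for p in patients]
+    if len(patient_ids) != len(set(patient_ids)):
+        problems.append("patient ids must be unique")
+    for patient in patients:
+        documents = patient.get("data") or []
+        if not any(d.get("clinicalType") == "pathology" for d in documents):
+            problems.append(f"patient {patient.get('id')}: needs at least one pathology document")
+        document_ids = [d.get("id") for d in documents]
+        if len(document_ids) != len(set(document_ids)):
+            problems.append(f"patient {patient.get('id')}: document ids must be unique")
+        for document in documents:
+            if not document.get("createdDateTime"):
+                problems.append(f"document {document.get('id')}: createdDateTime should be set")
+            if document.get("content", {}).get("sourceType") != "inline":
+                problems.append(f"document {document.get('id')}: content sourceType should be inline")
+    return problems
+```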
+
+## Data limits
+
+| **Limit** | **Value** |
+| - | -- |
+| Maximum # patients per request | 1 |
+| Maximum # characters per patient | 50,000 across all `data[i].content.value` fields combined |
++
+## Next steps
+
+To get better insights into the requests and responses, you can read more on the following pages:
+
+>[!div class="nextstepaction"]
+> [Model configuration](model-configuration.md)
+
+>[!div class="nextstepaction"]
+> [Inference information](inferences.md)
azure-health-insights Inferences https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-health-insights/oncophenotype/inferences.md
+
+ Title: Onco Phenotype inference information
+
+description: This article provides Onco Phenotype inference information.
+++++ Last updated : 01/26/2023++++
+# Onco Phenotype inference information
+
+The Project Health Insights Onco Phenotype model was trained with labels that conform to the following standards.
+- Tumor site and histology inferences: **WHO ICD-O-3** representation.
+- Clinical and pathologic stage TNM category inferences: **American Joint Committee on Cancer (AJCC)'s 7th edition** of the cancer staging manual.
+
+You can find an overview of the response values here:
+
+**Inference type** |**Description** |**Values**
+-|--|-
+tumorSite |The tumor site |`None, ICD-O-3 tumor site code (e.g. C34.2)`
+histology |The histology code |`None, 4-digit ICD-O-3 histology code`
+clinicalStageT |The T category of the clinical stage |`None, T0, Tis, T1, T2, T3, T4`
+clinicalStageN |The N category of the clinical stage |`None, N0, N+`
+clinicalStageM |The M category of the clinical stage |`None, M0, M1`
+pathologicStageT |The T category of the pathologic stage|`None, T0, Tis, T1, T2, T3, T4`
+pathologicStageN |The N category of the pathologic stage|`None, N0, N+`
+pathologicStageM |The M category of the pathologic stage|`None, M0, M1`
++
+## Confidence score
+
+Each inference has an attribute called ```confidenceScore``` that expresses the confidence level for the inference value, ranging from 0 to 1. The higher the confidence score is, the more certain the model was about the inference value provided. The inference values should **not** be consumed without human review, no matter how high the confidence score is.
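+
+As an illustration only (not part of the API), a client might group inferences by confidence so that lower-confidence values are flagged for closer review; the 0.8 threshold in the following Python sketch is an arbitrary assumption.
+
+```python
+REVIEW_THRESHOLD = 0.8  # arbitrary example value; tune for your scenario
+
+def split_by_confidence(inferences: list[dict]) -> tuple[list[dict], list[dict]]:
+    """Split inferences into higher- and lower-confidence groups.
+    Every value still requires human review, regardless of its score."""
+    higher = [i for i in inferences if i.get("confidenceScore", 0.0) >= REVIEW_THRESHOLD]
+    lower = [i for i in inferences if i.get("confidenceScore", 0.0) < REVIEW_THRESHOLD]
+    return higher, lower
+```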
+
+## Importance
+
+When you set the ```includeEvidence``` property to ```true```, each evidence property has an ```importance``` attribute that expresses how important that evidence was to predicting the inference value, ranging from 0 to 1. A higher importance value indicates that the model relied more on that specific evidence.
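+
+For example, a hypothetical helper like the following could sort an inference's evidence by ```importance``` so that a reviewer sees the most influential text spans first (field names follow the evidence examples in the model configuration article).
+
+```python
+def top_evidence(inference: dict, limit: int = 3) -> list[dict]:
+    """Return up to `limit` evidence items for an inference, most important first."""
+    evidence = inference.get("evidence", [])
+    return sorted(evidence, key=lambda e: e.get("importance", 0.0), reverse=True)[:limit]
+```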
+
+## Next steps
+
+To get better insights into the requests and responses, read more on the following page:
+
+>[!div class="nextstepaction"]
+> [Model configuration](model-configuration.md)
azure-health-insights Model Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-health-insights/oncophenotype/model-configuration.md
+
+ Title: Onco Phenotype model configuration
+
+description: This article provides Onco Phenotype model configuration information.
+++++ Last updated : 01/26/2023++++
+# Onco Phenotype model configuration
+
+To interact with the Onco Phenotype model, you can provide several model configuration parameters that modify the outcome of the responses.
+
+> [!IMPORTANT]
+> Model configuration is applied to ALL the patients within a request.
+
+```json
+"configuration": {
+ "checkForCancerCase": false,
+ "includeEvidence": false
+}
+```
+
+## Case finding
++
+The Onco Phenotype model configuration helps you determine whether any cancer cases exist. The API allows you to explicitly check whether a cancer case exists in the provided clinical documents.
+
+**Check for cancer case** |**Did the model find a case?** |**Behavior**
+- |--|-
+true |Yes |Inferences are returned
+true |No |No inferences are returned
+false |N/A |Inferences are always returned but they aren't meaningful if there's no cancer case.
+
+Set ```checkForCancerCase``` to ```false``` if:
+- you're sure that the provided clinical documents definitely contain a case, or
+- the model is unable to find a case in a valid scenario.
+
+If the provided clinical documents contain a case and the model is able to find it, the inferences are always returned.
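+
+As a small illustrative sketch (the helper name and flag are assumptions, not part of the API), the configuration block could be built like this:
+
+```python
+def build_configuration(documents_definitely_contain_case: bool, include_evidence: bool = False) -> dict:
+    """Build the model configuration block; it applies to all patients in the request."""
+    return {
+        # Skip the cancer-case check only when you're sure a case is present,
+        # or when the model can't find a case in an otherwise valid scenario.
+        "checkForCancerCase": not documents_definitely_contain_case,
+        "includeEvidence": include_evidence,
+    }
+```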
+
+## Case finding examples
+
+### With case finding
+
+The following example represents a case finding. ```checkForCancerCase``` has been set to ```true``` and ```includeEvidence``` has been set to ```false```, meaning the model checks for a cancer case but doesn't include the evidence.
+
+Request that contains a case:
+```json
+{
+ "configuration": {
+ "checkForCancerCase": true,
+ "includeEvidence": false
+ },
+ "patients": [
+ {
+ "id": "patient1",
+ "data": [
+ {
+ "kind": "note",
+ "clinicalType": "pathology",
+ "id": "document1",
+ "language": "en",
+ "createdDateTime": "2022-01-01T00:00:00",
+ "content": {
+ "sourceType": "inline",
+ "value": "Laterality: Left \n Tumor type present: Invasive duct carcinoma; duct carcinoma in situ \n Tumor site: Upper inner quadrant \n Invasive carcinoma \n Histologic type: Ductal \n Size of invasive component: 0.9 cm \n Histologic Grade - Nottingham combined histologic score: 1 out of 3 \n In situ carcinoma (DCIS) \n Histologic type of DCIS: Cribriform and solid \n Necrosis in DCIS: Yes \n DCIS component of invasive carcinoma: Extensive \n"
+ }
+ }
+ ]
+ }
+ ]
+}
+```
+Response:
+```json
+{
+ "results": {
+ "patients": [
+ {
+ "id": "patient1",
+ "inferences": [
+ {
+ "kind": "tumorSite",
+ "value": "C50.2",
+ "description": "BREAST",
+ "confidenceScore": 0.9214
+ },
+ {
+ "kind": "histology",
+ "value": "8500",
+ "confidenceScore": 0.9973
+ },
+ {
+ "kind": "clinicalStageT",
+ "value": "T1",
+ "confidenceScore": 0.9956
+ },
+ {
+ "kind": "clinicalStageN",
+ "value": "N0",
+ "confidenceScore": 0.9931
+ },
+ {
+ "kind": "clinicalStageM",
+ "value": "None",
+ "confidenceScore": 0.5217
+ },
+ {
+ "kind": "pathologicStageT",
+ "value": "T1",
+ "confidenceScore": 0.9477
+ },
+ {
+ "kind": "pathologicStageN",
+ "value": "N0",
+ "confidenceScore": 0.7927
+ },
+ {
+ "kind": "pathologicStageM",
+ "value": "M0",
+ "confidenceScore": 0.9208
+ }
+ ]
+ }
+ ],
+ "modelVersion": "2023-03-01-preview"
+ },
+ "jobId": "385903b2-ab21-4f9e-a011-43b01f78f04e",
+ "createdDateTime": "2023-03-08T17:02:46Z",
+ "expirationDateTime": "2023-03-08T17:19:26Z",
+ "lastUpdateDateTime": "2023-03-08T17:02:53Z",
+ "status": "succeeded"
+}
+```
+Request that does not contain a case:
+```json
+{
+ "configuration": {
+ "checkForCancerCase": true,
+ "includeEvidence": false
+ },
+ "patients": [
+ {
+ "id": "patient1",
+ "data": [
+ {
+ "kind": "note",
+ "clinicalType": "pathology",
+ "id": "document1",
+ "language": "en",
+ "createdDateTime": "2022-01-01T00:00:00",
+ "content": {
+ "sourceType": "inline",
+ "value": "Test document"
+ }
+ }
+ ]
+ }
+ ]
+}
+```
+Response:
+```json
+{
+ "results": {
+ "patients": [
+ {
+ "id": "patient1",
+ "inferences": []
+ }
+ ],
+ "modelVersion": "2023-03-01-preview"
+ },
+ "jobId": "abe71219-b3ce-4def-9e12-3dc511096c88",
+ "createdDateTime": "2023-03-08T17:05:23Z",
+ "expirationDateTime": "2023-03-08T17:22:03Z",
+ "lastUpdateDateTime": "2023-03-08T17:05:23Z",
+ "status": "succeeded"
+}
+```
+
+## Evidence
+
+Through the model configuration, the API allows you to seek evidence from the provided clinical documents as part of the inferences.
+
+**Include evidence** | **Behavior**
+- | -
+true | Evidence is returned as part of each inference
+false | No evidence is returned
++
+## Evidence example
+
+The following example represents a case finding. ```checkForCancerCase``` has been set to ```true``` and ```includeEvidence``` has been set to ```true```, meaning the model checks for a cancer case and includes the evidence.
+
+Request that contains a case:
+```json
+{
+ "configuration": {
+ "checkForCancerCase": true,
+ "includeEvidence": true
+ },
+ "patients": [
+ {
+ "id": "patient1",
+ "data": [
+ {
+ "kind": "note",
+ "clinicalType": "pathology",
+ "id": "document1",
+ "language": "en",
+ "createdDateTime": "2022-01-01T00:00:00",
+ "content": {
+ "sourceType": "inline",
+ "value": "Laterality: Left \n Tumor type present: Invasive duct carcinoma; duct carcinoma in situ \n Tumor site: Upper inner quadrant \n Invasive carcinoma \n Histologic type: Ductal \n Size of invasive component: 0.9 cm \n Histologic Grade - Nottingham combined histologic score: 1 out of 3 \n In situ carcinoma (DCIS) \n Histologic type of DCIS: Cribriform and solid \n Necrosis in DCIS: Yes \n DCIS component of invasive carcinoma: Extensive \n"
+ }
+ }
+ ]
+ }
+ ]
+}
+```
+Response:
+```json
+{
+ "results": {
+ "patients": [
+ {
+ "id": "patient1",
+ "inferences": [
+ {
+ "type": "tumorSite",
+ "evidence": [
+ {
+ "patientDataEvidence": {
+ "id": "document1",
+ "text": "Upper inner",
+ "offset": 108,
+ "length": 11
+ },
+ "importance": 0.5563
+ },
+ {
+ "patientDataEvidence": {
+ "id": "document1",
+ "text": "duct",
+ "offset": 68,
+ "length": 4
+ },
+ "importance": 0.0156
+ }
+ ],
+ "value": "C50.2",
+ "description": "BREAST",
+ "confidenceScore": 0.9214
+ },
+ {
+ "type": "histology",
+ "evidence": [
+ {
+ "patientDataEvidence": {
+ "id": "document1",
+ "text": "Ductal",
+ "offset": 174,
+ "length": 6
+ },
+ "importance": 0.2937
+ },
+ {
+ "patientDataEvidence": {
+ "id": "document1",
+ "text": "Invasive duct",
+ "offset": 43,
+ "length": 13
+ },
+ "importance": 0.2439
+ },
+ {
+ "patientDataEvidence": {
+ "id": "document1",
+ "text": "invasive",
+ "offset": 193,
+ "length": 8
+ },
+ "importance": 0.1588
+ },
+ {
+ "patientDataEvidence": {
+ "id": "document1",
+ "text": "duct",
+ "offset": 68,
+ "length": 4
+ },
+ "importance": 0.1483
+ },
+ {
+ "patientDataEvidence": {
+ "id": "document1",
+ "text": "solid",
+ "offset": 368,
+ "length": 5
+ },
+ "importance": 0.0694
+ },
+ {
+ "patientDataEvidence": {
+ "id": "document1",
+ "text": "Cribriform",
+ "offset": 353,
+ "length": 10
+ },
+ "importance": 0.043
+ }
+ ],
+ "value": "8500",
+ "confidenceScore": 0.9973
+ },
+ {
+ "type": "clinicalStageT",
+ "evidence": [
+ {
+ "patientDataEvidence": {
+ "id": "document1",
+ "text": "Invasive duct carcinoma; duct",
+ "offset": 43,
+ "length": 29
+ },
+ "importance": 0.2613
+ },
+ {
+ "patientDataEvidence": {
+ "id": "document1",
+ "text": "invasive",
+ "offset": 193,
+ "length": 8
+ },
+ "importance": 0.1341
+ },
+ {
+ "patientDataEvidence": {
+ "id": "document1",
+ "text": "Laterality: Left",
+ "offset": 0,
+ "length": 17
+ },
+ "importance": 0.0874
+ },
+ {
+ "patientDataEvidence": {
+ "id": "document1",
+ "text": "Invasive",
+ "offset": 133,
+ "length": 8
+ },
+ "importance": 0.0722
+ },
+ {
+ "patientDataEvidence": {
+ "id": "document1",
+ "text": "situ",
+ "offset": 86,
+ "length": 4
+ },
+ "importance": 0.0651
+ }
+ ],
+ "value": "T1",
+ "confidenceScore": 0.9956
+ },
+ {
+ "type": "clinicalStageN",
+ "evidence": [
+ {
+ "patientDataEvidence": {
+ "id": "document1",
+ "text": "Invasive duct carcinoma; duct carcinoma in situ",
+ "offset": 43,
+ "length": 47
+ },
+ "importance": 0.1529
+ },
+ {
+ "patientDataEvidence": {
+ "id": "document1",
+ "text": "invasive carcinoma: Extensive",
+ "offset": 423,
+ "length": 30
+ },
+ "importance": 0.0782
+ },
+ {
+ "patientDataEvidence": {
+ "id": "document1",
+ "text": "Invasive",
+ "offset": 133,
+ "length": 8
+ },
+ "importance": 0.0715
+ },
+ {
+ "patientDataEvidence": {
+ "id": "document1",
+ "text": "Tumor",
+ "offset": 95,
+ "length": 5
+ },
+ "importance": 0.0513
+ },
+ {
+ "patientDataEvidence": {
+ "id": "document1",
+ "text": "Left",
+ "offset": 13,
+ "length": 4
+ },
+ "importance": 0.0325
+ },
+ {
+ "patientDataEvidence": {
+ "id": "document1",
+ "text": "Tumor",
+ "offset": 22,
+ "length": 5
+ },
+ "importance": 0.0174
+ },
+ {
+ "patientDataEvidence": {
+ "id": "document1",
+ "text": "Histologic",
+ "offset": 156,
+ "length": 10
+ },
+ "importance": 0.0066
+ }
+ ],
+ "value": "N0",
+ "confidenceScore": 0.9931
+ },
+ {
+ "type": "clinicalStageM",
+ "evidence": [
+ {
+ "patientDataEvidence": {
+ "id": "document1",
+ "text": "Laterality: Left",
+ "offset": 0,
+ "length": 17
+ },
+ "importance": 0.1579
+ },
+ {
+ "patientDataEvidence": {
+ "id": "document1",
+ "text": "Invasive duct",
+ "offset": 43,
+ "length": 13
+ },
+ "importance": 0.1493
+ },
+ {
+ "patientDataEvidence": {
+ "id": "document1",
+ "text": "Histologic Grade - Nottingham",
+ "offset": 225,
+ "length": 29
+ },
+ "importance": 0.1038
+ },
+ {
+ "patientDataEvidence": {
+ "id": "document1",
+ "text": "Invasive",
+ "offset": 133,
+ "length": 8
+ },
+ "importance": 0.089
+ },
+ {
+ "patientDataEvidence": {
+ "id": "document1",
+ "text": "duct carcinoma",
+ "offset": 68,
+ "length": 14
+ },
+ "importance": 0.0807
+ },
+ {
+ "patientDataEvidence": {
+ "id": "document1",
+ "text": "invasive",
+ "offset": 423,
+ "length": 8
+ },
+ "importance": 0.057
+ },
+ {
+ "patientDataEvidence": {
+ "id": "document1",
+ "text": "Extensive",
+ "offset": 444,
+ "length": 9
+ },
+ "importance": 0.0494
+ },
+ {
+ "patientDataEvidence": {
+ "id": "document1",
+ "text": "Tumor",
+ "offset": 22,
+ "length": 5
+ },
+ "importance": 0.0311
+ }
+ ],
+ "value": "None",
+ "confidenceScore": 0.5217
+ },
+ {
+ "type": "pathologicStageT",
+ "evidence": [
+ {
+ "patientDataEvidence": {
+ "id": "document1",
+ "text": "Invasive duct",
+ "offset": 43,
+ "length": 13
+ },
+ "importance": 0.3125
+ },
+ {
+ "patientDataEvidence": {
+ "id": "document1",
+ "text": "Left",
+ "offset": 13,
+ "length": 4
+ },
+ "importance": 0.201
+ },
+ {
+ "patientDataEvidence": {
+ "id": "document1",
+ "text": "invasive",
+ "offset": 193,
+ "length": 8
+ },
+ "importance": 0.1244
+ },
+ {
+ "patientDataEvidence": {
+ "id": "document1",
+ "text": "invasive",
+ "offset": 423,
+ "length": 8
+ },
+ "importance": 0.0961
+ },
+ {
+ "patientDataEvidence": {
+ "id": "document1",
+ "text": "Invasive",
+ "offset": 133,
+ "length": 8
+ },
+ "importance": 0.0623
+ },
+ {
+ "patientDataEvidence": {
+ "id": "document1",
+ "text": "Tumor",
+ "offset": 22,
+ "length": 5
+ },
+ "importance": 0.0583
+ }
+ ],
+ "value": "T1",
+ "confidenceScore": 0.9477
+ },
+ {
+ "type": "pathologicStageN",
+ "evidence": [
+ {
+ "patientDataEvidence": {
+ "id": "document1",
+ "text": "invasive component:",
+ "offset": 193,
+ "length": 19
+ },
+ "importance": 0.1402
+ },
+ {
+ "patientDataEvidence": {
+ "id": "document1",
+ "text": "Nottingham combined histologic score:",
+ "offset": 244,
+ "length": 37
+ },
+ "importance": 0.1096
+ },
+ {
+ "patientDataEvidence": {
+ "id": "document1",
+ "text": "Invasive carcinoma",
+ "offset": 133,
+ "length": 18
+ },
+ "importance": 0.1067
+ },
+ {
+ "patientDataEvidence": {
+ "id": "document1",
+ "text": "Ductal",
+ "offset": 174,
+ "length": 6
+ },
+ "importance": 0.0896
+ },
+ {
+ "patientDataEvidence": {
+ "id": "document1",
+ "text": "Invasive duct carcinoma;",
+ "offset": 43,
+ "length": 24
+ },
+ "importance": 0.0831
+ },
+ {
+ "patientDataEvidence": {
+ "id": "document1",
+ "text": "Histologic",
+ "offset": 156,
+ "length": 10
+ },
+ "importance": 0.0447
+ },
+ {
+ "patientDataEvidence": {
+ "id": "document1",
+ "text": "in situ",
+ "offset": 83,
+ "length": 7
+ },
+ "importance": 0.042
+ },
+ {
+ "patientDataEvidence": {
+ "id": "document1",
+ "text": "Tumor",
+ "offset": 22,
+ "length": 5
+ },
+ "importance": 0.0092
+ }
+ ],
+ "value": "N0",
+ "confidenceScore": 0.7927
+ },
+ {
+ "type": "pathologicStageM",
+ "evidence": [
+ {
+ "patientDataEvidence": {
+ "id": "document1",
+ "text": "In situ carcinoma (DCIS)",
+ "offset": 298,
+ "length": 24
+ },
+ "importance": 0.1111
+ },
+ {
+ "patientDataEvidence": {
+ "id": "document1",
+ "text": "Nottingham combined histologic",
+ "offset": 244,
+ "length": 30
+ },
+ "importance": 0.0999
+ },
+ {
+ "patientDataEvidence": {
+ "id": "document1",
+ "text": "invasive carcinoma:",
+ "offset": 423,
+ "length": 19
+ },
+ "importance": 0.0787
+ },
+ {
+ "patientDataEvidence": {
+ "id": "document1",
+ "text": "invasive",
+ "offset": 193,
+ "length": 8
+ },
+ "importance": 0.0617
+ },
+ {
+ "patientDataEvidence": {
+ "id": "document1",
+ "text": "Invasive duct carcinoma;",
+ "offset": 43,
+ "length": 24
+ },
+ "importance": 0.0594
+ },
+ {
+ "patientDataEvidence": {
+ "id": "document1",
+ "text": "Tumor",
+ "offset": 22,
+ "length": 5
+ },
+ "importance": 0.0579
+ },
+ {
+ "patientDataEvidence": {
+ "id": "document1",
+ "text": "of DCIS:",
+ "offset": 343,
+ "length": 8
+ },
+ "importance": 0.0483
+ },
+ {
+ "patientDataEvidence": {
+ "id": "document1",
+ "text": "Laterality:",
+ "offset": 0,
+ "length": 11
+ },
+ "importance": 0.0324
+ },
+ {
+ "patientDataEvidence": {
+ "id": "document1",
+ "text": "Invasive carcinoma",
+ "offset": 133,
+ "length": 18
+ },
+ "importance": 0.0269
+ },
+ {
+ "patientDataEvidence": {
+ "id": "document1",
+ "text": "carcinoma in",
+ "offset": 73,
+ "length": 12
+ },
+ "importance": 0.0202
+ },
+ {
+ "patientDataEvidence": {
+ "id": "document1",
+ "text": "Tumor",
+ "offset": 95,
+ "length": 5
+ },
+ "importance": 0.0112
+ }
+ ],
+ "value": "M0",
+ "confidenceScore": 0.9208
+ }
+ ]
+ }
+ ],
+ "modelVersion": "2023-03-01-preview"
+ },
+ "jobId": "5f975105-6f11-4985-b5cd-896215fb5cd3",
+ "createdDateTime": "2023-03-08T17:10:39Z",
+ "expirationDateTime": "2023-03-08T17:27:19Z",
+ "lastUpdateDateTime": "2023-03-08T17:10:41Z",
+ "status": "succeeded"
+}
+```
+
+## Next steps
+
+Refer to the following page to get better insights into the request and responses:
+
+>[!div class="nextstepaction"]
+> [Inference information](inferences.md)
azure-health-insights Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-health-insights/oncophenotype/overview.md
+
+ Title: What is Onco Phenotype (Preview)
+
+description: Enable healthcare organizations to rapidly identify key cancer attributes within their patient populations.
+++++ Last updated : 01/26/2023++++
+# What is Onco Phenotype (Preview)?
+
+Onco Phenotype is an AI model that's offered within the context of the broader Project Health Insights. It augments traditional clinical natural language processing tools by enabling healthcare organizations to rapidly identify key cancer attributes within their patient populations.
++
+> [!IMPORTANT]
+> The Onco Phenotype model is a capability provided "AS IS" and "WITH ALL FAULTS." The Onco Phenotype model isn't intended or made available for use as a medical device, clinical support, diagnostic tool, or other technology intended to be used in the diagnosis, cure, mitigation, treatment, or prevention of disease or other conditions, and no license or right is granted by Microsoft to use this capability for such purposes. This capability isn't designed or intended to be implemented or deployed as a substitute for professional medical advice or healthcare opinion, diagnosis, treatment, or the clinical judgment of a healthcare professional, and should not be used as such. The customer is solely responsible for any use of the Onco Phenotype model. The customer is responsible for ensuring compliance with those license terms, including any geographic or other applicable restrictions.
++
+## Onco Phenotype features
+The Onco Phenotype model, available in the Project Health Insights cognitive service as an API, augments traditional clinical natural language processing (NLP) tools by helping healthcare providers rapidly identify key attributes of a cancer within their patient populations with an existing cancer diagnosis. You can use this model to infer tumor site; histology; clinical stage tumor (T), node (N), and metastasis (M) categories; and pathologic stage TNM categories from unstructured clinical documents, along with confidence scores and relevant evidence.
+
+- **Tumor site** refers to the primary tumor location.
+
+- **Histology** refers to the cell type of a given tumor.
+
+The following paragraph is adapted from [American Joint Committee on Cancer (AJCC)'s Cancer Staging System](https://www.facs.org/quality-programs/cancer/ajcc/cancer-staging).
+
+Cancer staging describes the severity of an individual's cancer based on the magnitude of the original tumor, as well as the extent to which the cancer has spread in the body. The Onco Phenotype model supports inferring two types of staging from the clinical documents: clinical staging and pathologic staging. They're both expressed in the form of TNM categories, where TNM indicates the extent of the tumor (T), the extent of spread to the lymph nodes (N), and the presence of metastasis (M).
+
+- **Clinical staging** determines the nature and extent of cancer based on the physical examination, imaging tests, and biopsies of affected areas.
+
+- **Pathologic staging** can only be determined from individual patients who have had surgery to remove a tumor or otherwise explore the extent of the cancer. Pathologic staging combines the results of clinical staging (physical exam, imaging test) with surgical results.
+
+The Onco Phenotype model enables cancer registrars to efficiently abstract cancer patients, as it infers the above-mentioned key cancer attributes from unstructured clinical documents along with evidence relevant to those attributes. Leveraging this API can reduce the manual time spent combing through large amounts of patient documentation by focusing on the most relevant content in support of a clinician.
++
+## Language support
+
+The service currently supports the English language.
+
+## Limits and quotas
+
+For the Public Preview, you can select the Free F0 SKU. The official pricing will be released after Public Preview.
+
+## Next steps
+
+Get started using the Onco Phenotype model:
+
+>[!div class="nextstepaction"]
+> [Deploy the service via the portal](../deploy-portal.md)
azure-health-insights Patient Info https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-health-insights/oncophenotype/patient-info.md
+
+ Title: Onco Phenotype patient info
+
+description: This article describes how and which patient information can be sent to the Onco Phenotype model
+++++ Last updated : 02/02/2023++++
+# Onco Phenotype patient info
+
+The Onco Phenotype model can currently receive patient information in the form of unstructured clinical notes.
+The payload should contain a ```patients``` section with one or more objects, where the ```data``` property contains one or more JSON objects of ```kind``` "note".
+
+
+## Example request
+
+In this example, the Onco Phenotype model receives patient information in the form of unstructured clinical notes.
+
+```json
+{
+ "configuration": {
+ "checkForCancerCase": true,
+ "includeEvidence": false
+ },
+ "patients": [
+ {
+ "id": "patient1",
+ "data": [
+ {
+ "kind": "note",
+ "clinicalType": "pathology",
+ "id": "document1",
+ "language": "en",
+ "createdDateTime": "2022-01-01T00:00:00",
+ "content": {
+ "sourceType": "inline",
+ "value": "Laterality: Left \n Tumor type present: Invasive duct carcinoma; duct carcinoma in situ \n Tumor site: Upper inner quadrant \n Invasive carcinoma \n Histologic type: Ductal \n Size of invasive component: 0.9 cm \n Histologic Grade - Nottingham combined histologic score: 1 out of 3 \n In situ carcinoma (DCIS) \n Histologic type of DCIS: Cribriform and solid \n Necrosis in DCIS: Yes \n DCIS component of invasive carcinoma: Extensive \n"
+ }
+ }
+ ]
+ }
+ ]
+}
+```
+++
+## Next steps
+
+To get started using the Onco Phenotype model:
+
+>[!div class="nextstepaction"]
+> [Deploy the service via the portal](../deploy-portal.md)
azure-health-insights Support And Help https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-health-insights/oncophenotype/support-and-help.md
+
+ Title: Onco Phenotype support and help options
+
+description: How to obtain help and support for questions and problems when you create applications that use with Onco Phenotype model
+++++ Last updated : 02/02/2023++++
+# Onco Phenotype model support and help options
+
+Are you just starting to explore the functionality of the Onco Phenotype model? Perhaps you're implementing a new feature in your application. Or after using the service, do you have suggestions on how to improve it? Here are options for where you can get support, stay up-to-date, give feedback, and report bugs for Project Health Insights.
+
+## Create an Azure support request
+
+Explore the range of [Azure support options and choose the plan](https://azure.microsoft.com/support/plans) that best fits your needs, whether you're a developer just starting your cloud journey or a large organization deploying business-critical, strategic applications. Azure customers can create and manage support requests in the Azure portal.
+
+* [Azure portal](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/overview)
+* [Azure portal for the United States government](https://portal.azure.us)
++
+## Post a question on Microsoft Q&A
+
+For quick and reliable answers on your technical product questions from Microsoft Engineers, Azure Most Valuable Professionals (MVPs), or our expert community, engage with us on [Microsoft Q&A](/answers/products/azure?product=all), Azure's preferred destination for community support.
azure-health-insights Transparency Note https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-health-insights/oncophenotype/transparency-note.md
+
+ Title: Transparency Note for Onco Phenotype
+description: Transparency Note for Onco Phenotype
++++ Last updated : 04/11/2023+++
+# Transparency Note for Onco Phenotype
+
+## What is a Transparency Note?
+
+An AI system includes not only the technology, but also the people who will use it, the people who will be affected by it, and the environment in which it is deployed. Creating a system that is fit for its intended purpose requires an understanding of how the technology works, what its capabilities and limitations are, and how to achieve the best performance. Microsoft's Transparency Notes are intended to help you understand how our AI technology works, the choices system owners can make that influence system performance and behavior, and the importance of thinking about the whole system, including the technology, the people, and the environment. You can use Transparency Notes when developing or deploying your own system, or share them with the people who will use or be affected by your system.
+
+Microsoft's Transparency Notes are part of a broader effort at Microsoft to put our AI Principles into practice. To find out more, see the [Microsoft AI principles](https://www.microsoft.com/ai/responsible-ai).
+
+## The basics of Onco Phenotype
+
+### Introduction
+
+The Onco Phenotype model, available in the Project Health Insights cognitive service as an API, augments traditional clinical natural language processing (NLP) tools by helping healthcare providers rapidly identify key cancer attributes within their patient populations with an existing cancer diagnosis. You can use this model to infer tumor site; histology; clinical stage tumor (T), lymph node (N), and metastasis (M) categories; and pathologic stage TNM categories from unstructured clinical documents, along with confidence scores and relevant evidence.
+
+### Key terms
+
+| Term | Definition |
+| | - |
+| Tumor site | The location of the primary tumor. |
+| Histology | The cell type of a given tumor. |
+| Clinical stage | Clinical stage helps users determine the nature and extent of cancer based on the physical examination, imaging tests, and biopsies of affected areas. |
+| Pathologic stage | Pathologic stage can be determined only from individual patients who have had surgery to remove a tumor or otherwise to explore the extent of the cancer. Pathologic stage combines the results of clinical stage (physical exam, imaging test) with surgical results. |
+| TNM categories | TNM categories indicate the extent of the tumor (T), the extent of spread to the lymph nodes (N), and the presence of metastasis (M). |
+| ICD-O-3 | _International Classification of Diseases for Oncology, Third Edition_. The worldwide standard coding system for cancer diagnoses. |
+
+## Capabilities
+
+### System behavior
+
+The Onco Phenotype model, available in the Project Health Insights cognitive service as an API, takes in unstructured clinical documents as input and returns inferences for cancer attributes along with confidence scores as output. Through the model configuration as part of the API request, it also allows the user to seek evidence with the inference values and to explicitly check for the existence of a cancer case before generating the inferences for cancer attributes.
++
+Upon receiving a valid API request to process the unstructured clinical documents, a job is created and the request is processed asynchronously. The status of the job and the inferences (upon successful job completion) can be accessed by using the job ID. The job results are available for only 24 hours and are purged thereafter.
+
+### Use cases
+
+#### Intended uses
+
+The Onco Phenotype model can be used in the following scenario. The system's intended uses include:
+
+- **Assisted annotation and curation:** To help healthcare systems and cancer registrars identify and extract cancer attributes for regulatory purposes and for downstream tasks such as clinical trials matching, research cohort discovery, and molecular tumor board discussions.
+
+#### Considerations when choosing a use case
+
+We encourage customers to use the Onco Phenotype model in their innovative solutions or applications. However, here are some considerations when choosing a use case:
+
+- **Avoid scenarios that use personal health information for a purpose not permitted by patient consent or applicable law.** Health information has special protections regarding privacy and consent. Make sure that all data you use has patient consent for the way you use the data in your system or you're otherwise compliant with applicable law as it relates to the use of health information.
+- **Facilitate human review and inference error corrections.** Given the sensitive nature of health information, it's essential that a human review the source data and correct any inference errors.
+- **Avoid scenarios that use this service as a medical device, for clinical support, or as a diagnostic tool or workflow without a human in the loop.** The system wasn't designed for use as a medical device, for clinical support, or as a diagnostic tool for the diagnosis, cure, mitigation, treatment, or prevention of disease or other conditions without human intervention. A qualified professional should always verify the inferences and relevant evidence before finalizing or relying on the information.
+
+## Limitations
+
+### Technical limitations, operational factors, and ranges
+
+Specific characteristics and limitations of the Onco Phenotype model include:
+
+- **Multiple cancer cases for a patient:** The model infers only a single set of phenotype values (tumor site, histology, and clinical/pathologic stage TNM categories) per patient. If the model is given an input with multiple primary cancer diagnoses, the behavior is undefined and might mix elements from the separate diagnoses.
+- **Inference values for tumor site and histology:** The inference values are only as exhaustive as the training dataset labels. If the model is presented with a cancer case for which the true tumor site or histology wasn't encountered during training (for example, a rare tumor site or histology), the model will be unable to produce a correct inference result.
+- **Clinical/pathologic stage (TNM categories):** The model doesn't currently identify the initiation of a patient's definitive treatment. Therefore, it might use clinical stage evidence to infer a pathologic stage value or vice-versa. Manual review should verify that appropriate evidence supports clinical and pathologic stage results. The model doesn't predict subcategories or isolated tumor cell modifiers. For instance, T3a would be predicted as T3, and N0(i+) would be predicted as N0.
+
+## System performance
+
+In many AI systems, performance is often defined in relation to accuracy, or by how often the AI system offers a correct prediction or output. Depending on the workflow or scenario, you can leverage the confidence scores that are returned with each inference and choose to set thresholds based on the tolerance for incorrect inferences. The performance of the system can be assessed by computing statistics based on true positive, true negative, false positive, and false negative instances. For example, for tumor site predictions, one can consider a particular tumor site (like lung) as the positive class and all other sites, including not having one, as the negative class. Using the lung tumor site as an example positive class, the following table illustrates different outcomes.
+
+| **Outcome** | **Correct/Incorrect** | **Definition** | **Example** |
+| -- | | -- | -- |
+| True Positive | Correct | The system returns the tumor site as lung and that would be expected from a human judge. | The system correctly infers the tumor site as lung on the clinical documents of a lung cancer patient. |
+| True Negative | Correct | The system doesn't return the tumor site as lung, and this aligns with what would be expected from a human judge. | The system returns the tumor site as breast on the clinical documents of a breast cancer patient. |
+| False Positive | Incorrect | The system returns the tumor site as lung where a human judge wouldn't. | The system returns the tumor site as lung on the clinical documents of a breast cancer patient. |
+| False Negative | Incorrect | The system doesn't return the tumor site as lung where a human judge would identify it as lung. | The system returns the tumor site as breast on the clinical documents of a lung cancer patient. |
+
+### Best practices for improving system performance
+
+For each inference, the Onco Phenotype model returns a confidence score that expresses how confident the model is with the response. Confidence scores range from 0 to 1. The higher the confidence score, the more certain the model is about the inference value it provided. However, the system isn't designed for workflows or scenarios without a human in the loop, and inference values shouldn't be consumed without human review, irrespective of the confidence score. You can choose to completely discard an inference value if its confidence score is below a threshold that best suits the scenario.
+
+## Evaluation of Onco Phenotype
+
+### Evaluation methods
+
+The Onco Phenotype model was evaluated on a held-out dataset that shares the same characteristics as the training dataset. The training and held-out datasets consist of patients located only in the United States. The patient races include White or Caucasian, Black or African American, Asian, Native Hawaiian or Pacific Islander, American Indian or Alaska native, and Other. During model development and training, a separate development dataset was used for error analysis and model improvement.
+
+### Evaluation results
+
+Although the Onco Phenotype model makes mistakes on the held-out dataset, it was observed that the inferences and the evidence spans identified by the model are helpful in speeding up the manual curation effort.
+
+Microsoft has also tested the generalizability of the model by evaluating the trained model on a secondary dataset that was collected from a different hospital system and was unavailable during training. A limited performance decrease was observed on the secondary dataset.
+
+#### Fairness considerations
+
+At Microsoft, we strive to empower every person on the planet to achieve more. An essential part of this goal is working to create technologies and products that are fair and inclusive. Fairness is a multi-dimensional, sociotechnical topic and impacts many different aspects of our product development. You can learn more about Microsoft's approach to fairness [here](https://www.microsoft.com/ai/responsible-ai?rtc=1&activetab=pivot1:primaryr6).
+
+One dimension we need to consider is how well the system performs for different groups of people. This might include looking at the accuracy of the model and measuring the performance of the complete system. Research has shown that without conscious effort focused on improving performance for all groups, it's often possible for the performance of an AI system to vary across groups based on factors such as race, ethnicity, language, gender, and age.
+
+The evaluation performance of the Onco Phenotype model was stratified by race to ensure minimal performance discrepancy between different patient racial groups. The lowest performance by racial group is well within 80% of the highest performance by racial group. When the evaluation performance was stratified by gender, there was no significant difference.
+
+However, each use case is different, and our testing might not perfectly match your context or cover all scenarios that are required for your use case. We encourage you to thoroughly evaluate error rates for the service by using real-world data that reflects your use case, including testing with users from different demographic groups.
+
+## Evaluating and integrating Onco Phenotype for your use
+
+As Microsoft works to help customers safely develop and deploy solutions that use the Onco Phenotype model, we offer guidance for considering the AI systems' fairness, reliability & safety, privacy & security, inclusiveness, transparency, and human accountability. These considerations are in line with our commitment to developing responsible AI.
+
+When getting ready to integrate and use AI-powered products or features, the following activities help set you up for success:
+
+- **Understand what it can do:** Fully vet and review the capabilities of Onco Phenotype to understand its capabilities and limitations.
+- **Test with real, diverse data:** Understand how Onco Phenotype will perform in your scenario by thoroughly testing it by using real-life conditions and data that reflects the diversity in your users, geography, and deployment contexts. Small datasets, synthetic data, and tests that don't reflect your end-to-end scenario are unlikely to sufficiently represent your production performance.
+- **Respect an individual's right to privacy:** Collect data and information from individuals only for lawful and justifiable purposes. Use data and information that you have consent to use only for this purpose.
+- **Legal review:** Obtain appropriate legal advice to review your solution, particularly if you'll use it in sensitive or high-risk applications. Understand what restrictions you might need to work within and your responsibility to resolve any issues that might come up in the future.
+- **System review:** If you're planning to integrate and responsibly use an AI-powered product or feature in an existing system of software or in customer and organizational processes, take the time to understand how each part of your system will be affected. Consider how your AI solution aligns with Microsoft's Responsible AI principles.
+- **Human in the loop:** Keep a human in the loop. This means ensuring constant human oversight of the AI-powered product or feature and maintaining the role of humans in decision-making. Ensure that you can have real-time human intervention in the solution to prevent harm. This enables you to manage where the AI model doesn't perform as expected.
+- **Security:** Ensure that your solution is secure and that it has adequate controls to preserve the integrity of your content and prevent unauthorized access.
+- **Customer feedback loop:** Provide a feedback channel that allows users and individuals to report issues with the service after it's deployed. After you've deployed an AI-powered product or feature, it requires ongoing monitoring and improvement. Be ready to implement any feedback and suggestions for improvement.
+
+## Learn more about responsible AI
+
+[Microsoft AI Principles](https://www.microsoft.com/ai/responsible-ai?activetab=pivot1%3aprimaryr6)
+
+[Microsoft responsible AI resources](https://www.microsoft.com/ai/responsible-ai-resources)
+
+[Microsoft Azure Learning courses on responsible AI](/training/paths/responsible-ai-business-principles/)
+
+## Learn more about Onco Phenotype
+
+[Overview of Onco Phenotype](overview.md)
+
+## Contact us
+
+[Give us feedback on this document](mailto:health-ai-feedback@microsoft.com).
+
+## About this document
+
+© 2023 Microsoft Corporation. All rights reserved. This document is provided "as-is" and for informational purposes only. Information and views expressed in this document, including URL and other Internet Web site references, may change without notice. You bear the risk of using it. Some examples are for illustration only and are fictitious. No real association is intended or inferred.
azure-health-insights Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-health-insights/overview.md
+
+ Title: What is Project Health Insights (Preview)
+
+description: Improved quality of health care and Improved efficiency and cost-benefit, by reducing the time spent by healthcare professional
+++++ Last updated : 02/02/2023+++
+# What is Project Health Insights (Preview)?
+
+Project Health Insights is a Cognitive Service providing an API that serves insight models, which perform analysis and provide inferences to be used by a human. The models can receive input in different modalities and return insight inferences, including evidence, for key high-value scenarios in the health domain.
+
+> [!IMPORTANT]
+> Project Health Insights is a capability provided "AS IS" and "WITH ALL FAULTS." Project Health Insights isn't intended or made available for use as a medical device, clinical support, diagnostic tool, or other technology intended to be used in the diagnosis, cure, mitigation, treatment, or prevention of disease or other conditions, and no license or right is granted by Microsoft to use this capability for such purposes. This capability isn't designed or intended to be implemented or deployed as a substitute for professional medical advice or healthcare opinion, diagnosis, treatment, or the clinical judgment of a healthcare professional, and should not be used as such. The customer is solely responsible for any use of Project Health Insights.
+
+## Why use Project Health Insights?
+
+Health and Life Sciences organizations have multiple high-value business problems that require clinical insight inferences based on clinical data.
+Project Health Insights is a Cognitive Service that provides prebuilt models that assist with solving those business problems.
+
+## Available models
+
+There are currently two models available in Project Health Insights:
+
+The [Trial Matcher](./trial-matcher/overview.md) model receives patients' data and clinical trials protocols, and provides relevant clinical trials based on eligibility criteria.
+
+The [Onco Phenotype](./oncophenotype/overview.md) model receives clinical records of oncology patients and outputs cancer staging, such as **clinical stage TNM categories** and **pathologic stage TNM categories**, as well as **tumor site** and **histology**.
++
+## Architecture
+
+![Diagram that shows Project Health Insights architecture.](media/architecture.png)
+
+The Project Health Insights service receives patient data through multiple input channels: unstructured healthcare data, FHIR resources, or data in a specific JSON format, combined with the correct model configuration, such as ```includeEvidence```.
+With these input channels and configuration, the service can run the data through several health insights AI models, such as Trial Matcher or Onco Phenotype.
+
+## Next steps
+
+Review the following information to learn how to deploy Project Health Insights and to learn additional information about each of the models:
+
+>[!div class="nextstepaction"]
+> [Deploy Project Health Insights](deploy-portal.md)
+
+>[!div class="nextstepaction"]
+> [Onco Phenotype](oncophenotype/overview.md)
+
+>[!div class="nextstepaction"]
+> [Trial Matcher](trial-matcher/overview.md)
azure-health-insights Request Info https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-health-insights/request-info.md
+
+ Title: Project Health Insights request info
+description: this article describes the required properties to interact with Project Health Insights
+++++ Last updated : 02/17/2023+++
+# Project Health Insights request info
+
+This page describes the request models and parameters that are used to interact with the Project Health Insights service.
+
+## Request
+The generic part of a Project Health Insights request, common to all models.
+
+Name |Required|Type |Description
+--|--||--
+`patients`|yes |Patient[]|The list of patients, including their clinical information and data.
++
+## Patient
+A patient record, including their clinical information and data.
+
+Name|Required|Type |Description
+-|--||-
+`id` |yes |string |A given identifier for the patient. Has to be unique across all patients in a single request.
+`info`|no |PatientInfo |Patient structured information, including demographics and known structured clinical information.
+`data`|no |PatientDocument|Patient unstructured clinical data, given as documents.
+++
+## PatientInfo
+Patient structured information, including demographics and known structured clinical information.
+
+Name |Required|Type |Description
+|--|-|--
+`gender` |no |string |[ female, male, unspecified ]
+`birthDate` |no |string |The patient's date of birth.
+`clinicalInfo`|no |ClinicalCodeElement|A piece of clinical information, expressed as a code in a clinical coding system.
+
+## ClinicalCodeElement
+A piece of clinical information, expressed as a code in a clinical coding system.
+
+Name |Required|Type |Description
+|--||-
+`system`|yes |string|The clinical coding system, for example ICD-10, SNOMED-CT, UMLS.
+`code` |yes |string|The code within the given clinical coding system.
+`name` |no |string|The name of this coded concept in the coding system.
+`value` |no |string|A value associated with the code within the given clinical coding system.
++
+## PatientDocument
+A clinical unstructured document related to a patient.
+
+Name |Required|Type |Description
+|--||--
+`type` |yes |string |[ note, fhirBundle, dicom, genomicSequencing ]
+`clinicalType` |no |string |[ consultation, dischargeSummary, historyAndPhysical, procedure, progress, imaging, laboratory, pathology ]
+`id` |yes |string |A given identifier for the document. Has to be unique across all documents for a single patient.
+`language` |no |string |A 2 letter ISO 639-1 representation of the language of the document.
+`createdDateTime`|no |string |The date and time when the document was created.
+`content` |yes |DocumentContent|The content of the patient document.
+
+## DocumentContent
+The content of the patient document.
+
+Name |Required|Type |Description
+-|--||-
+`sourceType`|yes |string|The type of the content's source.<br>If the source type is 'inline', the content is given as a string (for instance, text).<br>If the source type is 'reference', the content is given as a URI. [ inline, reference ]
+`value` |yes |string|The content of the document, given either inline (as a string) or as a reference (URI).
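+
+Putting the objects described above together, a minimal request body might look like the following Python sketch. The concrete values are placeholders, and the ```type``` field name follows the table above (some Onco Phenotype examples in this documentation use ```kind``` instead).
+
+```python
+# A minimal request body assembled from the objects described above (placeholder values).
+request_body = {
+    "patients": [
+        {
+            "id": "patient1",                    # unique across all patients in the request
+            "info": {                            # PatientInfo (optional)
+                "gender": "female",
+                "birthDate": "1987-01-01",
+            },
+            "data": [                            # PatientDocument entries
+                {
+                    "type": "note",
+                    "clinicalType": "pathology",
+                    "id": "document1",           # unique across this patient's documents
+                    "language": "en",
+                    "createdDateTime": "2022-01-01T00:00:00",
+                    "content": {                 # DocumentContent
+                        "sourceType": "inline",
+                        "value": "Example clinical note text.",
+                    },
+                }
+            ],
+        }
+    ]
+}
+```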
+
+## Next steps
+
+To get started using the service, you can:
+
+>[!div class="nextstepaction"]
+> [Deploy the service via the portal](deploy-portal.md)
azure-health-insights Response Info https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-health-insights/response-info.md
+
+ Title: Project Health Insights response info
+description: this article describes the response from the service
+++++ Last updated : 02/17/2023+++
+# Project Health Insights response info
+
+This page describes the response models and parameters that are returned by the Project Health Insights service.
++
+## Response
+The generic part of a Project Health Insights response, common to all models.
+
+Name |Required|Type |Description
+|--||
+`jobId` |yes |string|A processing job identifier.
+`createdDateTime` |yes |string|The date and time when the processing job was created.
+`expirationDateTime`|yes |string|The date and time when the processing job is set to expire.
+`lastUpdateDateTime`|yes |string|The date and time when the processing job was last updated.
+`status` |yes |string|The status of the processing job. [ notStarted, running, succeeded, failed, partiallyCompleted ]
+`errors` |no |Error|An array of errors, if any errors occurred during the processing job.
+
+## Error
+
+Name |Required|Type |Description
+-|--|-|
+`code` |yes |string |Error code
+`message` |yes |string |A human-readable error message.
+`target` |no |string |Target of the particular error (for example, the name of the property in error).
+`details` |no |collection|A list of related errors that occurred during the request.
+`innererror`|no |object |An object containing more specific information about the error.
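+
+As an illustrative sketch (the helper name is hypothetical), a client could summarize the generic response fields described above like this:
+
+```python
+def summarize_job(response: dict) -> str:
+    """Summarize the generic job fields of a Project Health Insights response."""
+    status = response.get("status")
+    if status == "failed":
+        details = "; ".join(
+            f"{error.get('code')}: {error.get('message')}" for error in response.get("errors", []))
+        return f"Job {response.get('jobId')} failed: {details or 'no error details provided'}"
+    return (f"Job {response.get('jobId')} is {status}; "
+            f"results expire at {response.get('expirationDateTime')}")
+```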
+
+## Next steps
+
+To get started using the service, you can:
+
+>[!div class="nextstepaction"]
+> [Deploy the service via the portal](deploy-portal.md)
azure-health-insights Data Privacy Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-health-insights/responsible-ai/data-privacy-security.md
+
+ Title: Data, privacy, and security for Project Health Insights
+
+description: details regarding how Project Health Insights processes your data.
+++++ Last updated : 01/26/2023++++
+# Data, privacy, and security for Project Health Insights
+
+This article provides high level details regarding how Project Health Insights processes data provided by customers. As an important reminder, you're responsible for the implementation of your use case and are required to obtain all necessary permissions or other proprietary rights required to process the data you send to the system. It's your responsibility to comply with all applicable laws and regulations in your jurisdiction.
++
+## What data does it process and how?
+
+Project Health Insights:
+- processes text from the patient's clinical documents that are sent by the customer to the system for the purpose of inferring cancer attributes.
+- uses aggregate telemetry such as which APIs are used and the number of calls from each subscription and resource for service monitoring purposes.
+- doesn't store or process customer data outside the region where the customer deploys the service instance.
+- encrypts all content, including patient data, at rest.
++
+## How is data retained?
+
+- The input data sent to Project Health Insights is temporarily stored for up to 24 hours and is purged thereafter.
+- Project Health Insights response data is temporarily stored for 24 hours and is purged thereafter.
+- During requests and responses, the data is encrypted and only accessible to authorized on-call engineers for service support, if there's a catastrophic failure. Should on-call engineers access this data, internal audit logs track these operations.
+- There are no customer controls available at this time.
+
+To learn more about Microsoft's privacy and security commitments, visit the [Microsoft Trust Center](https://www.microsoft.com/trust-center).
azure-health-insights Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-health-insights/trial-matcher/faq.md
+
+ Title: Trial Matcher frequently asked questions
+
+description: Trial Matcher frequently asked questions
+++++ Last updated : 02/02/2023++++
+# Trial Matcher frequently asked questions
+
+You'll find answers to commonly asked questions about Trial Matcher, part of the Project Health Insights service, in this article.
+
+## Is there a workaround for patients whose clinical documents exceed the # character limit?
+Unfortunately, we don't support patients with clinical documents that exceed the # character limit. You might try excluding the progress notes.
+
azure-health-insights Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-health-insights/trial-matcher/get-started.md
+
+ Title: Using Trial Matcher
+
+description: This article describes how to use the Trial Matcher
+++++ Last updated : 01/27/2023++++
+# Quickstart: Use the Trial Matcher model
+
+This quickstart provides an overview of how to use the Trial Matcher.
+
+## Prerequisites
+To use Trial Matcher, you must have a Cognitive Services account created. If you haven't already created a Cognitive Services account, see [Deploy Project Health Insights using the Azure portal.](../deploy-portal.md)
+
+Once deployment is complete, you use the Azure portal to navigate to the newly created Cognitive Services account to see the details, including your Service URL. The Service URL to access your service is: https://```YOUR-NAME```.cognitiveservices.azure.com/.
++
+## Submit a request and get results
+To send an API request, you need your Cognitive Services account endpoint and key.
+![Screenshot of the Keys and Endpoints for the Trial Matcher.](../media/keys-and-endpoints.png)
+
+> [!IMPORTANT]
+> The Trial Matcher is an asynchronous API. Trial Matcher prediction is performed upon receipt of the API request and the results are returned asynchronously. The API results are available for 1 hour from the time the request was ingested, as indicated in the response. After this time period, the results are purged and are no longer available for retrieval.
+
+### Example Request
+
+To submit a request to the Trial Matcher, you need to make a POST request to the endpoint.
+
+In the following example, the patient is matched against the ```Clinicaltrials_gov``` source, for a ```lung cancer``` condition, with facility locations in the city of ```Orlando```.
+
+```http
+POST https://{your-cognitive-service-endpoint}/healthinsights/trialmatcher/jobs?api-version=2022-01-01-preview
+Content-Type: application/json
+Ocp-Apim-Subscription-Key: {your-cognitive-services-api-key}
+{
+ "Configuration": {
+ "ClinicalTrials": {
+ "RegistryFilters": [
+ {
+ "Sources": [
+ "Clinicaltrials_gov"
+ ],
+ "Conditions": ["lung cancer"],
+ "facilityLocations": [
+ {
+ "State": "FL",
+ "City": "Orlando",
+ "Country": "United States"
+ }
+ ]
+ }
+ ]
+ },
+ "IncludeEvidence": false,
+ "Verbose": false
+ },
+ "Patients": [
+ {
+ "Info": {
+ "gender": "female",
+ "birthDate": "01/01/1987",
+ "ClinicalInfo": [
+
+ ]
+ },
+ "id": "12"
+ }
+ ]
+}
+
+```
++
+The response includes the operation-location in the response header. The value looks similar to the following URL:
+```https://eastus.api.cognitive.microsoft.com/healthinsights/trialmatcher/jobs/b58f3776-c6cb-4b19-a5a7-248a0d9481ff?api_version=2022-01-01-preview```
++
+### Example Response
+
+To get the results of the request, make the following GET request to the URL specified in the POST response operation-location header.
+```http
+GET https://{your-cognitive-service-endpoint}/healthinsights/trialmatcher/jobs/{job-id}?api-version=2022-01-01-preview
+Content-Type: application/json
+Ocp-Apim-Subscription-Key: {your-cognitive-services-api-key}
+```
+
+An example response:
+
+```json
+{
+ "results": {
+ "patients": [
+ {
+ "id": "12",
+ "inferences": [
+ {
+ "type": "trialEligibility",
+ "id": "NCT03318939",
+ "source": "clinicaltrials.gov",
+ "value": "Eligible"
+ },
+ {
+ "type": "trialEligibility",
+ "id": "NCT03417882",
+ "source": "clinicaltrials.gov",
+ "value": "Eligible"
+ },
+ {
+ "type": "trialEligibility",
+ "id": "NCT02628067",
+ "source": "clinicaltrials.gov",
+ "value": "Eligible"
+ },
+ {
+ "type": "trialEligibility",
+ "id": "NCT04948554",
+ "source": "clinicaltrials.gov",
+ "value": "Eligible"
+ },
+ {
+ "type": "trialEligibility",
+ "id": "NCT04616924",
+ "source": "clinicaltrials.gov",
+ "value": "Eligible"
+ },
+ {
+ "type": "trialEligibility",
+ "id": "NCT04504916",
+ "source": "clinicaltrials.gov",
+ "value": "Eligible"
+ },
+ {
+ "type": "trialEligibility",
+ "id": "NCT02635009",
+ "source": "clinicaltrials.gov",
+ "value": "Eligible"
+ },
+ ...
+ ],
+ "neededClinicalInfo": [
+ {
+ "system": "http://www.nlm.nih.gov/research/umls",
+ "code": "METASTATIC",
+ "name": "metastatic"
+ },
+ {
+ "semanticType": "T000",
+ "system": "http://www.nlm.nih.gov/research/umls",
+ "code": "C0032961",
+ "name": "Pregnancy"
+ },
+ {
+ "semanticType": "T000",
+ "system": "http://www.nlm.nih.gov/research/umls",
+ "code": "C1512162",
+ "name": "Eastern Cooperative Oncology Group"
+ }
+ ]
+ }
+ ],
+ "modelVersion": "2022.03.24",
+ "knowledgeGraphLastUpdateDate": "2022.03.29"
+ },
+ "jobId": "26484d27-f5d7-4c74-a078-a359d1634a63",
+ "createdDateTime": "2022-04-04T16:56:00Z",
+ "expirationDateTime": "2022-04-04T17:56:00Z",
+ "lastUpdateDateTime": "2022-04-04T16:56:00Z",
+ "status": "succeeded"
+}
+```
++
+## Data limits
+
+**Limit** |**Value**
+-|-
+Maximum # patients per request |1
+Maximum # trials per patient |5000
+Maximum # location filters per request |1
++
+## Next steps
+
+To get better insights into the request and responses, read more on the following pages:
+
+>[!div class="nextstepaction"]
+> [Model configuration](model-configuration.md)
+
+>[!div class="nextstepaction"]
+> [Patient information](patient-info.md)
azure-health-insights Inferences https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-health-insights/trial-matcher/inferences.md
+
+ Title: Trial Matcher Inference information
+
+description: This article provides Trial Matcher inference information.
+++++ Last updated : 02/02/2023++++
+# Trial Matcher inference information
+
+The result of the Trial Matcher model includes a list of inferences made regarding the patient. For each trial that was queried for the patient, the model returns an indication of whether the patient appears eligible or ineligible for the trial. If the model concluded the patient is ineligible for a trial, it also provides a piece of evidence to support its conclusion (unless the ```evidence``` flag was set to false).
+
+## Example model result
+```json
+"inferences":[
+ {
+ "type":"trialEligibility",
+ "id":"NCT04140526",
+ "source":"clinicaltrials.gov",
+ "value":"Ineligible",
+ "confidenceScore":0.4
+ },
+ {
+ "type":"trialEligibility",
+ "id":"NCT04026412",
+ "source":"clinicaltrials.gov",
+ "value":"Eligible",
+ "confidenceScore":0.8
+ },
+ "..."
+]
+```
+
+## Next steps
+
+To get better insights into the request and responses, read more on the following pages:
+
+>[!div class="nextstepaction"]
+> [Model configuration](model-configuration.md)
azure-health-insights Integration And Responsible Use https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-health-insights/trial-matcher/integration-and-responsible-use.md
+
+ Title: Guidance for integration and responsible use with Trial Matcher
+
+description: Microsoft wants to help you responsibly develop and deploy solutions that use Trial Matcher.
+++++ Last updated : 01/27/2023+++
+# Integration and responsible use with Trial Matcher
+
+As Microsoft works to help customers safely develop and deploy solutions using the Trial Matcher, we're taking a principled approach to upholding personal agency and dignity by considering the AI systems' fairness, reliability & safety, privacy & security, inclusiveness, transparency, and human accountability. These considerations are in line with our commitment to developing Responsible AI.
+
+## General guidelines
+
+When getting ready to integrate and use AI-powered products or features, the following activities help set you up for success:
+- **Understand what it can do**: Fully vet and review any AI model you're using to understand its capabilities and limitations.
+
+- **Test with real, diverse data**: Understand how your system will perform in your scenario by thoroughly testing it with real life conditions and data that reflects the diversity in your users, geography and deployment contexts. Small datasets, synthetic data and tests that don't reflect your end-to-end scenario are unlikely to sufficiently represent your production performance.
+
+- **Respect an individual's right to privacy**: Only collect data and information from individuals for lawful and justifiable purposes. Only use data and information that you have consent to use for this purpose.
+
+- **Legal review**: Obtain appropriate legal advice to review your solution, particularly if you will use it in sensitive or high-risk applications. Understand what restrictions you might need to work within and your responsibility to resolve any issues that might come up in the future.
+
+- **System review**: If you're planning to integrate and responsibly use an AI-powered product or feature into an existing system of software, customers, and organizational processes, take the time to understand how each part of your system will be affected. Consider how your AI solution aligns with Microsoft's Responsible AI principles.
+
+- **Human in the loop**: Keep a human in the loop. This means ensuring constant human oversight of the AI-powered product or feature and maintaining the role of humans in decision-making. Ensure you can have real-time human intervention in the solution to prevent harm. This lets you manage situations where the AI model doesn't perform as required.
+
+- **Security**: Ensure your solution is secure and has adequate controls to preserve the integrity of your content and prevent any unauthorized access.
+
+- **Customer feedback loop**: Provide a feedback channel that allows users and individuals to report issues with the service once it's been deployed. Once you've deployed an AI-powered product or feature, it requires ongoing monitoring and improvement. Be ready to implement any feedback and suggestions for improvement.
++
+## Integration and responsible use for Patient Health Information (PHI)
+
+ - **Healthcare related data protections**: Healthcare data has special protections in various jurisdictions. Given the sensitive nature of health related data, make sure you know the regulations for your jurisdiction and take special care for security and data requirements when building your system. The Azure architecture center has [articles](/azure/architecture/example-scenario/data/azure-health-data-consortium) on storing health data and engineering compliance with HIPAA and HITRUST that you may find helpful.
+ - **Protecting PHI**: The health feature doesn't anonymize the data you send to the service. If your system presents the response from the system with the original data, you may want to consider appropriate measures to identify and remove PHI entities.
++
+## Learn more about Responsible AI
+- [Microsoft Responsible AI principles](https://www.microsoft.com/ai/responsible-ai)
+- [Microsoft Responsible AI resources](https://www.microsoft.com/ai/responsible-ai-resources)
+- [Microsoft Azure Learning courses on Responsible AI](/training/paths/responsible-ai-business-principles/)
azure-health-insights Model Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-health-insights/trial-matcher/model-configuration.md
+
+ Title: Trial Matcher model configuration
+
+description: This article provides Trial Matcher model configuration information.
+++++ Last updated : 02/02/2023+++
+# Trial Matcher model configuration
+
+The Trial Matcher includes a built-in knowledge graph, which uses trials taken from [clinicaltrials.gov](https://clinicaltrials.gov/) and is updated periodically.
+
+When you're matching patients to trials, you can define a list of filters to query a subset of clinical trials. Each filter can be defined based on ```trial conditions```, ```types```, ```recruitment statuses```, ```sponsors```, ```phases```, ```purposes```, ```facility names```, ```locations```, or ```trial IDs```.
+- Specifying multiple values for the same filter category results in a trial set that is the union of those sets.
++
+In the following configuration, the model queries trials that are in recruitment status ```recruiting``` or ```not yet recruiting```.
+
+```json
+"recruitmentStatuses": ["recruiting", "notYetRecruiting"]
+```
++
+- Specifying multiple filter categories results in a trial set that is the intersection of the sets.
+In the following case, only trials for diabetes that are recruiting in Illinois are queried.
+Leaving a category empty doesn't limit the trials by that category.
+
+```json
+"registryFilters": [
+ {
+ "conditions": [
+ "Diabetes"
+ ],
+ "sources": [
+ "clinicaltrials.gov"
+ ],
+ "facilityLocations": [
+ {
+ "country": "United States",
+ "state": "IL"
+ }
+ ],
+ "recruitmentStatuses": [
+ "recruiting"
+ ]
+ }
+]
+```
+
+## Evidence
+Evidence is an indication of whether the model's output should include evidence for the inferences. The default value is true. For each trial for which the model concluded the patient is ineligible, the model returns the relevant patient information and the eligibility criteria that were used to exclude the patient from the trial.
+
+```json
+{
+ "type": "trialEligibility",
+ "evidence": [
+ {
+ "eligibilityCriteriaEvidence": "Inclusion: Patient must have an Eastern Cooperative Oncology Group performance status of 0 or 1 The diagnosis of invasive adenocarcinoma of the breast must have been made by core needle biopsy.",
+ "patientInfoEvidence": {
+ "system": "http://www.nlm.nih.gov/research/umls",
+ "code": "C1512162",
+ "name": "Eastern Cooperative Oncology Group",
+ "value": "2"
+ }
+ },
+ {
+ "eligibilityCriteriaEvidence": "Inclusion: Blood counts performed within 6 weeks prior to initiating chemotherapy must meet the following criteria: absolute neutrophil count must be greater than or equal 1200 / mm3 ;, platelet count must be greater than or equal 100,000 / mm3 ; and",
+ "patientInfoEvidence": {
+ "system": "http://www.nlm.nih.gov/research/umls",
+ "code": "C0032181",
+ "name": "Platelet Count measurement",
+ "value": "75000"
+ }
+ }
+ ],
+ "id": "NCT03412643",
+ "source": "clinicaltrials.gov",
+  "value": "Ineligible"
+}
+```
+
+## Verbose
+Verbose is an indication of whether the model should return trial information. The default value is false. If set to ```true```, the model returns trial information including ```Title```, ```Phase```, ```Type```, ```Recruitment status```, ```Sponsors```, ```Contacts```, and ```Facilities```.
+
+If you use [gradual matching](./trial-matcher-modes.md), verbose output is typically requested in the last stage of the qualification process, before displaying trial results.
++
+```json
+{
+ "type": "trialEligibility",
+ "id": "NCT03513939",
+ "source": "clinicaltrials.gov",
+ "metadata": {
+ "phases": [
+ "phase1",
+ "phase2"
+ ],
+ "studyType": "interventional",
+ "recruitmentStatus": "recruiting",
+ "sponsors": [
+ "Sernova Corp",
+ "CTI Clinical Trial and Consulting Services",
+ "Juvenile Diabetes Research Foundation",
+ "University of Chicago"
+ ],
+ "contacts": [
+ {
+ "name": "Frank, MD, PhD",
+ "email": "frank@surgery.uchicago.edu",
+ "phone": "999-702-2447"
+ }
+ ],
+ "facilities": [
+ {
+ "name": "University of Chicago Medical Center",
+ "city": "Chicago",
+ "state": "Illinois",
+ "country": "United States"
+ }
+ ]
+ },
+ "value": "Eligible",
+  "description": "A Safety, Tolerability and Efficacy Study of Sernova's Cell Pouch™ for Clinical Islet Transplantation"
+}
+```
+++
+## Adding custom trials
+Trial Matcher can receive the eligibility criteria of a clinical trial in the format of a custom trial. The user of the service should provide the eligibility criteria section of the custom trial, as text, in a format similar to the format of clinicaltrials.gov (same indentation and structure).
+A custom trial can be provided as a single trial to match a patient to, as a list of custom trials, or as an addition to the clinicaltrials.gov knowledge graph.
+To provide a custom trial, the input to the Trial Matcher service should include ```ClinicalTrialRegisteryFilter.sources``` with the value ```custom```.
+
+```json
+{
+ "Configuration":{
+ "ClinicalTrials":{
+ "CustomTrials":[
+ {
+ "Id":"CustomTrial1",
+ "EligibilityCriteriaText":"INCLUSION CRITERIA:\n\n 1. Patients diagnosed with Diabetes\n\n2. patients diagnosed with cancer\n\nEXCLUSION CRITERIA:\n\n1. patients with RET gene alteration\n\n 2. patients taking Aspirin\n\n3. patients treated with Chemotherapy\n\n",
+ "Demographics":{
+ "AcceptedGenders":[
+ "Female"
+ ],
+ "AcceptedAgeRange":{
+ "MinimumAge":{
+ "Unit":"Years",
+ "Value":0
+ },
+ "MaximumAge":{
+ "Unit":"Years",
+ "Value":100
+ }
+ }
+ },
+ "Metadata":{
+ "Phases":[
+ "Phase1"
+ ],
+ "StudyType":"Interventional",
+ "RecruitmentStatus":"Recruiting",
+ "Conditions":[
+ "Diabetes"
+ ],
+ "Sponsors":[
+ "sponsor1",
+ "sponsor2"
+ ],
+ "Contacts":[
+ {
+ "Name":"contact1",
+ "Email":"email1",
+ "Phone":"01"
+ },
+ {
+ "Name":"contact2",
+ "Email":"email2",
+ "Phone":"03"
+ }
+ ]
+ }
+ }
+ ]
+ },
+ "Verbose":true,
+ "IncludeEvidence":true
+ },
+ "Patients":[
+ {
+ "Id":"Patient1",
+ "Info":{
+ "Gender":"Female",
+ "BirthDate":"2002-07-19T10:58:02.7500649+00:00",
+ "ClinicalInfo":[
+ {
+ "System":"http://www.nlm.nih.gov/research/umls",
+ "Code":"C0011849",
+ "Name":"Diabetes",
+ "Value":"True;EntityType:DIAGNOSIS"
+ },
+ {
+ "System":"http://www.nlm.nih.gov/research/umls",
+ "Code":"C0004057",
+ "Name":"aspirin",
+ "Value":"False;EntityType:MedicationName"
+ }
+ ]
+ }
+ }
+ ]
+}
+```
+
+## Next steps
+
+To get started using the Trial Matcher model, refer to
+
+>[!div class="nextstepaction"]
+> [Deploy the service via the portal](../deploy-portal.md)
azure-health-insights Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-health-insights/trial-matcher/overview.md
+
+ Title: What is Trial Matcher (Preview)
+
+description: Trial Matcher is designed to match patients to potentially suitable clinical trials and to find a group of potentially eligible patients for a list of clinical trials.
+++++ Last updated : 01/27/2023+++
+# What is Trial Matcher (Preview)?
+
+The Trial Matcher is an AI model, offered within the context of the broader Project Health Insights. Trial Matcher is designed to match patients to potentially suitable clinical trials or to find a group of potentially eligible patients for a list of clinical trials.
+
+- Trial Matcher receives a list of patients, including their relevant health information and trial configuration. It then returns a list of inferences: whether the patient appears eligible or not eligible for each trial.
+- When a patient appears to be ineligible for a trial, the model provides evidence to support its conclusion.
+- In addition to inferences, the model also indicates if any necessary clinical information required to qualify patients for trials hasn't yet been provided. This information can be sent back to the model to continue the qualification process for more accurate matching.
+
+## Two different modes
+
+Trial Matcher provides users of the service with two main modes of operation: **patient centric** and **clinical trial centric**.
+
+- In **patient centric** mode, the Trial Matcher model bases the patient matching on the clinical condition, location, priorities, eligibility criteria, and other criteria that the patient and/or service users may choose to prioritize. The model helps narrow down and prioritize the set of relevant clinical trials to a smaller set of trials that the specific patient appears to be qualified for.
+- In **clinical trial centric** mode, the Trial Matcher finds a group of patients who are potentially eligible for a clinical trial. The Trial Matcher narrows down the patients, first filtering on clinical condition and selected clinical observations, and then focuses on the patients who met the baseline criteria, to find the group of patients that appears to be eligible for the trial.
+
+## Trial information and eligibility
+
+The Trial Matcher uses trial information and eligibility criteria from [clinicaltrials.gov](https://clinicaltrials.gov/). Trial information is updated on a periodic basis. In addition, the Trial Matcher can receive custom trial information and eligibility criteria that were provided by the service user, in case a trial isn't yet published in [clinicaltrials.gov](https://clinicaltrials.gov/).
++
+> [!IMPORTANT]
+> Trial Matcher is a capability provided "AS IS" and "WITH ALL FAULTS." Trial Matcher isn't intended or made available for use as a medical device, clinical support, diagnostic tool, or other technology intended to be used in the diagnosis, cure, mitigation, treatment, or prevention of disease or other conditions, and no license or right is granted by Microsoft to use this capability for such purposes. This capability isn't designed or intended to be implemented or deployed as a substitute for professional medical advice or healthcare opinion, diagnosis, treatment, or the clinical judgment of a healthcare professional, and should not be used as such. The customer is solely responsible for any use of Trial Matcher.
++
+## Azure Health Bot Integration
+
+Trial Matcher comes with a template for the [Azure Health Bot](/azure/health-bot/), a service that creates virtual assistants for healthcare. It can communicate with Trial Matcher to help users match to clinical trials using a conversational mechanism.
+
+- The Azure Health Bot template includes a LUIS language model and a resource file that integrates Trial Matcher with Azure Health Bot and demonstrates how to use it.
+- The template also includes example scenarios and specific steps to send custom telemetry events to Application Insights. This enables customers to produce analytics and get insights on usage.
+- Customers can completely customize the Health Bot scenarios and localize the strings into any language.
+Contact the product team to get the Trial Matcher template for the Azure Health Bot.
+++
+## Language support
+
+Trial Matcher currently supports the English language.
+
+## Limits and quotas
+For the public preview, you can select the F0 (free) SKU.
+Official pricing will be released after the public preview.
+
+## Next steps
+
+To get started using the Trial Matcher:
+
+>[!div class="nextstepaction"]
+> [Deploy the service via the portal](../deploy-portal.md)
azure-health-insights Patient Info https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-health-insights/trial-matcher/patient-info.md
+
+ Title: Trial Matcher patient info
+
+description: This article describes how and which patient information can be sent to the Trial Matcher
+++++ Last updated : 02/02/2023++++
+# Trial Matcher patient info
+
+Trial Matcher uses patient information to match relevant patient(s) with the clinical trial(s). You can provide the information in four different ways:
+
+- Unstructured clinical notes
+- FHIR bundles
+- Gradual matching (question and answer)
+- JSON key/value
+
+## Unstructured clinical note
+
+Patient data can be provided to the Trial Matcher as an unstructured clinical note.
+The Trial Matcher performs a prior step of language understanding to analyze the unstructured text, retrieves the patient clinical information, and builds the patient data into structured data.
+
+When providing patient data in clinical notes, use ```note``` value for ```Patient.PatientDocument.type```.
+Currently, Trial Matcher only supports one clinical note per patient.
+
+The following example shows how to provide patient information as an unstructured clinical note:
+
+```json
+{
+ "configuration":{
+ "clinicalTrials":{
+ "registryFilters":[
+ {
+ "conditions":[
+ "Cancer"
+ ],
+ "sources":[
+ "clinicaltrials.gov"
+ ],
+ "facilityLocations":[
+ {
+ "state":"IL",
+ "country":"United States"
+ }
+ ]
+ }
+ ]
+ },
+ "verbose":true,
+ "includeEvidence":true
+ },
+ "patients":[
+ {
+ "id":"patient_1",
+ "info":{
+ "gender":"Male",
+ "birthDate":"2000-03-17",
+ "clinicalInfo":[
+ {
+ "system":"http://www.nlm.nih.gov/research/umls",
+ "code":"C0006826",
+ "name":"MalignantNeoplasms",
+ "value":"true"
+ }
+ ]
+ },
+ "data":[
+ {
+ "type":"Note",
+ "clinicalType":"Consultation",
+ "id":"12-consult_15",
+ "content":{
+ "sourceType":"Inline",
+ "value":"TITLE: Cardiology Consult\r\n DIVISION OF CARDIOLOGY\r\n COMPREHENSIVE CONSULTATION NOTE\r\nCHIEF COMPLAINT: Patient is seen in consultation today at the\r\nrequest of Dr. [**Last Name (STitle) 13959**]. We are asked to give consultative advice\r\nregarding evaluation and management of Acute CHF.\r\nHISTORY OF PRESENT ILLNESS:\r\n71 year old man with CAD w\/ diastolic dysfunction, CKD, Renal\r\nCell CA s\/p left nephrectomy, CLL, known lung masses and recent\r\nbrochial artery bleed, s\/p embolization of LLL bronchial artery\r\n[**1-17**], readmitted with hemoptysis on [**2120-2-3**] from [**Hospital 328**] [**Hospital 9250**]\r\ntransferred from BMT floor following second episode of hypoxic\r\nrespiratory failure, HTN and tachycardia in 3 days. Per report,\r\non the evening of transfer to the [**Hospital Unit Name 1**], patient continued to\r\nremain tachypnic in upper 30s and was receiving IVF NS at\r\n100cc\/hr for concern of hypovolemic hypernatremia. He also had\r\nreceived 1unit PRBCs with temp rise for 98.3 to 100.4, he was\r\ncultured at that time, and transfusion rxn work up was initiated.\r\nAt around 5:30am, he was found to be newly hypertensive with SBP\r\n>200 with a regular tachycardia to 160 with new hypoxia requiring\r\nshovel mask. He received 1mg IV ativan, 1mg morphine, lasix 40mg\r\nIV x1, and lopressor 5mg IV. ABG 7.20\/63\/61 on shovel mask. "
+ }
+ }
+ ]
+ }
+ ]
+}
+ ```
+
+## FHIR bundles
+Patient data can be provided to the Trial Matcher as a FHIR bundle. Patient data in FHIR bundle format can either be retrieved from a FHIR Server or from an EMR/EHR system that provides a FHIR interface.
+
+Trial Matcher supports USCore profiles and mCode profiles.
+
+When providing patient data as a FHIR Bundle, use ```fhirBundle``` value for ```Patient.PatientDocument.type```.
+The value of the ```fhirBundle``` should be provided as a reference with the content, including the reference URI.
+
+The following example shows how to provide patient information as a FHIR Bundle:
+
+ ```json
+{
+ "configuration": {
+ "clinicalTrials": {
+ "registryFilters": [
+ {
+ "conditions": [
+ "Cancer"
+ ],
+ "phases": [
+ "phase1"
+ ],
+ "sources": [
+ "clinicaltrials.gov"
+ ],
+ "facilityLocations": [
+ {
+ "state": "CA",
+ "country": "United States"
+ }
+ ]
+ }
+ ]
+ },
+ "verbose": true,
+ "includeEvidence": true
+ },
+ "patients": [
+ {
+ "id": "patient_1",
+ "info": {
+ "gender": "Female",
+ "birthDate": "2000-03-17"
+ },
+ "data": [
+ {
+ "type": "FhirBundle",
+ "clinicalType": "Consultation",
+ "id": "Consultation-14-Demo",
+ "content": {
+ "sourceType": "Inline",
+ "value": "{\"resourceType\":\"Bundle\",\"id\":\"1ca45d61-eb04-4c7d-9784-05e31e03e3c6\",\"meta\":{\"profile\":[\"http://hl7.org/fhir/4.0.1/StructureDefinition/Bundle\"]},\"identifier\":{\"system\":\"urn:ietf:rfc:3986\",\"value\":\"urn:uuid:1ca45d61-eb04-4c7d-9784-05e31e03e3c6\"},\"type\":\"document\",\"entry\":[{\"fullUrl\":\"Composition/baff5da4-0b29-4a57-906d-0e23d6d49eea\",\"resource\":{\"resourceType\":\"Composition\",\"id\":\"baff5da4-0b29-4a57-906d-0e23d6d49eea\",\"status\":\"final\",\"type\":{\"coding\":[{\"system\":\"http://loinc.org\",\"code\":\"11488-4\",\"display\":\"Consult note\"}],\"text\":\"Consult note\"},\"subject\":{\"reference\":\"Patient/894a042e-625c-48b3-a710-759e09454897\",\"type\":\"Patient\"},\"encounter\":{\"reference\":\"Encounter/d6535404-17da-4282-82c2-2eb7b9b86a47\",\"type\":\"Encounter\",\"display\":\"unknown\"},\"date\":\"2022-08-16\",\"author\":[{\"reference\":\"Practitioner/082e9fc4-7483-4ef8-b83d-ea0733859cdc\",\"type\":\"Practitioner\",\"display\":\"Unknown\"}],\"title\":\"Consult note\",\"section\":[{\"title\":\"Chief Complaint\",\"code\":{\"coding\":[{\"system\":\"http://loinc.org\",\"code\":\"46239-0\",\"display\":\"Reason for visit and chief complaint\"}],\"text\":\"Chief Complaint\"},\"text\":{\"div\":\"<div>\\r\\n\\t\\t\\t\\t\\t\\t\\t<h1>Chief Complaint</h1>\\r\\n\\t\\t\\t\\t\\t\\t\\t<p>\\\"swelling of tongue and difficulty breathing and swallowing\\\"</p>\\r\\n\\t\\t\\t\\t\\t</div>\"},\"entry\":[{\"reference\":\"List/a7ba1fc8-7544-4f1a-ac4e-c0430159001f\",\"type\":\"List\",\"display\":\"Chief Complaint\"}]},{\"title\":\"History of Present Illness\",\"code\":{\"coding\":[{\"system\":\"http://loinc.org\",\"code\":\"10164-2\",\"display\":\"History of present illness\"}],\"text\":\"History of Present Illness\"},\"text\":{\"div\":\"<div>\\r\\n\\t\\t\\t\\t\\t\\t\\t<h1>History of Present Illness</h1>\\r\\n\\t\\t\\t\\t\\t\\t\\t<p>77 y o woman in NAD with a h/o CAD, DM2, asthma and HTN on altace for 8 years awoke from sleep around 2:30 am this morning of a sore throat and swelling of tongue. She came immediately to the ED b/c she was having difficulty swallowing and some trouble breathing due to obstruction caused by the swelling. She has never had a similar reaction ever before and she did not have any associated SOB, chest pain, itching, or nausea. She has not noticed any rashes, and has been afebrile. She says that she feels like it is swollen down in her esophagus as well. In the ED she was given 25mg benadryl IV, 125 mg solumedrol IV and pepcid 20 mg IV. This has helped the swelling some but her throat still hurts and it hurts to swallow. Nothing else was able to relieve the pain and nothing make it worse though she has not tried to drink any fluids because of trouble swallowing. She denies any recent travel, recent exposure to unusual plants or animals or other allergens. She has not started any new medications, has not used any new lotions or perfumes and has not eaten any unusual foods. 
Patient has not taken any of her oral medications today.</p>\\r\\n\\t\\t\\t\\t\\t</div>\"},\"entry\":[{\"reference\":\"List/c1c10373-6325-4339-b962-c3c114969ccd\",\"type\":\"List\",\"display\":\"History of Present Illness\"}]},{\"title\":\"Surgical History\",\"code\":{\"coding\":[{\"system\":\"http://loinc.org\",\"code\":\"10164-2\",\"display\":\"History of present illness\"}],\"text\":\"Surgical History\"},\"text\":{\"div\":\"<div>\\r\\n\\t\\t\\t\\t\\t\\t\\t<h1>Surgical History</h1>\\r\\n\\t\\t\\t\\t\\t\\t\\t<p>s/p Cardiac stent in 1999 \\r\\ns/p hystarectomy in 1970s \\r\\ns/p kidney stone retrieval 1960s</p>\\r\\n\\t\\t\\t\\t\\t</div>\"},\"entry\":[{\"reference\":\"List/1d5dcbe4-7206-4a27-b3a8-52e4d30dacfe\",\"type\":\"List\",\"display\":\"Surgical History\"}]},{\"title\":\"Medical History\",\"code\":{\"coding\":[{\"system\":\"http://loinc.org\",\"code\":\"11348-0\",\"display\":\"Past medical
+ ...."
+ }
+ }
+ ]
+ }
+ ]
+}
+
+ ```
+
+## Gradual Matching
+
+Trial Matcher can also be used with gradual matching. In this mode, you send requests to the Trial Matcher in a gradual way, via conversational intelligence or chat-like scenarios.
+
+Gradual matching uses patient information for matching, including demographics (gender and birthdate) and structured clinical information. When sending clinical information via gradual matching, it's passed as a list of ```clinicalCodedElements```. Each element is expressed in a clinical coding system as a code that's extended by semantic information and a value.
+
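+The following is a minimal, illustrative sketch of structured clinical information passed in a gradual matching request. It reuses the ```clinicalInfo``` shape from the other examples in this article; the codes and values shown are examples only, not a definitive payload.
+
+```json
+"patients": [
+  {
+    "id": "patient_1",
+    "info": {
+      "gender": "female",
+      "birthDate": "1987-01-01",
+      "clinicalInfo": [
+        {
+          "system": "http://www.nlm.nih.gov/research/umls",
+          "code": "C0006826",
+          "name": "MalignantNeoplasms",
+          "value": "true"
+        },
+        {
+          "system": "http://www.nlm.nih.gov/research/umls",
+          "code": "C1512162",
+          "name": "Eastern Cooperative Oncology Group",
+          "value": "1"
+        }
+      ]
+    }
+  }
+]
+```
+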
+### Differentiating concepts
+
+Other clinical information is derived from the eligibility criteria found in the subset of trials within the query. The model selects the **up to three** most differentiating concepts, that is, the concepts that help the most in qualifying the patient. The model only indicates concepts that appear in trials and won't suggest collecting information that isn't required and doesn't help in qualification.
+
+When you match potentially eligible patients to a clinical trial, the same concept of needed clinical info needs to be provided.
+In this case, the three most differentiating concepts for the provided clinical trial are selected.
+If more than one trial is provided, three concepts across all the provided clinical trials are selected.
+
+- Customers are expected to use the provided ```UMLSConceptsMapping.json``` file to map each selected concept with the expected answer type. Customers can also use the suggested question text to generate questions to users. Question text can also be edited and/or localized by customers.
+
+- When you send patient information back to the Trial Matcher, you can also send a ```null``` value for any concept (see the sketch after this list).
+This instructs the Trial Matcher to skip that concept, ignore it in patient qualification, and instead send the next differentiating concept in the response.
+
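+As a sketch of the skip behavior described above, a concept returned in ```neededClinicalInfo``` could be answered with a ```null``` value like the following. The code is illustrative, and the exact representation of the empty value should be verified against your API version.
+
+```json
+{
+  "system": "http://www.nlm.nih.gov/research/umls",
+  "code": "C1512162",
+  "name": "Eastern Cooperative Oncology Group",
+  "value": null
+}
+```
+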
+> [!IMPORTANT]
+> Typically, when using gradual Matching, the first request to the Trial Matcher will include a list of ```registryFilters``` based on customer configuration and user responses (e.g. condition and location). The response to the initial request will include a list of trial ```ids```. To improve performance and reduce latency, the trial ```ids``` should be used in consecutive requests directly (utilizing the ```ids``` registryFilter), instead of the original ```registryFilters``` that were used.
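+
+For illustration, a follow-up request could reuse the trial IDs returned by the initial request in an ```ids``` registry filter. This is a sketch only; the exact filter property names should be checked against the registry filter reference for your API version.
+
+```json
+"registryFilters": [
+  {
+    "ids": [ "NCT03318939", "NCT03417882" ],
+    "sources": [ "clinicaltrials.gov" ]
+  }
+]
+```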
++
+## Category concepts
+There are five different categories that are used as concepts:
+- UMLS concept ID that represents a single concept
+- UMLS concept ID that represents multiple related concepts
+- Textual concepts
+- Entity types
+- Semantic types
++
+### 1. UMLS concept ID that represents a single concept
+
+Each concept in this category is represented by a unique UMLS ID. The expected answer types can be Boolean, Numeric, or from a defined Choice set.
+
+Example concept from neededClinicalInfo API response:
+
+```json
+{
+ "system": "http://www.nlm.nih.gov/research/umls",
+ "code": "C1512162",
+ "name": "Eastern Cooperative Oncology Group"
+}
+```
+
+Example mapping for the above concept from UMLSConceptsMapping.json:
+```json
+"C1512162": {
+ "codes": "C1512162;C1520224",
+ "name": "ECOG",
+ "choices": [ "0", "1", "2", "3", "4" ],
+ "question": "What is the patient's ECOG score?",
+ "answerType": "Choice"
+}
+```
+
+Example value sent to Trial Matcher for the above category:
+```json
+{
+ "system": "http://www.nlm.nih.gov/research/umls",
+ "code": "C1512162",
+ "name": "Eastern Cooperative Oncology Group",
+ "value": "2"
+}
+```
+
+### 2. UMLS concept ID that represents multiple related concepts
+
+Certain UMLS concept IDs can represent multiple related concepts, which are typically displayed to the user as a multi-choice question, such as mental health related concepts, or TNM staging.
+In this category, answers are expected to include multiple codes and values, one for each concept that is part of the related concepts.
+
+Example concept from neededClinicalInfo API response:
+```json
+{
+ "system": "http://www.nlm.nih.gov/research/umls",
+  "code": "C0475284",
+  "name": "TNM tumor staging system"
+}
+```
+
+Example mapping for the above concept from UMLSConceptsMapping.json:
+```json
+"C0475284": {
+ "codes": "C0475284",
+ "name": "TNM tumor staging system",
+ "question": "If the patient was diagnosed with cancer, what is the patient's TNM stage?",
+ "answerType": "MultiChoice",
+ "multiChoice": {
+ "C0475455": {
+ "codes": "C0475455",
+ "name": "T (Tumor)",
+ "answerType": "Choice",
+ "choices": [ "x", "0", "is", "1", "1a", "1b", "1c", "2", "2a", "2b", "2c", "3", "3a", "3b", "3c", "4", "4a", "4b", "4c" ]
+ },
+ "C0456532": {
+ "codes": "C0456532",
+ "name": "N (Lymph nodes)",
+ "answerType": "Choice",
+ "choices": [ "x", "0", "1", "1a", "1b", "1c", "2", "2a", "2b", "2c", "3", "3a", "3b", "3c" ]
+ },
+ "C0456533": {
+ "codes": "C0456533",
+ "name": "M (Metastases)",
+ "answerType": "Choice",
+ "choices": [ "x", "0", "1", "1a", "1b", "1c" ]
+ }
+ }
+}
+```
+
+Example values sent to Trial Matcher for the above category:
+```json
+{
+ "system": "http://www.nlm.nih.gov/research/umls",
+ "code": "C0475455",
+ "name": "T (Tumor)",
+ "value": "1a"
+},
+{
+ "system": "http://www.nlm.nih.gov/research/umls",
+ "code": "C0456532",
+ "name": "N (Lymph nodes)",
+ "value": "1a"
+},
+{
+ "system": "http://www.nlm.nih.gov/research/umls",
+ "code": "C0456533",
+ "name": "M (Metastases)",
+ "value": "1"
+}
+```
+
+### 3. Textual concepts
+
+Textual concepts are concepts in which the code is a string, instead of a UMLS code. These are typically used to identify disease morphology and behavioral characteristics.
+
+Example concept from neededClinicalInfo API response:
+```json
+{
+ "system": "http://www.nlm.nih.gov/research/umls",
+ "code": "NONINVASIVE",
+ "name": "noninvasive;non invasive"
+}
+```
+
+Example mapping for the above concept from UMLSConceptsMapping.json:
+```json
+"NONINVASIVE": {
+ "codes": "noninvasive",
+ "name": "noninvasive;non invasive",
+ "question": "Was the patient diagnosed with a %p1% disease?",
+ "answerType": "Boolean"
+}
+```
+
+Example value sent to Trial Matcher for the above concept:
+```json
+{
+ "system": "http://www.nlm.nih.gov/research/umls",
+ "code": "NONINVASIVE",
+ "name": "noninvasive;non invasive",
+ "value": "true"
+}
+```
++
+### 4. Entity types
+Entity type concepts are concepts that are grouped by common entity types, such as medications, genomic and biomarker information.
+
+When entity type concepts are sent by customers to the Trial Matcher as part of the patient's clinical info, customers are expected to concatenate the entity type string to the value, separated with a semicolon.
+
+Example concept from neededClinicalInfo API response:
+```json
+{
+ "category": "GENEORPROTEIN-VARIANT",
+ "system": "http://www.nlm.nih.gov/research/umls",
+  "code": "C1414313",
+  "name": "EGFR gene",
+ "value": "EntityType:GENEORPROTEIN-VARIANT"
+}
+```
+
+Example mapping for the above category from UMLSConceptsMapping.json:
+```json
+"GENEORPROTEIN-VARIANT": {
+ "codes": "GeneOrProtein-Variant;GeneOrProtein-MutationType",
+ "question": "Does the patient carry %p1% mutation/abnormality?",
+ "name": "GeneOrProtein-Variant",
+ "answerType": "Boolean"
+}
+```
+
+Example value sent to Trial Matcher for the above category:
+```json
+{
+ "system": "http://www.nlm.nih.gov/research/umls",
+  "code": "C1414313",
+ "name": "EGFR gene",
+ "value": "true;GENEORPROTEIN-VARIANT"
+}
+```
+
+### 5. Semantic types
+Semantic type concepts are another category of concepts, grouped together by the semantic type of entities. When semantic type concepts are sent by customers to the Trial Matcher as part of the patient's clinical info, there's no need to concatenate the entity or semantic type of the entity to the value.
+
+Example concept from neededClinicalInfo API response:
+```json
+{
+ "category": "DIAGNOSIS",
+ "semanticType": "T047",
+ "system": "http://www.nlm.nih.gov/research/umls",
+ "code": "C0014130",
+ "name": "Endocrine System Diseases",
+ "value": "EntityType:DIAGNOSIS"
+}
+```
+
+Example mapping for the above category from UMLSConceptsMapping.json:
+```json
+"DIAGNOSIS,T047": {
+ "name": "Diagnosis X Disease or Syndrome",
+ "question": "Was the patient diagnosed with %p1%?",
+ "answerType": "Boolean"
+}
+```
+
+Example value sent to Trial Matcher for the above category:
+```json
+{
+ "system": "http://www.nlm.nih.gov/research/umls",
+ "code": "C0014130",
+ "name": "Endocrine System Diseases",
+ "value": "false"
+}
+```
++
+## Next steps
+
+To get started using the Trial Matcher model:
+
+>[!div class="nextstepaction"]
+> [Get started using the Trial Matcher model](./get-started.md)
azure-health-insights Support And Help https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-health-insights/trial-matcher/support-and-help.md
+
+ Title: Trial Matcher support and help options
+
+description: How to obtain help and support for questions and problems when you create applications that use with Trial Matcher
+++++ Last updated : 02/02/2023++++
+# Trial Matcher support and help options
+
+Are you just starting to explore the functionality of the Trial Matcher model? Perhaps you're implementing a new feature in your application. Or after using the service, do you have suggestions on how to improve it? Here are options for where you can get support, stay up-to-date, give feedback, and report bugs for the Trial Matcher model.
+
+## Create an Azure support request
+
+Explore the range of [Azure support options and choose the plan](https://azure.microsoft.com/support/plans) that best fits, whether you're a developer just starting your cloud journey or a large organization deploying business-critical, strategic applications. Azure customers can create and manage support requests in the Azure portal.
+
+* [Azure portal](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/overview)
+* [Azure portal for the United States government](https://portal.azure.us)
++
+## Post a question on Microsoft Q&A
+
+For quick and reliable answers on your technical product questions from Microsoft Engineers, Azure Most Valuable Professionals (MVPs), or our expert community, engage with us on [Microsoft Q&A](/answers/products/azure?product=all), Azure's preferred destination for community support.
azure-health-insights Transparency Note https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-health-insights/trial-matcher/transparency-note.md
+
+ Title: Transparency Note for Trial Matcher
+
+description: Microsoft's Transparency Notes for Trial Matcher are intended to help you understand how our AI technology works.
+++++ Last updated : 01/27/2023++++
+# Transparency Note for Trial Matcher
+
+An AI system includes not only the technology, but also the people who use it, the people who will be affected by it, and the environment in which it's deployed. Creating a system that is fit for its intended purpose requires an understanding of how the technology works, its capabilities and limitations, and how to achieve the best performance.
+
+Microsoft's Transparency Notes are intended to help you understand how our AI technology works, the choices system owners can make that influence system performance and behavior, and the importance of thinking about the whole system, including the technology, the people, and the environment. They are also part of a broader effort at Microsoft to put our AI principles into practice. To find out more, see [Microsoft AI principles](https://www.microsoft.com/ai/responsible-ai).
+
+## Example use cases for the Trial Matcher
+
+**Use case** | **Description**
+-|-
+Assisted annotation and curation | Support solutions for clinical data annotation and curation. For example: to support clinical coding, digitization of data that was manually created, automation of registry reporting.
+Decision support | Enable solutions that provide information that can assist a human in their work or support a decision made by a human.
+
+## Considerations when choosing a use case
+
+Given the sensitive nature of health-related data, it's important to consider your use cases carefully. In all cases, a human should be making decisions, assisted by the information the system returns, and there should be a way to review the source data and correct errors.
+
+## Don't use
+ - **Don't use for scenarios that use this service as a medical device, clinical support, or diagnostic tools to be used in the diagnosis, cure, mitigation, treatment or prevention of disease or other conditions without human intervention.** A qualified medical professional should always do due diligence and verify the source data regarding patient care decisions.
+ - **Don't use for scenarios that use personal health information without appropriate consent.** Health information has special protections that may require explicit consent for certain use. Make sure you have appropriate consent to use health data.
azure-health-insights Trial Matcher Modes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-health-insights/trial-matcher/trial-matcher-modes.md
+
+ Title: Trial Matcher modes
+
+description: This article explains the different modes of Trial Matcher
+++++ Last updated : 01/27/2023+++
+# Trial Matcher modes
+
+Trial Matcher provides two main modes of operation to users of the service: a **patient centric** mode and a **clinical trial centric** mode.
+
+In the following diagram, you can see how patients or clinical trials can be found through the two different modes.
+![Diagram that shows the Trial Matcher operation modes.](../media/trial-matcher/overview.png)
++
+## Patient centric
+
+**Patient centric** is when the Trial Matcher model matches a single patient to a set of relevant clinical trials that the patient appears to be qualified for. Patient centric is also known as the **one-to-many** use case.
+
+The Trial Matcher logic is based on the patient's **clinical health information**, **location**, **priorities**, **trial eligibility criteria**, and **other criteria** that the patient and/or service users may choose to prioritize.
+
+Typically, when using Trial Matcher in **patient centric** mode, the service user provides the patient data in one of the following data formats:
+- Gradual matching
+- Key-Value structure
+- FHIR bundle
+- Unstructured clinical note
++
+### Gradual matching
+Trial Matcher can be used to match patients with known structured medical information, or it can be used to collect the required medical information during the qualification process, which is known as Gradual matching.
+
+Gradual matching can be utilized through any client application. One common implementation is by using the [Azure Health Bot](/azure/health-bot/) to create a conversational mechanism for collecting information and qualifying patients.
+
+When performing gradual matching, the response of each call to the Trial Matcher includes the needed [clinical info](patient-info.md): health information derived from the subset of clinical trials found that is required to qualify the patient. This information should be captured from the user (for example, by generating a question and waiting for user input) and sent back to the Trial Matcher in the following request, to perform a more accurate qualification.
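+
+For illustration, a needed clinical info entry in the response uses the same coded form shown in the quickstart response, for example:
+
+```json
+"neededClinicalInfo": [
+  {
+    "system": "http://www.nlm.nih.gov/research/umls",
+    "code": "C1512162",
+    "name": "Eastern Cooperative Oncology Group"
+  }
+]
+```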
+++
+## Clinical trial centric
+
+**Clinical trial centric** is when the Trial Matcher model finds a group of potentially eligible patients for a clinical trial.
+The user should provide patient data and the relevant clinical trials to match against. The Trial Matcher then analyzes the data and provides the results per patient, whether they're eligible or ineligible.
+
+Clinical trial centric is also known as the **many-to-one** use case, and its extension is **many-to-many** when there's a list of clinical trials to match the patients to.
+The process of matching patients is typically done in two phases.
+- The first phase, done by the service user, starts with all patients in the data repository. The goal is to match all patients that meet baseline criteria, like a clinical condition.
+- In the second phase, the service user uses the Trial Matcher to input a subset of patients (the outcome of the first phase) and match only those patients to the detailed exclusion and inclusion criteria of a clinical trial.
+
+Typically, when using Trial Matcher in clinical trial centric mode, the service user provides the patient data in one of the following data formats:
+- Key-Value structure
+- FHIR bundle
+- Unstructured clinical note
++
+## Next steps
+
+For more information, see
+
+>[!div class="nextstepaction"]
+> [Patient info](patient-info.md)
+
+>[!div class="nextstepaction"]
+> [Model configuration](model-configuration.md)
+
+>[!div class="nextstepaction"]
+> [Inference information](inferences.md)
azure-maps Creator Facility Ontology https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/creator-facility-ontology.md
Title: Facility Ontology in Microsoft Azure Maps Creator description: Facility Ontology that describes the feature class definitions for Azure Maps Creator--++ Last updated 02/17/2023
Facility ontology defines how Azure Maps Creator internally stores facility data
:::zone pivot="facility-ontology-v1"
-The Facility 1.0 contains revisions for the Facility feature class definitions for [Azure Maps services](https://aka.ms/AzureMaps).
+The Facility 1.0 contains revisions for the Facility feature class definitions for [Azure Maps services].
:::zone-end :::zone pivot="facility-ontology-v2"
-The Facility 2.0 contains revisions for the Facility feature class definitions for [Azure Maps services](https://aka.ms/AzureMaps).
+The Facility 2.0 contains revisions for the Facility feature class definitions for [Azure Maps services].
:::zone-end
When importing a drawing package into Azure Maps Creator, these fields are autom
# [GeoJSON package (preview)](#tab/geojson)
-Support for creating a [dataset][datasetv20220901] from a GeoJSON package is now available as a new feature in preview in Azure Maps Creator.
+Support for creating a [dataset] from a GeoJSON package is now available as a new feature in preview in Azure Maps Creator.
-When importing a GeoJSON package, the `ID` and `Geometry` fields must be supplied with each [feature object][feature object] in each GeoJSON file in the package.
+When importing a GeoJSON package, the `ID` and `Geometry` fields must be supplied with each [feature object] in each GeoJSON file in the package.
| Property | Type | Required | Description | |-|--|-|-|
-|`Geometry` | object | true | Each Geometry object consists of a `type` and `coordinates` array. While a required field, the value can be set to `null`. For more information, see [Geometry Object][GeometryObject] in the GeoJSON (RFC 7946) format specification. |
+|`Geometry` | object | true | Each Geometry object consists of a `type` and `coordinates` array. While a required field, the value can be set to `null`. For more information, see [Geometry Object] in the GeoJSON (RFC 7946) format specification. |
|`ID` | string | true | The value of this field can be alphanumeric characters (0-9, a-z, A-Z), dots (.), hyphens (-) and underscores (_). Maximum length allowed is 1,000 characters.| :::image type="content" source="./media/creator-indoor-maps/geojson.png" alt-text="A screenshot showing the geometry and ID fields in a GeoJSON file.":::
-For more information, see [Create a dataset using a GeoJson package](how-to-dataset-geojson.md).
+For more information, see [Create a dataset using a GeoJson package].
The `unit` feature class defines a physical and non-overlapping area that can be
| Property | Type | Required | Description | |-|--|-|-|
-|`originalId` | string |false | When the dataset is created through the [conversion service][conversion], the original ID is automatically set to the Azure Maps internal ID. Maximum length allowed is 1,000 characters.|
+|`originalId` | string |false | When the dataset is created through the [conversion service], the original ID is automatically set to the Azure Maps internal ID. Maximum length allowed is 1,000 characters.|
|`externalId` | string |false | An ID used by the client to associate the feature with another feature in a different dataset, such as in an internal database. Maximum length allowed is 1,000 characters.|
-|`categoryId` | [category.Id](#category) |true | The ID of a [`category`](#category) feature.|
-|`isOpenArea` | boolean (Default value is `null`.) |false | Represents whether the unit is an open area. If set to `true`, [structures](#structure) don't surround the unit boundary, and a navigating agent can enter the `unit` without the need of an [`opening`](#opening). By default, units are surrounded by physical barriers and are open only where an opening feature is placed on the boundary of the unit. If walls are needed in an open area unit, they can be represented as a [`lineElement`](#lineelement) or [`areaElement`](#areaelement) with an `isObstruction` property equal to `true`.|
+|`categoryId` | [category.Id] |true | The ID of a [`category`] feature.|
+|`isOpenArea` | boolean (Default value is `null`.) |false | Represents whether the unit is an open area. If set to `true`, [structures] don't surround the unit boundary, and a navigating agent can enter the `unit` without the need of an [`opening`]. By default, units are surrounded by physical barriers and are open only where an opening feature is placed on the boundary of the unit. If walls are needed in an open area unit, they can be represented as a [`lineElement`] or [`areaElement`] with an `isObstruction` property equal to `true`.|
|`navigableBy` | enum ["pedestrian", "wheelchair", "machine", "bicycle", "automobile", "hiredAuto", "bus", "railcar", "emergency", "ferry", "boat"] | false |Indicates the types of navigating agents that can traverse the unit. If unspecified, the unit is assumed to be traversable by any navigating agent. | |`isRoutable` | boolean (Default value is `null`.) | false | Determines if the unit is part of the routing graph. If set to `true`, the unit can be used as source/destination or intermediate node in the routing experience. | |`routeThroughBehavior` | enum ["disallowed", "allowed", "preferred"] | false | Determines if navigating through the unit is allowed. If unspecified, it inherits its value from the category feature referred to in the `categoryId` property. If specified, it overrides the value given in its category feature." | |`nonPublic` | boolean| false | If `true`, the unit is navigable only by privileged users. Default value is `false`. |
-| `levelId` | [level.Id](#level) | true | The ID of a level feature. |
-|`occupants` | array of [directoryInfo.Id](#directoryinfo) | false | The IDs of [directoryInfo](#directoryinfo) features. Used to represent one or many occupants in the feature. |
-|`addressId` | [directoryInfo.Id](#directoryinfo) | false | The ID of a [directoryInfo](#directoryinfo) feature. Used to represent the address of the feature.|
-|`addressRoomNumber` | [directoryInfo.Id](#directoryinfo) | true | Room/Unit/Apartment/Suite number of the unit.|
+| `levelId` | [level.Id] | true | The ID of a level feature. |
+|`occupants` | array of [directoryInfo.Id] | false | The IDs of [directoryInfo] features. Used to represent one or many occupants in the feature. |
+|`addressId` | [directoryInfo.Id] | false | The ID of a [directoryInfo] feature. Used to represent the address of the feature.|
+|`addressRoomNumber` | [directoryInfo.Id] | true | Room/Unit/Apartment/Suite number of the unit.|
|`name` | string | false | Name of the feature in local language. Maximum length allowed is 1,000 characters. | |`nameSubtitle` | string | false | Subtitle that shows up under the `name` of the feature. Can be used to display the name in a different language, and so on. Maximum length allowed is 1,000 characters.| |`nameAlt` | string | false | Alternate name used for the feature. Maximum length allowed is 1,000 characters. |
-|`anchorPoint` | [Point][geojsonpoint] | false | [GeoJSON Point geometry][geojsonpoint] that represents the feature as a point. Can be used to position the label of the feature.|
+|`anchorPoint` | [Point] | false | [GeoJSON Point geometry] that represents the feature as a point. Can be used to position the label of the feature.|
:::zone-end
The `unit` feature class defines a physical and non-overlapping area that can be
| Property | Type | Required | Description | |-|--|-|-|
-|`originalId` | string |false | When the dataset is created through the [conversion service][conversion], the original ID is automatically set to the Azure Maps internal ID. When the [dataset][datasetv20220901] is created from a GeoJSON package, the original ID can be user defined. Maximum length allowed is 1,000 characters.|
+|`originalId` | string |false | When the dataset is created through the [conversion service], the original ID is automatically set to the Azure Maps internal ID. When the [dataset] is created from a GeoJSON package, the original ID can be user defined. Maximum length allowed is 1,000 characters.|
|`externalId` | string |false | An ID used by the client to associate the feature with another feature in a different dataset, such as in an internal database. Maximum length allowed is 1,000 characters.|
-|`categoryId` | [category.Id](#category) |true | The ID of a [`category`](#category) feature.|
-|`isOpenArea` | boolean (Default value is `null`.) |false | Represents whether the unit is an open area. If set to `true`, [structures](#structure) don't surround the unit boundary, and a navigating agent can enter the `unit` without the need of an [`opening`](#opening). By default, units are surrounded by physical barriers and are open only where an opening feature is placed on the boundary of the unit. If walls are needed in an open area unit, they can be represented as a [`lineElement`](#lineelement) or [`areaElement`](#areaelement) with an `isObstruction` property equal to `true`.|
+|`categoryId` | [category.Id] |true | The ID of a [`category`] feature.|
+|`isOpenArea` | boolean (Default value is `null`.) |false | Represents whether the unit is an open area. If set to `true`, [structures] don't surround the unit boundary, and a navigating agent can enter the `unit` without the need of an [`opening`]. By default, units are surrounded by physical barriers and are open only where an opening feature is placed on the boundary of the unit. If walls are needed in an open area unit, they can be represented as a [`lineElement`] or [`areaElement`] with an `isObstruction` property equal to `true`.|
|`isRoutable` | boolean (Default value is `null`.) | false | Determines if the unit is part of the routing graph. If set to `true`, the unit can be used as source/destination or intermediate node in the routing experience. |
-| `levelId` | [level.Id](#level) | true | The ID of a level feature. |
-|`occupants` | array of [directoryInfo.Id](#directoryinfo) | false | The IDs of [directoryInfo](#directoryinfo) features. Used to represent one or many occupants in the feature. |
-|`addressId` | [directoryInfo.Id](#directoryinfo) | false | The ID of a [directoryInfo](#directoryinfo) feature. Used to represent the address of the feature.|
+| `levelId` | [level.Id] | true | The ID of a level feature. |
+|`occupants` | array of [directoryInfo.Id] | false | The IDs of [directoryInfo] features. Used to represent one or many occupants in the feature. |
+|`addressId` | [directoryInfo.Id] | false | The ID of a [directoryInfo] feature. Used to represent the address of the feature.|
|`addressRoomNumber` | string | false | Room/Unit/Apartment/Suite number of the unit. Maximum length allowed is 1,000 characters.| |`name` | string | false | Name of the feature in local language. Maximum length allowed is 1,000 characters.| |`nameSubtitle` | string | false | Subtitle that shows up under the `name` of the feature. Can be used to display the name in a different language, and so on. Maximum length allowed is 1,000 characters.| |`nameAlt` | string | false | Alternate name used for the feature. Maximum length allowed is 1,000 characters.|
-|`anchorPoint` | [Point][geojsonpoint] | false | [GeoJSON Point geometry][geojsonpoint] that represents the feature as a point. Can be used to position the label of the feature.|
+|`anchorPoint` | [Point] | false | [GeoJSON Point geometry] that represents the feature as a point. Can be used to position the label of the feature.|
:::zone-end
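To make the `unit` property tables more concrete, the following is a minimal, hypothetical sketch of a `unit` feature written as a Python dictionary in GeoJSON form. The IDs, category name, and coordinates are placeholders, not values from a real dataset.

```python
# Hypothetical GeoJSON-style "unit" feature using the properties described above.
# All IDs, names, and coordinates are placeholders.
unit_feature = {
    "type": "Feature",
    "geometry": {
        "type": "Polygon",
        "coordinates": [[
            [-122.1300, 47.6300], [-122.1300, 47.6310],
            [-122.1290, 47.6310], [-122.1290, 47.6300],
            [-122.1300, 47.6300],
        ]],
    },
    "properties": {
        "originalId": "unit-101",             # user defined when created from a GeoJSON package
        "categoryId": "CTG-room-conference",  # ID of a category feature
        "levelId": "LVL-ground",              # ID of a level feature
        "isOpenArea": False,                  # unit is bounded by physical barriers
        "isRoutable": True,                   # unit participates in the routing graph
        "name": "Conference Room 101",
        "anchorPoint": {"type": "Point", "coordinates": [-122.1295, 47.6305]},
    },
}

print(unit_feature["properties"]["name"])
```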
The `structure` feature class defines a physical and non-overlapping area that c
| Property | Type | Required | Description | |-|--|-|-|
-|`originalId` | string |false | When the dataset is created through the [conversion service][conversion], the original ID is set to the Azure Maps internal ID. When the [dataset][datasetv20220901] is created from a GeoJSON package, the original ID can be user defined. Maximum length allowed is 1,000 characters.|
+|`originalId` | string |false | When the dataset is created through the [conversion service], the original ID is set to the Azure Maps internal ID. When the [dataset] is created from a GeoJSON package, the original ID can be user defined. Maximum length allowed is 1,000 characters.|
|`externalId` | string |false | An ID used by the client to associate the feature with another feature in a different dataset, such as in an internal database. Maximum length allowed is 1,000 characters.|
-|`categoryId` | [category.Id](#category) |true | The ID of a [`category`](#category) feature.|
-| `levelId` | [level.Id](#level) | true | The ID of a [`level`](#level) feature. |
+|`categoryId` | [category.Id] |true | The ID of a [`category`] feature.|
+| `levelId` | [level.Id] | true | The ID of a [`level`] feature. |
|`name` | string | false | Name of the feature in local language. Maximum length allowed is 1,000 characters. | |`nameSubtitle` | string | false | Subtitle that shows up under the `name` of the feature. Can be used to display the name in a different language, and so on. Maximum length allowed is 1,000 characters. | |`nameAlt` | string | false | Alternate name used for the feature. Maximum length allowed is 1,000 characters.|
-|`anchorPoint` | [Point][geojsonpoint] | false | [GeoJSON Point geometry][geojsonpoint] that represents the feature as a point. Can be used to position the label of the feature.|
+|`anchorPoint` | [Point] | false | [GeoJSON Point geometry] that represents the feature as a point. Can be used to position the label of the feature.|
:::zone-end
The `zone` feature class defines a virtual area, like a WiFi zone or emergency a
| Property | Type | Required | Description | |-|--|-|-|
-|`originalId` | string |false | When the dataset is created through the [conversion service][conversion], the original ID is automatically set to the Azure Maps internal ID. Maximum length allowed is 1,000 characters.|
+|`originalId` | string |false | When the dataset is created through the [conversion service], the original ID is automatically set to the Azure Maps internal ID. Maximum length allowed is 1,000 characters.|
|`externalId` | string |false | An ID used by the client to associate the feature with another feature in a different dataset, such as in an internal database. Maximum length allowed is 1,000 characters.|
-|`categoryId` | [category.Id](#category) |true | The ID of a [`category`](#category) feature.|
+|`categoryId` | [category.Id] |true | The ID of a [`category`] feature.|
| `setId` | string | true |Required for zone features that represent multi-level zones. The `setId` is the unique ID for a zone that spans multiple levels. The `setId` enables a zone with varying coverage on different floors to be represented with different geometry on different levels. The `setId` can be any string and is case-sensitive. It's recommended that the `setId` is a GUID. Maximum length allowed is 1,000 characters.|
-| `levelId` | [level.Id](#level) | true | The ID of a [`level`](#level) feature. |
+| `levelId` | [level.Id] | true | The ID of a [`level`] feature. |
|`name` | string | false | Name of the feature in local language. Maximum length allowed is 1,000 characters.| |`nameSubtitle` | string | false | Subtitle that shows up under the `name` of the feature. Can be used to display the name in a different language, and so on. Maximum length allowed is 1,000 characters.| |`nameAlt` | string | false | Alternate name used for the feature. Maximum length allowed is 1,000 characters. |
-|`anchorPoint` | [Point][geojsonpoint] | false | [GeoJSON Point geometry][geojsonpoint] that represents the feature as a point. Can be used to position the label of the feature.|
+|`anchorPoint` | [Point] | false | [GeoJSON Point geometry] that represents the feature as a point. Can be used to position the label of the feature.|
:::zone-end
The `zone` feature class defines a virtual area, like a WiFi zone or emergency a
| Property | Type | Required | Description | |-|--|-|-|
-|`originalId` | string |false | When the dataset is created through the [conversion service][conversion], the original ID is set to the Azure Maps internal ID. When the [dataset][datasetv20220901] is created from a GeoJSON package, the original ID can be user defined. Maximum length allowed is 1,000 characters.|
+|`originalId` | string |false | When the dataset is created through the [conversion service], the original ID is set to the Azure Maps internal ID. When the [dataset] is created from a GeoJSON package, the original ID can be user defined. Maximum length allowed is 1,000 characters.|
|`externalId` | string |false | An ID used by the client to associate the feature with another feature in a different dataset, such as in an internal database. Maximum length allowed is 1,000 characters.|
-|`categoryId` | [category.Id](#category) |true | The ID of a [`category`](#category) feature.|
+|`categoryId` | [category.Id] |true | The ID of a [`category`] feature.|
| `setId` | string | true |Required for zone features that represent multi-level zones. The `setId` is the unique ID for a zone that spans multiple levels. The `setId` enables a zone with varying coverage on different floors to be represented with different geometry on different levels. The `setId` can be any string and is case-sensitive. It's recommended that the `setId` is a GUID. Maximum length allowed is 1,000 characters.|
-| `levelId` | [level.Id](#level) | true | The ID of a [`level`](#level) feature. |
+| `levelId` | [level.Id] | true | The ID of a [`level`] feature. |
|`name` | string | false | Name of the feature in local language. Maximum length allowed is 1,000 characters.| |`nameSubtitle` | string | false | Subtitle that shows up under the `name` of the feature. Can be used to display the name in a different language, and so on. Maximum length allowed is 1,000 characters.| |`nameAlt` | string | false | Alternate name used for the feature. Maximum length allowed is 1,000 characters. |
-|`anchorPoint` | [Point][geojsonpoint] | false | [GeoJSON Point geometry][geojsonpoint] that represents the feature as a point. Can be used to position the label of the feature.|
+|`anchorPoint` | [Point] | false | [GeoJSON Point geometry] that represents the feature as a point. Can be used to position the label of the feature.|
:::zone-end ## level
-The `level` class feature defines an area of a building at a set elevation. For example, the floor of a building, which contains a set of features, such as [`units`](#unit).
+The `level` class feature defines an area of a building at a set elevation. For example, the floor of a building, which contains a set of features, such as [`units`].
**Geometry Type**: MultiPolygon
The `level` class feature defines an area of a building at a set elevation. For
| Property | Type | Required | Description | |-|--|-|-|
-|`originalId` | string |false | When the dataset is created through the [conversion service][conversion], the original ID is automatically set to the Azure Maps internal ID. Maximum length allowed is 1,000 characters.|
+|`originalId` | string |false | When the dataset is created through the [conversion service], the original ID is automatically set to the Azure Maps internal ID. Maximum length allowed is 1,000 characters.|
|`externalId` | string |false | An ID used by the client to associate the feature with another feature in a different dataset, such as in an internal database. Maximum length allowed is 1,000 characters.|
-|`facilityId` | [facility.Id](#facility) |true | The ID of a [`facility`](#facility) feature.|
-| `ordinal` | integer | true | The level number. Used by the [`verticalPenetration`](#verticalpenetration) feature to determine the relative order of the floors to help with travel direction. The general practice is to start with 0 for the ground floor. Add +1 for every floor upwards, and -1 for every floor going down. It can be modeled with any numbers, as long as the higher physical floors are represented by higher ordinal values. |
+|`facilityId` | [facility.Id] |true | The ID of a [`facility`] feature.|
+| `ordinal` | integer | true | The level number. Used by the [`verticalPenetration`] feature to determine the relative order of the floors to help with travel direction. The general practice is to start with 0 for the ground floor. Add +1 for every floor upwards, and -1 for every floor going down. It can be modeled with any numbers, as long as the higher physical floors are represented by higher ordinal values. |
| `abbreviatedName` | string | false | A four-character abbreviated level name, like what would be found on an elevator button. |
-| `heightAboveFacilityAnchor` | double | false | Vertical distance of the level's floor above [`facility.anchorHeightAboveSeaLevel`](#facility), in meters. |
-| `verticalExtent` | double | false | Vertical extent of the level, in meters. If not provided, defaults to [`facility.defaultLevelVerticalExtent`](#facility).|
+| `heightAboveFacilityAnchor` | double | false | Vertical distance of the level's floor above [`facility.anchorHeightAboveSeaLevel`], in meters. |
+| `verticalExtent` | double | false | Vertical extent of the level, in meters. If not provided, defaults to [`facility.defaultLevelVerticalExtent`].|
|`name` | string | false | Name of the feature in local language. Maximum length allowed is 1,000 characters.| |`nameSubtitle` | string | false | Subtitle that shows up under the `name` of the feature. Can be used to display the name in a different language, and so on. Maximum length allowed is 1,000 characters.| |`nameAlt` | string | false | Alternate name used for the feature. Maximum length allowed is 1,000 characters.|
-|`anchorPoint` | [Point][geojsonpoint] | false | [GeoJSON Point geometry][geojsonpoint] that represents the feature as a point. Can be used to position the label of the feature.|
+|`anchorPoint` | [Point] | false | [GeoJSON Point geometry] that represents the feature as a point. Can be used to position the label of the feature.|
:::zone-end
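The `ordinal` convention described above (0 for the ground floor, +1 for each floor up, -1 for each floor down) can be illustrated with a few hypothetical `level` property objects; all IDs and heights are placeholders.

```python
# Hypothetical "level" properties illustrating the ordinal convention:
# higher physical floors get higher ordinal values.
levels = [
    {"originalId": "level-b1", "facilityId": "FCL-1", "ordinal": -1,
     "abbreviatedName": "B1", "heightAboveFacilityAnchor": -4.0},
    {"originalId": "level-g",  "facilityId": "FCL-1", "ordinal": 0,
     "abbreviatedName": "G",   "heightAboveFacilityAnchor": 0.0},
    {"originalId": "level-1",  "facilityId": "FCL-1", "ordinal": 1,
     "abbreviatedName": "1",   "heightAboveFacilityAnchor": 4.5},
]

# Sorting by ordinal lists the levels from lowest to highest floor.
for level in sorted(levels, key=lambda lvl: lvl["ordinal"]):
    print(level["abbreviatedName"], level["ordinal"])
```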
The `level` class feature defines an area of a building at a set elevation. For
| Property | Type | Required | Description | |-|--|-|-|
-|`originalId` | string |false | When the dataset is created through the [conversion service][conversion], the original ID is set to the Azure Maps internal ID. When the [dataset][datasetv20220901] is created from a GeoJSON package, the original ID can be user defined. Maximum length allowed is 1,000 characters.|
+|`originalId` | string |false | When the dataset is created through the [conversion service], the original ID is set to the Azure Maps internal ID. When the [dataset] is created from a GeoJSON package, the original ID can be user defined. Maximum length allowed is 1,000 characters.|
|`externalId` | string |false | An ID used by the client to associate the feature with another feature in a different dataset, such as in an internal database. Maximum length allowed is 1,000 characters.|
-|`facilityId` | [facility.Id](#facility) |true | The ID of a [`facility`](#facility) feature.|
-| `ordinal` | integer | true | The level number. Used by the [`verticalPenetration`](#verticalpenetration) feature to determine the relative order of the floors to help with travel direction. The general practice is to start with 0 for the ground floor. Add +1 for every floor upwards, and -1 for every floor going down. It can be modeled with any numbers, as long as the higher physical floors are represented by higher ordinal values. |
+|`facilityId` | [facility.Id] |true | The ID of a [`facility`] feature.|
+| `ordinal` | integer | true | The level number. Used by the [`verticalPenetration`] feature to determine the relative order of the floors to help with travel direction. The general practice is to start with 0 for the ground floor. Add +1 for every floor upwards, and -1 for every floor going down. It can be modeled with any numbers, as long as the higher physical floors are represented by higher ordinal values. |
| `abbreviatedName` | string | false | A four-character abbreviated level name, like what would be found on an elevator button.|
-| `heightAboveFacilityAnchor` | double | false | Vertical distance of the level's floor above [`facility.anchorHeightAboveSeaLevel`](#facility), in meters. |
-| `verticalExtent` | double | false | Vertical extent of the level, in meters. If not provided, defaults to [`facility.defaultLevelVerticalExtent`](#facility).|
+| `heightAboveFacilityAnchor` | double | false | Vertical distance of the level's floor above [`facility.anchorHeightAboveSeaLevel`], in meters. |
+| `verticalExtent` | double | false | Vertical extent of the level, in meters. If not provided, defaults to [`facility.defaultLevelVerticalExtent`].|
|`name` | string | false | Name of the feature in local language. Maximum length allowed is 1,000 characters.| |`nameSubtitle` | string | false | Subtitle that shows up under the `name` of the feature. Can be used to display the name in a different language, and so on. Maximum length allowed is 1,000 characters.| |`nameAlt` | string | false | Alternate name used for the feature. Maximum length allowed is 1,000 characters.|
-|`anchorPoint` | [Point][geojsonpoint] | false | [GeoJSON Point geometry][geojsonpoint] that represents the feature as a point. Can be used to position the label of the feature.|
+|`anchorPoint` | [Point] | false | [GeoJSON Point geometry] that represents the feature as a point. Can be used to position the label of the feature.|
:::zone-end
The `facility` feature class defines the area of the site, building footprint, a
| Property | Type | Required | Description | |-|--|-|-|
-|`originalId` | string |false | When the dataset is created through the [conversion service][conversion], the original ID is automatically set to the Azure Maps internal ID. Maximum length allowed is 1,000 characters.|
+|`originalId` | string |false | When the dataset is created through the [conversion service], the original ID is automatically set to the Azure Maps internal ID. Maximum length allowed is 1,000 characters.|
|`externalId` | string |false | An ID used by the client to associate the feature with another feature in a different dataset, such as in an internal database. Maximum length allowed is 1,000 characters.|
-|`categoryId` | [category.Id](#category) |true | The ID of a [`category`](#category) feature.|
-|`occupants` | array of [directoryInfo.Id](#directoryinfo) | false | The IDs of [directoryInfo](#directoryinfo) features. Used to represent one or many occupants in the feature. |
-|`addressId` | [directoryInfo.Id](#directoryinfo) | true | The ID of a [directoryInfo](#directoryinfo) feature. Used to represent the address of the feature.|
+|`categoryId` | [category.Id] |true | The ID of a [`category`] feature.|
+|`occupants` | array of [directoryInfo.Id] | false | The IDs of [directoryInfo] features. Used to represent one or many occupants in the feature. |
+|`addressId` | [directoryInfo.Id] | true | The ID of a [directoryInfo] feature. Used to represent the address of the feature.|
|`name` | string | false | Name of the feature in local language. Maximum length allowed is 1,000 characters. | |`nameSubtitle` | string | false | Subtitle that shows up under the `name` of the feature. Can be used to display the name in a different language, and so on. Maximum length allowed is 1,000 characters. | |`nameAlt` | string | false | Alternate name used for the feature. Maximum length allowed is 1,000 characters.|
-|`anchorPoint` | [Point][geojsonpoint] | false | [GeoJSON Point geometry][geojsonpoint] that represents the feature as a point. Can be used to position the label of the feature.|
+|`anchorPoint` | [Point] | false | [GeoJSON Point geometry] that represents the feature as a point. Can be used to position the label of the feature.|
|`anchorHeightAboveSeaLevel` | double | false | Height of anchor point above sea level, in meters. Sea level is defined by EGM 2008.| |`defaultLevelVerticalExtent` | double| false | Default value for vertical extent of levels, in meters.|
The `facility` feature class defines the area of the site, building footprint, a
| Property | Type | Required | Description | |-|--|-|-|
-|`originalId` | string |false | When the dataset is created through the [conversion service][conversion], the original ID is set to the Azure Maps internal ID. When the [dataset][datasetv20220901] is created from a GeoJSON package, the original ID can be user defined. Maximum length allowed is 1,000 characters.|
+|`originalId` | string |false | When the dataset is created through the [conversion service], the original ID is set to the Azure Maps internal ID. When the [dataset] is created from a GeoJSON package, the original ID can be user defined. Maximum length allowed is 1,000 characters.|
|`externalId` | string |false | An ID used by the client to associate the feature with another feature in a different dataset, such as in an internal database. Maximum length allowed is 1,000 characters.|
-|`categoryId` | [category.Id](#category) |true | The ID of a [`category`](#category) feature.|
-|`occupants` | array of [directoryInfo.Id](#directoryinfo) | false | The IDs of [directoryInfo](#directoryinfo) features. Used to represent one or many occupants in the feature. |
-|`addressId` | [directoryInfo.Id](#directoryinfo) | true | The ID of a [directoryInfo](#directoryinfo) feature. Used to represent the address of the feature.|
+|`categoryId` | [category.Id] |true | The ID of a [`category`] feature.|
+|`occupants` | array of [directoryInfo.Id] | false | The IDs of [directoryInfo] features. Used to represent one or many occupants in the feature. |
+|`addressId` | [directoryInfo.Id] | true | The ID of a [directoryInfo] feature. Used to represent the address of the feature.|
|`name` | string | false | Name of the feature in local language. Maximum length allowed is 1,000 characters. | |`nameSubtitle` | string | false | Subtitle that shows up under the `name` of the feature. Can be used to display the name in a different language, and so on. Maximum length allowed is 1,000 characters. | |`nameAlt` | string | false | Alternate name used for the feature. Maximum length allowed is 1,000 characters.|
-|`anchorPoint` | [Point][geojsonpoint] | false | [GeoJSON Point geometry][geojsonpoint] that represents the feature as a point. Can be used to position the label of the feature.|
+|`anchorPoint` | [Point] | false | [GeoJSON Point geometry] that represents the feature as a point. Can be used to position the label of the feature.|
|`anchorHeightAboveSeaLevel` | double | false | Height of anchor point above sea level, in meters. Sea level is defined by EGM 2008.| |`defaultLevelVerticalExtent` | double| false | Default value for vertical extent of levels, in meters.|
The `verticalPenetration` class feature defines an area that, when used in a set
| Property | Type | Required | Description | |-|--|-|-|
-|`originalId` | string |false | When the dataset is created through the [conversion service][conversion], the original ID is automatically set to the Azure Maps internal ID. Maximum length allowed is 1,000 characters.|
+|`originalId` | string |false | When the dataset is created through the [conversion service], the original ID is automatically set to the Azure Maps internal ID. Maximum length allowed is 1,000 characters.|
|`externalId` | string |false | An ID used by the client to associate the feature with another feature in a different dataset, such as in an internal database. Maximum length allowed is 1,000 characters.|
-|`categoryId` | [category.Id](#category) |true | The ID of a [`category`](#category) feature.|
+|`categoryId` | [category.Id] |true | The ID of a [`category`] feature.|
| `setId` | string | true | Vertical penetration features must be used in sets to connect multiple levels. Vertical penetration features in the same set are considered to be the same. The `setId` can be any string, and is case-sensitive. Using a GUID as a `setId` is recommended. Maximum length allowed is 1,000 characters.|
-| `levelId` | [level.Id](#level) | true | The ID of a level feature. |
-|`direction` | string enum [ "both", "lowToHigh", "highToLow", "closed" ]| false | Travel direction allowed on this feature. The ordinal attribute on the [`level`](#level) feature is used to determine the low and high order.|
+| `levelId` | [level.Id] | true | The ID of a level feature. |
+|`direction` | string enum [ "both", "lowToHigh", "highToLow", "closed" ]| false | Travel direction allowed on this feature. The ordinal attribute on the [`level`] feature is used to determine the low and high order.|
|`navigableBy` | enum ["pedestrian", "wheelchair", "machine", "bicycle", "automobile", "hiredAuto", "bus", "railcar", "emergency", "ferry", "boat"] | false |Indicates the types of navigating agents that can traverse the unit. If unspecified, the unit is traversable by any navigating agent. | |`nonPublic` | boolean| false | If `true`, the unit is navigable only by privileged users. Default value is `false`. | |`name` | string | false | Name of the feature in local language. Maximum length allowed is 1,000 characters.| |`nameSubtitle` | string | false | Subtitle that shows up under the `name` of the feature. Can be used to display the name in a different language, and so on. Maximum length allowed is 1,000 characters.| |`nameAlt` | string | false | Alternate name used for the feature. Maximum length allowed is 1,000 characters. |
-|`anchorPoint` | [Point][geojsonpoint] | false | [GeoJSON Point geometry][geojsonpoint] that represents the feature as a point. Can be used to position the label of the feature.|
+|`anchorPoint` | [Point] | false | [GeoJSON Point geometry] that represents the feature as a point. Can be used to position the label of the feature.|
:::zone-end
The `verticalPenetration` class feature defines an area that, when used in a set
| Property | Type | Required | Description | |-|--|-|-|
-|`originalId` | string |false | When the dataset is created through the [conversion service][conversion], the original ID is set to the Azure Maps internal ID. When the [dataset][datasetv20220901] is created from a GeoJSON package, the original ID can be user defined. Maximum length allowed is 1,000 characters.|
+|`originalId` | string |false | When the dataset is created through the [conversion service], the original ID is set to the Azure Maps internal ID. When the [dataset] is created from a GeoJSON package, the original ID can be user defined. Maximum length allowed is 1,000 characters.|
|`externalId` | string |false | An ID used by the client to associate the feature with another feature in a different dataset, such as in an internal database. Maximum length allowed is 1,000 characters.|
-|`categoryId` | [category.Id](#category) |true | The ID of a [`category`](#category) feature.|
+|`categoryId` | [category.Id] |true | The ID of a [`category`] feature.|
| `setId` | string | true | Vertical penetration features must be used in sets to connect multiple levels. Vertical penetration features in the same set are connected. The `setId` can be any string, and is case-sensitive. Using a GUID as a `setId` is recommended. Maximum length allowed is 1,000 characters. |
-| `levelId` | [level.Id](#level) | true | The ID of a level feature. |
-|`direction` | string enum [ "both", "lowToHigh", "highToLow", "closed" ]| false | Travel direction allowed on this feature. The ordinal attribute on the [`level`](#level) feature is used to determine the low and high order.|
+| `levelId` | [level.Id] | true | The ID of a level feature. |
+|`direction` | string enum [ "both", "lowToHigh", "highToLow", "closed" ]| false | Travel direction allowed on this feature. The ordinal attribute on the [`level`] feature is used to determine the low and high order.|
|`name` | string | false | Name of the feature in local language. Maximum length allowed is 1,000 characters.| |`nameSubtitle` | string | false | Subtitle that shows up under the `name` of the feature. Can be used to display the name in a different language, and so on. Maximum length allowed is 1,000 characters.| |`nameAlt` | string | false | Alternate name used for the feature. Maximum length allowed is 1,000 characters. |
-|`anchorPoint` | [Point][geojsonpoint] | false | [GeoJSON Point geometry][geojsonpoint] that represents the feature as a point. Can be used to position the label of the feature.|
+|`anchorPoint` | [Point] | false | [GeoJSON Point geometry] that represents the feature as a point. Can be used to position the label of the feature.|
:::zone-end
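As a sketch of the `setId` behavior described above, two hypothetical `verticalPenetration` features, one per level, share the same `setId` so that they form one connected stairwell. The IDs are placeholders.

```python
import uuid

# One shared setId connects the stairwell features that appear on each level.
stairwell_set_id = str(uuid.uuid4())  # a GUID is recommended for setId

stairs_ground = {
    "categoryId": "CTG-stairs",
    "setId": stairwell_set_id,
    "levelId": "LVL-ground",
    "direction": "both",  # travel allowed both up and down
}
stairs_first = {
    "categoryId": "CTG-stairs",
    "setId": stairwell_set_id,
    "levelId": "LVL-1",
    "direction": "both",
}

# Features with the same setId are treated as the same vertical penetration.
assert stairs_ground["setId"] == stairs_first["setId"]
```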
The `opening` class feature defines a traversable boundary between two units, or
| Property | Type | Required | Description | |-|--|-|-|
-|`originalId` | string |false | When the dataset is created through the [conversion service][conversion], the original ID is automatically set to the Azure Maps internal ID. Maximum length allowed is 1,000 characters.|
+|`originalId` | string |false | When the dataset is created through the [conversion service], the original ID is automatically set to the Azure Maps internal ID. Maximum length allowed is 1,000 characters.|
|`externalId` | string |false | An ID used by the client to associate the feature with another feature in a different dataset, such as in an internal database. Maximum length allowed is 1,000 characters.|
-|`categoryId` |[category.Id](#category) |true | The ID of a category feature.|
-| `levelId` | [level.Id](#level) | true | The ID of a level feature. |
+|`categoryId` |[category.Id] |true | The ID of a category feature.|
+| `levelId` | [level.Id] | true | The ID of a level feature. |
| `isConnectedToVerticalPenetration` | boolean | false | Whether or not this feature is connected to a `verticalPenetration` feature on one of its sides. Default value is `false`. | |`navigableBy` | enum ["pedestrian", "wheelchair", "machine", "bicycle", "automobile", "hiredAuto", "bus", "railcar", "emergency", "ferry", "boat"] | false |Indicates the types of navigating agents that can traverse the unit. If unspecified, the unit is traversable by any navigating agent. | | `accessRightToLeft`| enum [ "prohibited", "digitalKey", "physicalKey", "keyPad", "guard", "ticket", "fingerprint", "retina", "voice", "face", "palm", "iris", "signature", "handGeometry", "time", "ticketChecker", "other"] | false | Method of access when passing through the opening from right to left. Left and right are determined by the vertices in the feature geometry, standing at the first vertex and facing the second vertex. Omitting this property means there are no access restrictions.| | `accessLeftToRight`| enum [ "prohibited", "digitalKey", "physicalKey", "keyPad", "guard", "ticket", "fingerprint", "retina", "voice", "face", "palm", "iris", "signature", "handGeometry", "time", "ticketChecker", "other"] | false | Method of access when passing through the opening from left to right. Left and right are determined by the vertices in the feature geometry, standing at the first vertex and facing the second vertex. Omitting this property means there are no access restrictions.| | `isEmergency` | boolean | false | If `true`, the opening is navigable only during emergencies. Default value is `false` |
-|`anchorPoint` | [Point][geojsonpoint] | false | [GeoJSON Point geometry][geojsonpoint] y that represents the feature as a point. Can be used to position the label of the feature.|
+|`anchorPoint` | [Point] | false | [GeoJSON Point geometry] that represents the feature as a point. Can be used to position the label of the feature.|
:::zone-end
The `opening` class feature defines a traversable boundary between two units, or
| Property | Type | Required | Description | |-|--|-|-|
-|`originalId` | string |false | When the dataset is created through the [conversion service][conversion], the original ID is set to the Azure Maps internal ID. When the [dataset][datasetv20220901] is created from a GeoJSON package, the original ID can be user defined. Maximum length allowed is 1,000 characters.|
+|`originalId` | string |false | When the dataset is created through the [conversion service], the original ID is set to the Azure Maps internal ID. When the [dataset] is created from a GeoJSON package, the original ID can be user defined. Maximum length allowed is 1,000 characters.|
|`externalId` | string |false | An ID used by the client to associate the feature with another feature in a different dataset, such as in an internal database. Maximum length allowed is 1,000 characters.|
-|`categoryId` |[category.Id](#category) |true | The ID of a category feature.|
-| `levelId` | [level.Id](#level) | true | The ID of a level feature. |
-|`anchorPoint` |[Point][geojsonpoint] | false | [GeoJSON Point geometry][geojsonpoint] that represents the feature as a point. Can be used to position the label of the feature.|
+|`categoryId` |[category.Id] |true | The ID of a category feature.|
+| `levelId` | [level.Id] | true | The ID of a level feature. |
+|`anchorPoint` |[Point] | false | [GeoJSON Point geometry] that represents the feature as a point. Can be used to position the label of the feature.|
:::zone-end
The `directoryInfo` object class feature defines the name, address, phone number
| Property | Type | Required | Description | |-|--|-|-|
-|`originalId` | string |false | When the dataset is created through the [conversion service][conversion], the original ID is automatically set to the Azure Maps internal ID. Maximum length allowed is 1,000 characters.|
+|`originalId` | string |false | When the dataset is created through the [conversion service], the original ID is automatically set to the Azure Maps internal ID. Maximum length allowed is 1,000 characters.|
|`externalId` | string |false | An ID used by the client to associate the feature with another feature in a different dataset, such as in an internal database. Maximum length allowed is 1,000 characters.| |`streetAddress` |string |false |Street address part of the address. Maximum length allowed is 1,000 characters. | |`unit` |string |false |Unit number part of the address. Maximum length allowed is 1,000 characters. |
The `directoryInfo` object class feature defines the name, address, phone number
|`nameAlt` | string | false | Alternate name used for the feature. Maximum length allowed is 1,000 characters. | |`phoneNumber` | string | false | Phone number. Maximum length allowed is 1,000 characters. | |`website` | string | false | Website URL. Maximum length allowed is 1,000 characters. |
-|`hoursOfOperation` | string | false | Hours of operation as text, following the [Open Street Map specification](https://wiki.openstreetmap.org/wiki/Key:opening_hours/specification). Maximum length allowed is 1,000 characters. |
+|`hoursOfOperation` | string | false | Hours of operation as text, following the [Open Street Map specification]. Maximum length allowed is 1,000 characters. |
:::zone-end
The `directoryInfo` object class feature defines the name, address, phone number
| Property | Type | Required | Description | |-|--|-|-|
-|`originalId` | string |false | When the dataset is created through the [conversion service][conversion], the original ID is set to the Azure Maps internal ID. When the [dataset][datasetv20220901] is created from a GeoJSON package, the original ID can be user defined. Maximum length allowed is 1,000 characters.|
+|`originalId` | string |false | When the dataset is created through the [conversion service], the original ID is set to the Azure Maps internal ID. When the [dataset] is created from a GeoJSON package, the original ID can be user defined. Maximum length allowed is 1,000 characters.|
|`externalId` | string |false | An ID used by the client to associate the feature with another feature in a different dataset, such as in an internal database. Maximum length allowed is 1,000 characters.| |`streetAddress` |string |false |Street address part of the address. Maximum length allowed is 1,000 characters. | |`unit` |string |false |Unit number part of the address. Maximum length allowed is 1,000 characters. |
The `directoryInfo` object class feature defines the name, address, phone number
|`nameAlt` | string | false | Alternate name used for the feature. Maximum length allowed is 1,000 characters. | |`phoneNumber` | string | false | Phone number. Maximum length allowed is 1,000 characters. | |`website` | string | false | Website URL. Maximum length allowed is 1,000 characters. |
-|`hoursOfOperation` | string | false | Hours of operation as text, following the [Open Street Map specification][Open Street Map specification]. Maximum length allowed is 1,000 characters. |
+|`hoursOfOperation` | string | false | Hours of operation as text, following the [Open Street Map specification]. Maximum length allowed is 1,000 characters. |
:::zone-end
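For illustration, here's a hypothetical `directoryInfo` feature that uses the properties above. The `hoursOfOperation` string follows the Open Street Map opening_hours syntax; every other value is a placeholder.

```python
# Hypothetical "directoryInfo" feature properties; all values are placeholders.
directory_info = {
    "originalId": "dir-contoso-cafe",
    "name": "Contoso Cafe",
    "streetAddress": "1 Contoso Way",
    "unit": "Suite 100",
    "phoneNumber": "+1 555 0100",
    "website": "https://www.contoso.com",
    # Open Street Map opening_hours syntax: weekdays 08:00-17:30.
    "hoursOfOperation": "Mo-Fr 08:00-17:30",
}

print(directory_info["hoursOfOperation"])
```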
The `pointElement` is a class feature that defines a point feature in a unit, su
| Property | Type | Required | Description | |-|--|-|-|
-|`originalId` | string |false | When the dataset is created through the [conversion service][conversion], the original ID is automatically set to the Azure Maps internal ID. Maximum length allowed is 1,000 characters.|
+|`originalId` | string |false | When the dataset is created through the [conversion service], the original ID is automatically set to the Azure Maps internal ID. Maximum length allowed is 1,000 characters.|
|`externalId` | string |false | An ID used by the client to associate the feature with another feature in a different dataset, such as in an internal database. Maximum length allowed is 1,000 characters.|
-|`categoryId` |[category.Id](#category) |true | The ID of a [`category`](#category) feature.|
-| `unitId` | string | true | The ID of a [`unit`](#unit) feature containing this feature. Maximum length allowed is 1,000 characters.|
+|`categoryId` |[category.Id] |true | The ID of a [`category`] feature.|
+| `unitId` | string | true | The ID of a [`unit`] feature containing this feature. Maximum length allowed is 1,000 characters.|
| `isObstruction` | boolean (Default value is `null`.) | false | If `true`, this feature represents an obstruction to be avoided while routing through the containing unit feature. | |`name` | string | false | Name of the feature in local language. Maximum length allowed is 1,000 characters.| |`nameSubtitle` | string | false | Subtitle that shows up under the `name` of the feature. Can be used to display the name in a different language, and so on. Maximum length allowed is 1,000 characters. |
The `pointElement` is a class feature that defines a point feature in a unit, su
| Property | Type | Required | Description | |-|--|-|-|
-|`originalId` | string |false | When the dataset is created through the [conversion service][conversion], the original ID is set to the Azure Maps internal ID. When the [dataset][datasetv20220901] is created from a GeoJSON package, the original ID can be user defined. Maximum length allowed is 1,000 characters.|
+|`originalId` | string |false | When the dataset is created through the [conversion service], the original ID is set to the Azure Maps internal ID. When the [dataset] is created from a GeoJSON package, the original ID can be user defined. Maximum length allowed is 1,000 characters.|
|`externalId` | string |false | An ID used by the client to associate the feature with another feature in a different dataset, such as in an internal database. Maximum length allowed is 1,000 characters.|
-|`categoryId` |[category.Id](#category) |true | The ID of a [`category`](#category) feature.|
-| `unitId` | string | true | The ID of a [`unit`](#unit) feature containing this feature. Maximum length allowed is 1,000 characters.|
+|`categoryId` |[category.Id] |true | The ID of a [`category`] feature.|
+| `unitId` | string | true | The ID of a [`unit`] feature containing this feature. Maximum length allowed is 1,000 characters.|
| `isObstruction` | boolean (Default value is `null`.) | false | If `true`, this feature represents an obstruction to be avoided while routing through the containing unit feature. | |`name` | string | false | Name of the feature in local language. Maximum length allowed is 1,000 characters.| |`nameSubtitle` | string | false | Subtitle that shows up under the `name` of the feature. Can be used to display the name in a different language, and so on. Maximum length allowed is 1,000 characters. |
The `lineElement` is a class feature that defines a line feature in a unit, such
| Property | Type | Required | Description | |-|--|-|-|
-|`originalId` | string |false | When the dataset is created through the [conversion service][conversion], the original ID is automatically set to the Azure Maps internal ID. Maximum length allowed is 1,000 characters.|
+|`originalId` | string |false | When the dataset is created through the [conversion service], the original ID is automatically set to the Azure Maps internal ID. Maximum length allowed is 1,000 characters.|
|`externalId` | string |false | An ID used by the client to associate the feature with another feature in a different dataset, such as in an internal database. Maximum length allowed is 1,000 characters.|
-|`categoryId` |[category.Id](#category) |true | The ID of a [`category`](#category) feature.|
-| `unitId` | [`unitId`](#unit) | true | The ID of a [`unit`](#unit) feature containing this feature. |
+|`categoryId` |[category.Id] |true | The ID of a [`category`] feature.|
+| `unitId` | [`unitId`] | true | The ID of a [`unit`] feature containing this feature. |
| `isObstruction` | boolean (Default value is `null`.)| false | If `true`, this feature represents an obstruction to be avoided while routing through the containing unit feature. | |`name` | string | false | Name of the feature in local language. Maximum length allowed is 1,000 characters. | |`nameSubtitle` | string | false | Subtitle that shows up under the `name` of the feature. Can be used to display the name in a different language, and so on. Maximum length allowed is 1,000 characters. | |`nameAlt` | string | false | Alternate name used for the feature. Maximum length allowed is 1,000 characters. |
-|`anchorPoint` | [Point][geojsonpoint] | false | [GeoJSON Point geometry][geojsonpoint] that represents the feature as a point. Can be used to position the label of the feature.|
-|`obstructionArea` | [Polygon][GeoJsonPolygon] or [MultiPolygon][MultiPolygon] | false | A simplified geometry (when the line geometry is complicated) of the feature that is to be avoided during routing. Requires `isObstruction` set to true.|
+|`anchorPoint` | [Point] | false | [GeoJSON Point geometry] that represents the feature as a point. Can be used to position the label of the feature.|
+|`obstructionArea` | [Polygon] or [MultiPolygon] | false | A simplified geometry (when the line geometry is complicated) of the feature that is to be avoided during routing. Requires `isObstruction` set to true.|
:::zone-end
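The obstruction pattern described above can be sketched as a hypothetical `lineElement` that represents a wall inside an open-area unit, with a simplified `obstructionArea` polygon for routing to avoid. Coordinates and IDs are placeholders.

```python
# Hypothetical "lineElement" representing a wall inside an open-area unit.
# Routing avoids the simplified obstructionArea when isObstruction is true.
wall_element = {
    "type": "Feature",
    "geometry": {
        "type": "LineString",
        "coordinates": [[-122.1300, 47.6300], [-122.1295, 47.6300]],
    },
    "properties": {
        "originalId": "wall-12",
        "categoryId": "CTG-wall",
        "unitId": "UNIT-openspace-2",  # the open-area unit that contains the wall
        "isObstruction": True,
        "obstructionArea": {
            "type": "Polygon",
            "coordinates": [[
                [-122.1300, 47.62999], [-122.1295, 47.62999],
                [-122.1295, 47.63001], [-122.1300, 47.63001],
                [-122.1300, 47.62999],
            ]],
        },
        "name": "Partition wall",
    },
}
```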
The `lineElement` is a class feature that defines a line feature in a unit, such
| Property | Type | Required | Description | |-|--|-|-|
-|`originalId` | string |false | When the dataset is created through the [conversion service][conversion], the original ID is set to the Azure Maps internal ID. When the [dataset][datasetv20220901] is created from a GeoJSON package, the original ID can be user defined. Maximum length allowed is 1,000 characters.|
+|`originalId` | string |false | When the dataset is created through the [conversion service], the original ID is set to the Azure Maps internal ID. When the [dataset] is created from a GeoJSON package, the original ID can be user defined. Maximum length allowed is 1,000 characters.|
|`externalId` | string |false | An ID used by the client to associate the feature with another feature in a different dataset, such as in an internal database. Maximum length allowed is 1,000 characters.|
-|`categoryId` |[category.Id](#category) |true | The ID of a [`category`](#category) feature.|
-| `unitId` | [`unitId`](#unit) | true | The ID of a [`unit`](#unit) feature containing this feature. |
+|`categoryId` |[category.Id] |true | The ID of a [`category`] feature.|
+| `unitId` | [`unitId`] | true | The ID of a [`unit`] feature containing this feature. |
| `isObstruction` | boolean (Default value is `null`.)| false | If `true`, this feature represents an obstruction to be avoided while routing through the containing unit feature. | |`name` | string | false | Name of the feature in local language. Maximum length allowed is 1,000 characters. | |`nameSubtitle` | string | false | Subtitle that shows up under the `name` of the feature. Can be used to display the name in a different language, and so on. Maximum length allowed is 1,000 characters. | |`nameAlt` | string | false | Alternate name used for the feature. Maximum length allowed is 1,000 characters. |
-|`anchorPoint` |[Point][geojsonpoint] | false | [GeoJSON Point geometry][geojsonpoint] that represents the feature as a point. Can be used to position the label of the feature.|
-|`obstructionArea` | [Polygon][GeoJsonPolygon] or [MultiPolygon][MultiPolygon] | false | A simplified geometry (when the line geometry is complicated) of the feature that is to be avoided during routing. Requires `isObstruction` set to true.|
+|`anchorPoint` |[Point] | false | [GeoJSON Point geometry] that represents the feature as a point. Can be used to position the label of the feature.|
+|`obstructionArea` | [Polygon] or [MultiPolygon] | false | A simplified geometry (when the line geometry is complicated) of the feature that is to be avoided during routing. Requires `isObstruction` set to true.|
:::zone-end
The `areaElement` is a class feature that defines a polygon feature in a unit, s
| Property | Type | Required | Description | |-|--|-|-|
-|`originalId` | string |false | When the dataset is created through the [conversion service][conversion], the original ID is automatically set to the Azure Maps internal ID. Maximum length allowed is 1,000 characters.|
+|`originalId` | string |false | When the dataset is created through the [conversion service], the original ID is automatically set to the Azure Maps internal ID. Maximum length allowed is 1,000 characters.|
|`externalId` | string |false | An ID used by the client to associate the feature with another feature in a different dataset, such as in an internal database. Maximum length allowed is 1,000 characters.|
-|`categoryId` |[category.Id](#category) |true | The ID of a [`category`](#category) feature.|
-| `unitId` | [`unitId`](#unit) | true | The ID of a [`unit`](#unit) feature containing this feature. |
+|`categoryId` |[category.Id] |true | The ID of a [`category`] feature.|
+| `unitId` | [`unitId`] | true | The ID of a [`unit`] feature containing this feature. |
| `isObstruction` | boolean | false | If `true`, this feature represents an obstruction to be avoided while routing through the containing unit feature. |
-|`obstructionArea` | [Polygon][GeoJsonPolygon] or [MultiPolygon][MultiPolygon] | false | A simplified geometry (when the line geometry is complicated) of the feature that is to be avoided during routing. Requires `isObstruction` set to true.|
+|`obstructionArea` | [Polygon] or [MultiPolygon] | false | A simplified geometry (when the line geometry is complicated) of the feature that is to be avoided during routing. Requires `isObstruction` set to true.|
|`name` | string | false | Name of the feature in local language. Maximum length allowed is 1,000 characters. | |`nameSubtitle` | string | false | Subtitle that shows up under the `name` of the feature. Can be used to display the name in a different language, and so on. Maximum length allowed is 1,000 characters.| |`nameAlt` | string | false | Alternate name used for the feature. Maximum length allowed is 1,000 characters.|
-|`anchorPoint` | [Point][geojsonpoint] | false | [GeoJSON Point geometry][geojsonpoint] that represents the feature as a point. Can be used to position the label of the feature.|
+|`anchorPoint` | [Point] | false | [GeoJSON Point geometry] that represents the feature as a point. Can be used to position the label of the feature.|
:::zone-end
The `areaElement` is a class feature that defines a polygon feature in a unit, s
| Property | Type | Required | Description | |-|--|-|-|
-|`originalId` | string |false | When the dataset is created through the [conversion service][conversion], the original ID is set to the Azure Maps internal ID. When the [dataset][datasetv20220901] is created from a GeoJSON package, the original ID can be user defined. Maximum length allowed is 1,000 characters.|
+|`originalId` | string |false | When the dataset is created through the [conversion service], the original ID is set to the Azure Maps internal ID. When the [dataset] is created from a GeoJSON package, the original ID can be user defined. Maximum length allowed is 1,000 characters.|
|`externalId` | string |false | An ID used by the client to associate the feature with another feature in a different dataset, such as in an internal database. Maximum length allowed is 1,000 characters.|
-|`categoryId` |[category.Id](#category) |true | The ID of a [`category`](#category) feature.|
-| `unitId` | [`unitId`](#unit) | true | The ID of a [`unit`](#unit) feature containing this feature. |
+|`categoryId` |[category.Id] |true | The ID of a [`category`] feature.|
+| `unitId` | [`unitId`] | true | The ID of a [`unit`] feature containing this feature. |
| `isObstruction` | boolean | false | If `true`, this feature represents an obstruction to be avoided while routing through the containing unit feature. |
-|`obstructionArea` | [Polygon][GeoJsonPolygon] or [MultiPolygon][MultiPolygon] | false | A simplified geometry (when the line geometry is complicated) of the feature that is to be avoided during routing. Requires `isObstruction` set to true.|
+|`obstructionArea` | [Polygon] or [MultiPolygon] | false | A simplified geometry (when the line geometry is complicated) of the feature that is to be avoided during routing. Requires `isObstruction` set to true.|
|`name` | string | false | Name of the feature in local language. Maximum length allowed is 1,000 characters. | |`nameSubtitle` | string | false | Subtitle that shows up under the `name` of the feature. Can be used to display the name in a different language, and so on. Maximum length allowed is 1,000 characters.| |`nameAlt` | string | false | Alternate name used for the feature. Maximum length allowed is 1,000 characters.|
-|`anchorPoint` | [Point][geojsonpoint] | false | [GeoJSON Point geometry][geojsonpoint] that represents the feature as a point. Can be used to position the label of the feature.|
+|`anchorPoint` | [Point] | false | [GeoJSON Point geometry] that represents the feature as a point. Can be used to position the label of the feature.|
:::zone-end
The `category` class feature defines category names. For example: "room.conferen
| Property | Type | Required | Description | |-|--|-|-|
-|`originalId` | string |false | When the dataset is created through the [conversion service][conversion], the original ID is set to the Azure Maps internal ID. When the [dataset][datasetv20220901] is created from a GeoJSON package, the original ID can be user defined. Maximum length allowed is 1,000 characters.|
+|`originalId` | string |false | When the dataset is created through the [conversion service], the original ID is set to the Azure Maps internal ID. When the [dataset] is created from a GeoJSON package, the original ID can be user defined. Maximum length allowed is 1,000 characters.|
|`externalId` | string |false | An ID used by the client to associate the category with another category in a different dataset, such as in an internal database. Maximum length allowed is 1,000 characters.| |`name` | string | true | Name of the category. Suggested to use "." to represent hierarchy of categories. For example: "room.conference", "room.privateoffice". Maximum length allowed is 1,000 characters. |
The `category` class feature defines category names. For example: "room.conferen
Learn more about Creator for indoor maps by reading: > [!div class="nextstepaction"]
-> [Creator for indoor maps](creator-indoor-maps.md)
-
-[conversion]: /rest/api/maps/v2/conversion
-[geojsonpoint]: /rest/api/maps/v2/wfs/get-features#geojsonpoint
-[GeoJsonPolygon]: /rest/api/maps/v2/wfs/get-features?tabs=HTTP#geojsonpolygon
+> [Creator for indoor maps]
+
+<!-- Internal Links -->
+[`areaElement`]: #areaelement
+[`category`]: #category
+[`facility.anchorHeightAboveSeaLevel`]: #facility
+[`facility.defaultLevelVerticalExtent`]: #facility
+[`facility`]: #facility
+[`level`]: #level
+[`lineElement`]: #lineelement
+[`opening`]: #opening
+[`unit`]: #unit
+[`unitId`]: #unit
+[`units`]: #unit
+[`verticalPenetration`]: #verticalpenetration
+[category.Id]: #category
+[directoryInfo.Id]: #directoryinfo
+[directoryInfo]: #directoryinfo
+[facility.Id]: #facility
+[level.Id]: #level
+[structures]: #structure
+<!-- REST API Links -->
+[conversion service]: /rest/api/maps/v2/conversion
+[dataset]: /rest/api/maps/v20220901preview/dataset
+[GeoJSON Point geometry]: /rest/api/maps/v2/wfs/get-features#geojsonpoint
[MultiPolygon]: /rest/api/maps/v2/wfs/get-features?tabs=HTTP#geojsonmultipolygon
-[GeometryObject]: https://www.rfc-editor.org/rfc/rfc7946#section-3.1
+[Point]: /rest/api/maps/v2/wfs/get-features#geojsonpoint
+[Polygon]: /rest/api/maps/v2/wfs/get-features?tabs=HTTP#geojsonpolygon
+<!-- learn.microsoft.com links -->
+[Create a dataset using a GeoJson package]: how-to-dataset-geojson.md
+[Creator for indoor maps]: creator-indoor-maps.md
+<!-- External Links -->
+[Azure Maps services]: https://aka.ms/AzureMaps
[feature object]: https://www.rfc-editor.org/rfc/rfc7946#section-3.2
-[datasetv20220901]: /rest/api/maps/v20220901preview/dataset
+[Geometry Object]: https://www.rfc-editor.org/rfc/rfc7946#section-3.1
[Open Street Map specification]: https://wiki.openstreetmap.org/wiki/Key:opening_hours/specification
azure-maps Creator Geographic Scope https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/creator-geographic-scope.md
Title: Azure Maps Creator service geographic scope description: Learn about Azure Maps Creator service's geographic mappings in Azure Maps--++ Last updated 05/18/2021
# Creator service geographic scope
-Azure Maps Creator is a geographically scoped service. Creator offers a resource provider API that, given an Azure region, creates an instance of Creator data deployed at the geographical level. The mapping from an Azure region to geography happens behind the scenes as described in the table below. For more details on Azure regions and geographies, see [Azure geographies](https://azure.microsoft.com/global-infrastructure/geographies).
+Azure Maps Creator is a geographically scoped service. Creator offers a resource provider API that, given an Azure region, creates an instance of Creator data deployed at the geographical level. The mapping from an Azure region to geography happens behind the scenes as described in the following table. For more information on Azure regions and geographies, see [Azure geographies].
## Data locations
For disaster recovery and high availability, Microsoft may replicate customer da
The following table describes the mapping between geography and supported Azure regions, and the respective geographic API endpoint. For example, if a Creator account is provisioned in the West US 2 region that falls within the United States geography, all API calls to the Conversion service must be made to `us.atlas.microsoft.com/conversion/convert`.

| Azure Geographic areas (geos) | Azure datacenters (regions) | API geographic endpoint |
|-------------------------------|-----------------------------|-------------------------|
| Europe | West Europe, North Europe | eu.atlas.microsoft.com |
| United States | West US 2, East US 2 | us.atlas.microsoft.com |
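As a minimal sketch, an application might derive the geographic API endpoint from the Azure region of its Creator account by using the mappings in the preceding table. The region list and endpoint path below are taken from that table and the example in the text; treat the helper itself as illustrative only.

```python
# Map the Azure region of a Creator account to its geographic API endpoint,
# following the table above (Europe -> eu, United States -> us).
GEO_ENDPOINTS = {
    "West Europe": "eu.atlas.microsoft.com",
    "North Europe": "eu.atlas.microsoft.com",
    "West US 2": "us.atlas.microsoft.com",
    "East US 2": "us.atlas.microsoft.com",
}

def conversion_url(region: str) -> str:
    """Return the Conversion service base URL for the account's Azure region."""
    host = GEO_ENDPOINTS[region]
    return f"https://{host}/conversion/convert"

print(conversion_url("West US 2"))  # https://us.atlas.microsoft.com/conversion/convert
```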
+[Azure geographies]: https://azure.microsoft.com/global-infrastructure/geographies
azure-maps Creator Indoor Maps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/creator-indoor-maps.md
Title: Work with indoor maps in Azure Maps Creator description: This article introduces concepts that apply to Azure Maps Creator services--++ Last updated 04/01/2022
Use [Data Upload] to upload a drawing package. After the Drawing packing is uplo
## Convert a drawing package
-The [Conversion service](/rest/api/maps/v2/conversion) converts an uploaded drawing package into indoor map data. The Conversion service also validates the package. Validation issues are classified into two types:
+The [Conversion service] converts an uploaded drawing package into indoor map data. The Conversion service also validates the package. Validation issues are classified into two types:
- Errors: If any errors are detected, the conversion process fails. When an error occurs, the Conversion service provides a link to the [Azure Maps Drawing Error Visualizer] stand-alone web application. You can use the Drawing Error Visualizer to inspect [Drawing package warnings and errors] that occurred during the conversion process. After you fix the errors, you can attempt to upload and convert the package. - Warnings: If any warnings are detected, the conversion succeeds. However, we recommend that you review and resolve all warnings. A warning means that part of the conversion was ignored or automatically fixed. Failing to resolve the warnings could result in errors in later processes.
Azure Maps Creator provides the following services that support map creation:
- [Dataset service]. - [Tileset service]. Use the Tileset service to create a vector-based representation of a dataset. Applications can use a tileset to present a visual tile-based view of the dataset.-- [Custom styling service](#custom-styling-preview). Use the [style] service or [visual style editor] to customize the visual elements of an indoor map.
+- [Custom styling service]. Use the [style] service or [visual style editor] to customize the visual elements of an indoor map.
- [Feature State service]. Use the Feature State service to support dynamic map styling. Applications can use dynamic map styling to reflect real-time events on spaces provided by the IoT system.-- [Wayfinding service](#wayfinding-preview). Use the [wayfinding] API to generate a path between two points within a facility. Use the [routeset] API to create the data that the wayfinding service needs to generate paths.
+- [Wayfinding service]. Use the [wayfinding] API to generate a path between two points within a facility. Use the [routeset] API to create the data that the wayfinding service needs to generate paths.
### Datasets
-A dataset is a collection of indoor map features. The indoor map features represent facilities that are defined in a converted drawing package. After you create a dataset with the [Dataset service], you can create any number of [tilesets](#tilesets) or [feature statesets](#feature-statesets).
+A dataset is a collection of indoor map features. The indoor map features represent facilities that are defined in a converted drawing package. After you create a dataset with the [Dataset service], you can create any number of [tilesets] or [feature statesets].
-At any time, developers can use the [Dataset service] to add or remove facilities to an existing dataset. For more information about how to update an existing dataset using the API, see the append options in [Dataset service]. For an example of how to update a dataset, see [Data maintenance](#data-maintenance).
+At any time, developers can use the [Dataset service] to add or remove facilities to an existing dataset. For more information about how to update an existing dataset using the API, see the append options in [Dataset service]. For an example of how to update a dataset, see [Data maintenance].
### Tilesets
To reflect different content stages, you can create multiple tilesets from the s
In addition to the vector data, the tileset provides metadata for map rendering optimization. For example, tileset metadata contains a minimum and maximum zoom level for the tileset. The metadata also provides a bounding box that defines the geographic extent of the tileset. An application can use a bounding box to programmatically set the correct center point. For more information about tileset metadata, see [Tileset List].
-After a tileset is created, it can be retrieved by the [Render V2 service](#render-v2-get-map-tile-api).
+After a tileset is created, it's retrieved using the [Render service].
-If a tileset becomes outdated and is no longer useful, you can delete the tileset. For information about how to delete tilesets, see [Data maintenance](#data-maintenance).
+If a tileset becomes outdated and is no longer useful, you can delete the tileset. For information about how to delete tilesets, see [Data maintenance].
>[!NOTE]
>A tileset is independent of the dataset from which it was created. If you create tilesets from a dataset, and then subsequently update that dataset, the tilesets aren't updated.
Example layer in the style.json file:
| type | The rendering type for this layer.<br/>Some of the more common types include:<br/>**fill**: A filled polygon with an optional stroked border.<br/>**Line**: A stroked line.<br/>**Symbol**: An icon or a text label.<br/>**fill-extrusion**: An extruded (3D) polygon. |
| filter | Only features that match the filter criteria are displayed. |
| layout | Layout properties for the layer. |
-| minzoom | A number between 0 and 24 that represents the minimum zoom level for the layer. At zoom levels less than the minzoom, the layer will be hidden. |
+| minzoom | A number between 0 and 24 that represents the minimum zoom level for the layer. At zoom levels less than the minzoom, the layer is hidden. |
| paint | Default paint properties for this layer. |
| source-layer | A source supplies the data, from a vector tile source, displayed on a map. Required for vector tile sources; prohibited for all other source types, including GeoJSON sources.|
Example layer in the style.json file:
The map configuration is an array of configurations. Each configuration consists of a [basemap] and one or more layers, each layer consisting of a [style] + [tileset] tuple.
-The map configuration is used when you [Instantiate the Indoor Manager] of a Map object when developing applications in Azure Maps. It's referenced using the `mapConfigurationId` or `alias`. Map configurations are immutable. When making changes to an existing map configuration, a new map configuration will be created, resulting in a different `mapConfingurationId`. Anytime you create a map configuration using an alias already used by an existing map configuration, it will always point to the new map configuration.
+The map configuration is used when you [Instantiate the Indoor Manager] of a Map object when developing applications in Azure Maps. It's referenced using the `mapConfigurationId` or `alias`. Map configurations are immutable. When making changes to an existing map configuration, a new map configuration is created, resulting in a different `mapConfigurationId`. Anytime you create a map configuration using an alias already used by an existing map configuration, it points to the new map configuration.
-The following JSON is an example of a default map configuration. See the table below for a description of each element of the file:
+The following JSON is an example of a default map configuration. See the following table for a description of each element of the file:
```json
{
The following JSON is an example of a default map configuration. See the table b
| Name | The name of the style. |
| displayName | The display name of the style. |
| description | The user defined description of the style. |
-| thumbnail | Use to specify the thumbnail used in the style picker for this style. For more information, see the [style picker control][style-picker-control]. |
+| thumbnail | Use to specify the thumbnail used in the style picker for this style. For more information, see the [style picker control]. |
| baseMap | Use to Set the base map style. |
| layers  | The layers array consists of one or more *tileset + Style* tuples, each being a layer of the map. This enables multiple buildings on a map, each building represented in its own tileset. |

#### Additional information

-- For more information how to modify styles using the style editor, see [Create custom styles for indoor maps][style-how-to].
+- For more information about how to modify styles using the style editor, see [Create custom styles for indoor maps].
- For more information on style Rest API, see [style] in the Maps Creator Rest API reference.-- For more information on the map configuration Rest API, see [Creator - map configuration Rest API][map-config-api].
+- For more information on the map configuration Rest API, see [Creator - map configuration Rest API].
### Feature statesets
Feature statesets are collections of dynamic properties (*states*) that are assi
You can use the [Feature State service] to create and manage a feature stateset for a dataset. The stateset is defined by one or more *states*. Each feature, such as a room, can have one *state* attached to it.
-The value of each *state* in a stateset can be updated or retrieved by IoT devices or other applications. For example, using the [Feature State Update API](/rest/api/maps/v2/feature-state/update-states), devices measuring space occupancy can systematically post the state change of a room.
+The value of each *state* in a stateset is updated or retrieved by IoT devices or other applications. For example, using the [Feature State Update API], devices measuring space occupancy can systematically post the state change of a room.
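To illustrate the occupancy example, here's a hedged TypeScript sketch of a device posting a state change. The URL shape, api-version, and request body are assumptions modeled on the Feature State Update API and should be checked against its reference; the stateset ID, feature ID, and state key are placeholders.

```typescript
// Sketch: set the "occupancy" state of a unit feature in a stateset.
async function setOccupancy(
  geography: string,
  statesetId: string,
  featureId: string,
  occupied: boolean,
  subscriptionKey: string
): Promise<void> {
  const url =
    `https://${geography}.atlas.microsoft.com/featurestatesets/${statesetId}` +
    `/featureStates/${featureId}?api-version=2.0&subscription-key=${subscriptionKey}`;

  const response = await fetch(url, {
    method: 'PUT',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      states: [{ keyName: 'occupancy', value: occupied, eventTimestamp: new Date().toISOString() }]
    })
  });

  if (!response.ok) {
    throw new Error(`Feature state update failed: ${response.status}`);
  }
}
```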
-An application can use a feature stateset to dynamically render features in a facility according to their current state and respective map style. For more information about using feature statesets to style features in a rendering map, see [Indoor Maps module](#indoor-maps-module).
+An application can use a feature stateset to dynamically render features in a facility according to their current state and respective map style. For more information about using feature statesets to style features in a rendering map, see [Indoor Maps module].
>[!NOTE]
>Like tilesets, changing a dataset doesn't affect the existing feature stateset, and deleting a feature stateset doesn't affect the dataset to which it's attached.
Creator wayfinding is powered by [Havok].
When a [wayfinding path] is successfully generated, it finds the shortest path between two points in the specified facility. Each floor in the journey is represented as a separate leg, as are any stairs or elevators used to move between floors.
-For example, the first leg of the path might be from the origin to the elevator on that floor. The next leg will be the elevator, and then the final leg will be the path from the elevator to the destination. The estimated travel time is also calculated and returned in the HTTP response JSON.
+For example, the first leg of the path might be from the origin to the elevator on that floor. The next leg is the elevator, and then the final leg is the path from the elevator to the destination. The estimated travel time is also calculated and returned in the HTTP response JSON.
##### Structure
-For wayfinding to work, the facility data must contain a [structure][structures]. The wayfinding service calculates the shortest path between two selected points in a facility. The service creates the path by navigating around structures, such as walls and any other impermeable structures.
+For wayfinding to work, the facility data must contain a [structure]. The wayfinding service calculates the shortest path between two selected points in a facility. The service creates the path by navigating around structures, such as walls and any other impermeable structures.
##### Vertical penetration
-If the selected origin and destination are on different floors, the wayfinding service determines what [vertical penetration][verticalPenetration] objects such as stairs or elevators, are available as possible pathways for navigating vertically between levels. By default, the option that results in the shortest path will be used.
+If the selected origin and destination are on different floors, the wayfinding service determines what [verticalPenetration] objects, such as stairs or elevators, are available as possible pathways for navigating vertically between levels. By default, the option that results in the shortest path is used.
-The Wayfinding service includes stairs or elevators in a path based on the value of the vertical penetration's `direction` property. For more information on the direction property, see [verticalPenetration][verticalPenetration] in the Facility Ontology article. See the `avoidFeatures` and `minWidth` properties in the [wayfinding] API documentation to learn about other factors that can affect the path selection between floor levels.
+The Wayfinding service includes stairs or elevators in a path based on the value of the vertical penetration's `direction` property. For more information on the direction property, see [verticalPenetration] in the Facility Ontology article. See the `avoidFeatures` and `minWidth` properties in the [wayfinding] API documentation to learn about other factors that can affect the path selection between floor levels.
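As a rough sketch of what a path request looks like in practice, the TypeScript below queries the wayfinding API. The parameter names (`routesetId`, `facilityId`, `fromPoint`/`fromLevel`, `toPoint`/`toLevel`) and the preview api-version are assumptions to verify in the wayfinding API reference; all IDs and coordinates are placeholders.

```typescript
// Sketch: request the shortest path between two points in a facility.
async function getWayfindingPath(geography: string, subscriptionKey: string): Promise<unknown> {
  const params = new URLSearchParams({
    'api-version': '2022-09-01-preview',
    routesetId: '<routeset-id>',
    facilityId: '<facility-id>',
    fromPoint: '47.6400,-122.1300',
    fromLevel: '0',
    toPoint: '47.6405,-122.1305',
    toLevel: '2',
    'subscription-key': subscriptionKey
  });

  const response = await fetch(`https://${geography}.atlas.microsoft.com/wayfinding/path?${params}`);
  // Each leg of the returned path covers one floor or one vertical connector
  // (stairs or elevator), and the response includes the estimated travel time.
  return response.json();
}
```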
For more information, see the [Indoor maps wayfinding service] how-to article.
For more information, see the [Indoor maps wayfinding service] how-to article.
### Render V2-Get Map Tile API
-The Azure Maps [Render V2-Get Map Tile API](/rest/api/maps/render-v2/get-map-tile) has been extended to support Creator tilesets.
+The Azure Maps [Render V2-Get Map Tile API] has been extended to support Creator tilesets.
-Applications can use the Render V2-Get Map Tile API to request tilesets. The tilesets can then be integrated into a map control or SDK. For an example of a map control that uses the Render V2 service, see [Indoor Maps Module](#indoor-maps-module).
+Applications can use the Render V2-Get Map Tile API to request tilesets. The tilesets can then be integrated into a map control or SDK. For an example of a map control that uses the Render V2 service, see [Indoor Maps Module].
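For illustration, a small TypeScript helper that builds a Get Map Tile request URL for a Creator tileset might look like the following; the api-version shown is an assumption, and `zoom`/`x`/`y` follow the standard tile addressing scheme.

```typescript
// Sketch: build a Render V2 Get Map Tile request for a Creator tileset.
function creatorTileUrl(tilesetId: string, zoom: number, x: number, y: number, subscriptionKey: string): string {
  const params = new URLSearchParams({
    'api-version': '2.1',
    tilesetId,
    zoom: String(zoom),
    x: String(x),
    y: String(y),
    'subscription-key': subscriptionKey
  });
  // Creator content may need to be requested from your Creator resource's geographic endpoint.
  return `https://atlas.microsoft.com/map/tile?${params}`;
}
```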
### Web Feature service API
You can use the [Web Feature service] (WFS) to query datasets. WFS follows the [
### Alias API
-Creator services such as Conversion, Dataset, Tileset and Feature State return an identifier for each resource that's created from the APIs. The [Alias API](/rest/api/maps/v2/alias) allows you to assign an alias to reference a resource identifier.
+Creator services such as Conversion, Dataset, Tileset and Feature State return an identifier for each resource that's created from the APIs. The [Alias API] allows you to assign an alias to reference a resource identifier.
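The sketch below shows one plausible create-then-assign flow in TypeScript; the paths, api-version, and response shape are assumptions, so confirm the exact contract in the Alias API reference before relying on it.

```typescript
// Sketch: create an alias, then point it at a Creator resource ID (for example a tilesetId),
// so the alias can be used in place of the raw identifier.
async function aliasResource(creatorDataItemId: string, subscriptionKey: string): Promise<string> {
  // Create an alias; the response is assumed to contain its aliasId.
  const created = await fetch(
    `https://us.atlas.microsoft.com/aliases?api-version=2.0&subscription-key=${subscriptionKey}`,
    { method: 'POST' }
  );
  const { aliasId } = await created.json();

  // Assign the alias to the resource.
  await fetch(
    `https://us.atlas.microsoft.com/aliases/${aliasId}` +
      `?api-version=2.0&creatorDataItemId=${creatorDataItemId}&subscription-key=${subscriptionKey}`,
    { method: 'PUT' }
  );
  return aliasId;
}
```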
### Indoor Maps module
As you begin to develop solutions for indoor maps, you can discover ways to inte
The following example shows how to update a dataset, create a new tileset, and delete an old tileset:
-1. Follow steps in the [Upload a drawing package](#upload-a-drawing-package) and [Convert a drawing package](#convert-a-drawing-package) sections to upload and convert the new drawing package.
+1. Follow steps in the [Upload a drawing package] and [Convert a drawing package] sections to upload and convert the new drawing package.
2. Use [Dataset Create] to append the converted data to the existing dataset.
3. Use [Tileset Create] to generate a new tileset out of the updated dataset.
4. Save the new **tilesetId** for the next step.
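A hedged TypeScript sketch of the numbered steps above follows; the query parameters and api-version are assumptions to verify against the Dataset Create and Tileset Create references, and both calls return long-running operations that must be polled before the old tileset is deleted.

```typescript
// Sketch: append a new conversion to an existing dataset, then build a new tileset from it.
async function refreshIndoorMap(conversionId: string, datasetId: string, subscriptionKey: string) {
  // Append the converted data to the existing dataset.
  const datasetOp = await fetch(
    `https://us.atlas.microsoft.com/datasets?api-version=2.0` +
      `&conversionId=${conversionId}&datasetId=${datasetId}&subscription-key=${subscriptionKey}`,
    { method: 'POST' }
  );

  // Generate a new tileset from the updated dataset; poll the returned operation
  // to obtain the new tilesetId, then delete the old tileset once verified.
  const tilesetOp = await fetch(
    `https://us.atlas.microsoft.com/tilesets?api-version=2.0` +
      `&datasetId=${datasetId}&subscription-key=${subscriptionKey}`,
    { method: 'POST' }
  );

  return {
    datasetOperation: datasetOp.headers.get('Operation-Location'),
    tilesetOperation: tilesetOp.headers.get('Operation-Location')
  };
}
```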
The following example shows how to update a dataset, create a new tileset, and d
## Next steps > [!div class="nextstepaction"]
-> [Tutorial: Creating a Creator indoor map](tutorial-creator-indoor-maps.md)
+> [Tutorial: Creating a Creator indoor map]
> [!div class="nextstepaction"] > [Create custom styles for indoor maps]
-[Azure Maps pricing]: https://aka.ms/CreatorPricing
-[Manage authentication in Azure Maps]: how-to-manage-authentication.md
-[Azure AD authentication]: azure-maps-authentication.md#azure-ad-authentication
-[Authorization with role-based access control]: azure-maps-authentication.md#authorization-with-role-based-access-control
-[Drawing package requirements]: drawing-requirements.md
+<!-- Internal Links -->
+[Convert a drawing package]: #convert-a-drawing-package
+[Custom styling service]: #custom-styling-preview
+[Data maintenance]: #data-maintenance
+[feature statesets]: #feature-statesets
+[Indoor Maps module]: #indoor-maps-module
+[Render service]: #render-v2-get-map-tile-api
+[tilesets]: #tilesets
+[Upload a drawing package]: #upload-a-drawing-package
+
+<!-- REST API Links -->
+[Alias API]: /rest/api/maps/v2/alias
+[Conversion service]: /rest/api/maps/v2/conversion
+[Creator - map configuration Rest API]: /rest/api/maps/v20220901preview/map-configuration
[Data Upload]: /rest/api/maps/data-v2/update-
-[style layers]: https://docs.mapbox.com/mapbox-gl-js/style-spec/layers/#layout
-[sprites]: https://docs.mapbox.com/help/glossary/sprite/
+[Dataset Create]: /rest/api/maps/v2/dataset/create
+[Dataset service]: /rest/api/maps/v2/dataset
+[Feature State service]: /rest/api/maps/v2/feature-state
+[Feature State Update API]: /rest/api/maps/v2/feature-state/update-states
+[Geofence service]: /rest/api/maps/spatial/postgeofence
+[Render V2-Get Map Tile API]: /rest/api/maps/render-v2/get-map-tile
+[routeset]: /rest/api/maps/v20220901preview/routeset
[Style - Create]: /rest/api/maps/v20220901preview/style/create
-[basemap]: supported-map-styles.md
-[Manage Azure Maps Creator]: how-to-manage-creator.md
-[Drawing package warnings and errors]: drawing-conversion-error-codes.md
-[Azure Maps Drawing Error Visualizer]: drawing-error-visualizer.md
-[Create custom styles for indoor maps]: how-to-create-custom-styles.md
[style]: /rest/api/maps/v20220901preview/style
-[tileset]: /rest/api/maps/v20220901preview/tileset
-[Dataset service]: /rest/api/maps/v2/dataset
-[Dataset Create]: /rest/api/maps/v2/dataset/create
-[Tileset service]: /rest/api/maps/v2/tileset
[Tileset Create]: /rest/api/maps/v2/tileset/create
[Tileset List]: /rest/api/maps/v2/tileset/list
-[Feature State service]: /rest/api/maps/v2/feature-state
-[routeset]: /rest/api/maps/v20220901preview/routeset
-[wayfinding]: /rest/api/maps/v20220901preview/wayfinding
-[wayfinding service]: /rest/api/maps/v20220901preview/wayfinding
+[Tileset service]: /rest/api/maps/v2/tileset
+[tileset]: /rest/api/maps/v20220901preview/tileset
[wayfinding path]: /rest/api/maps/v20220901preview/wayfinding/get-path
-[style-picker-control]: choose-map-style.md#add-the-style-picker-control
-[style-how-to]: how-to-create-custom-styles.md
-[map-config-api]: /rest/api/maps/v20220901preview/map-configuration
-[Instantiate the Indoor Manager]: how-to-use-indoor-module.md#instantiate-the-indoor-manager
-[visual style editor]: https://azure.github.io/Azure-Maps-Style-Editor
-[verticalPenetration]: creator-facility-ontology.md?pivots=facility-ontology-v2#verticalpenetration
-[Indoor maps wayfinding service]: how-to-creator-wayfinding.md
-[Open Geospatial Consortium API Features]: https://docs.opengeospatial.org/DRAFTS/17-069r4.html
+[wayfinding service]: /rest/api/maps/v20220901preview/wayfinding
+[wayfinding]: /rest/api/maps/v20220901preview/wayfinding
[Web Feature service]: /rest/api/maps/v2/wfs
-[Azure Maps Web SDK]: how-to-use-map-control.md
-[Use the Indoor Map module]: how-to-use-indoor-module.md
+
+<!-- learn.microsoft.com Links -->
+[Authorization with role-based access control]: azure-maps-authentication.md#authorization-with-role-based-access-control
+[Azure AD authentication]: azure-maps-authentication.md#azure-ad-authentication
+[Azure Maps Drawing Error Visualizer]: drawing-error-visualizer.md
[Azure Maps services]: index.yml
-[structures]: creator-facility-ontology.md?pivots=facility-ontology-v2#structure
-[Render V2-Get Map Tile API]: /rest/api/maps/render-v2/get-map-tile
+[Azure Maps Web SDK]: how-to-use-map-control.md
+[basemap]: supported-map-styles.md
+[Create custom styles for indoor maps]: how-to-create-custom-styles.md
+[Drawing package requirements]: drawing-requirements.md
+[Drawing package warnings and errors]: drawing-conversion-error-codes.md
+[Indoor maps wayfinding service]: how-to-creator-wayfinding.md
+[Instantiate the Indoor Manager]: how-to-use-indoor-module.md#instantiate-the-indoor-manager
+[Manage authentication in Azure Maps]: how-to-manage-authentication.md
+[Manage Azure Maps Creator]: how-to-manage-creator.md
+[structure]: creator-facility-ontology.md?pivots=facility-ontology-v2#structure
+[style picker control]: choose-map-style.md#add-the-style-picker-control
+[Tutorial: Creating a Creator indoor map]: tutorial-creator-indoor-maps.md
[Tutorial: Implement IoT spatial analytics by using Azure Maps]: tutorial-iot-hub-maps.md
-[Geofence service]: /rest/api/maps/spatial/postgeofence
+[Use the Indoor Map module]: how-to-use-indoor-module.md
+[verticalPenetration]: creator-facility-ontology.md?pivots=facility-ontology-v2#verticalpenetration
+
+<!-- HTTP Links -->
+[Azure Maps pricing]: https://aka.ms/CreatorPricing
[havok]: https://www.havok.com/
+[Open Geospatial Consortium API Features]: https://docs.opengeospatial.org/DRAFTS/17-069r4.html
+[sprites]: https://docs.mapbox.com/help/glossary/sprite/
+[style layers]: https://docs.mapbox.com/mapbox-gl-js/style-spec/layers/#layout
+[visual style editor]: https://azure.github.io/Azure-Maps-Style-Editor
+
azure-maps Creator Long Running Operation V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/creator-long-running-operation-v2.md
Title: Azure Maps long-running operation API V2 description: Learn about long-running asynchronous V2 background processing in Azure Maps--++ Last updated 05/18/2021
# Creator Long-Running Operation API V2
-Some APIs in Azure Maps use an [Asynchronous Request-Reply pattern](/azure/architecture/patterns/async-request-reply). This pattern allows Azure Maps to provide highly available and responsive services. This article explains Azure Map's specific implementation of long-running asynchronous background processing.
+Some APIs in Azure Maps use an [Asynchronous Request-Reply pattern]. This pattern allows Azure Maps to provide highly available and responsive services. This article explains Azure Map's specific implementation of long-running asynchronous background processing.
## Submit a request
-A client application starts a long-running operation through a synchronous call to an HTTP API. Typically, this call is in the form of an HTTP POST request. When an asynchronous workload is successfully created, the API will return an HTTP `202` status code, indicating that the request has been accepted. This response contains a `Location` header pointing to an endpoint that the client can poll to check the status of the long-running operation.
+A client application starts a long-running operation through a synchronous call to an HTTP API. Typically, this call is in the form of an HTTP POST request. When an asynchronous workload is successfully created, the API returns an HTTP `202` status code, indicating that the request has been accepted. This response contains a `Location` header pointing to an endpoint that the client can poll to check the status of the long-running operation.
### Example of a success response
Operation-Location: https://atlas.microsoft.com/service/operations/{operationId}
```
-If the call doesn't pass validation, the API will instead return an HTTP `400` response for a Bad Request. The response body will provide the client more information on why the request was invalid.
+If the call doesn't pass validation, the API returns an HTTP `400` response for a Bad Request. The response body provides the client more information on why the request was invalid.
### Monitor the operation status
-The location endpoint provided in the accepted response headers can be polled to check the status of the long-running operation. The response body from operation status request will always contain the `status` and the `created` properties. The `status` property shows the current state of the long-running operation. Possible states include `"NotStarted"`, `"Running"`, `"Succeeded"`, and `"Failed"`. The `created` property shows the time the initial request was made to start the long-running operation. When the state is either `"NotStarted"` or `"Running"`, a `Retry-After` header will also be provided with the response. The `Retry-After` header, measured in seconds, can be used to determine when the next polling call to the operation status API should be made.
+The location endpoint provided in the accepted response headers can be polled to check the status of the long-running operation. The response body from operation status request always contains the `status` and the `created` properties. The `status` property shows the current state of the long-running operation. Possible states include `"NotStarted"`, `"Running"`, `"Succeeded"`, and `"Failed"`. The `created` property shows the time the initial request was made to start the long-running operation. When the state is either `"NotStarted"` or `"Running"`, a `Retry-After` header is also provided with the response. The `Retry-After` header, measured in seconds, can be used to determine when the next polling call to the operation status API should be made.
### Example of running a status response
Retry-After: 30
## Handle operation completion
-Upon completing the long-running operation, the status of the response will either be `"Succeeded"` or `"Failed"`. All responses will return an HTTP 200 OK code. When a new resource has been created from a long-running operation, the response will also contain a `Resource-Location` header that points to metadata about the resource. Upon a failure, the response will have an `error` property in the body. The error data adheres to the OData error specification.
+Once the long-running operation completes, the status of the response is either `"Succeeded"` or `"Failed"`. All responses return an HTTP 200 OK code. When a new resource has been created from a long-running operation, the response also contains a `Resource-Location` header that points to metadata about the resource. Upon a failure, the response has an `error` property in the body. The error data adheres to the OData error specification.
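Putting the polling and completion handling together, here's a minimal TypeScript sketch of the pattern described above. It assumes a `fetch`-capable environment, that the initial POST returned a 202 with an Operation-Location (or Location) header whose URL already contains its query string, and it honors `Retry-After` between calls.

```typescript
// Sketch: poll a long-running operation until it succeeds or fails.
async function waitForOperation(operationUrl: string, subscriptionKey: string): Promise<string> {
  for (;;) {
    const response = await fetch(`${operationUrl}&subscription-key=${subscriptionKey}`);
    const body = await response.json();

    if (body.status === 'Succeeded') {
      // For V2 APIs, Resource-Location points to metadata about the created resource.
      return response.headers.get('Resource-Location') ?? '';
    }
    if (body.status === 'Failed') {
      // The error property follows the OData error specification.
      throw new Error(JSON.stringify(body.error));
    }

    // Wait the number of seconds indicated by Retry-After before polling again.
    const retryAfterSeconds = Number(response.headers.get('Retry-After') ?? '30');
    await new Promise(resolve => setTimeout(resolve, retryAfterSeconds * 1000));
  }
}
```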
### Example of success response
Status: 200 OK
} } ```+
+[Asynchronous Request-Reply pattern]: /azure/architecture/patterns/async-request-reply
azure-maps Creator Long Running Operation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/creator-long-running-operation.md
Title: Azure Maps Long-Running Operation API description: Learn about long-running asynchronous background processing in Azure Maps--++ Last updated 12/07/2020
# Creator Long-Running Operation API
-Some APIs in Azure Maps use an [Asynchronous Request-Reply pattern](/azure/architecture/patterns/async-request-reply). This pattern allows Azure Maps to provide highly available and responsive services. This article explains Azure Map's specific implementation of long-running asynchronous background processing.
+Some APIs in Azure Maps use an [Asynchronous Request-Reply pattern]. This pattern allows Azure Maps to provide highly available and responsive services. This article explains Azure Map's specific implementation of long-running asynchronous background processing.
## Submitting a request
-A client application starts a long-running operation through a synchronous call to an HTTP API. Typically, this call is in the form of an HTTP POST request. When an asynchronous workload is successfully created, the API will return an HTTP `202` status code, indicating that the request has been accepted. This response contains a `Location` header pointing to an endpoint that the client can poll to check the status of the long-running operation.
+A client application starts a long-running operation through a synchronous call to an HTTP API. Typically, this call is in the form of an HTTP POST request. When an asynchronous workload is successfully created, the API returns an HTTP `202` status code, indicating that the request has been accepted. This response contains a `Location` header pointing to an endpoint that the client can poll to check the status of the long-running operation.
### Example of a success response
Location: https://atlas.microsoft.com/service/operations/{operationId}
```
-If the call doesn't pass validation, the API will instead return an HTTP `400` response for a Bad Request. The response body will provide the client more information on why the request was invalid.
+If the call doesn't pass validation, the API returns an HTTP `400` response for a Bad Request. The response body provides the client more information on why the request was invalid.
### Monitoring the operation status
-The location endpoint provided in the accepted response headers can be polled to check the status of the long-running operation. The response body from operation status request will always contain the `status` and the `createdDateTime` properties. The `status` property shows the current state of the long-running operation. Possible states include `"NotStarted"`, `"Running"`, `"Succeeded"`, and `"Failed"`. The `createdDateTime` property shows the time the initial request was made to start the long-running operation. When the state is either `"NotStarted"` or `"Running"`, a `Retry-After` header will also be provided with the response. The `Retry-After` header, measured in seconds, can be used to determine when the next polling call to the operation status API should be made.
+The location endpoint provided in the accepted response headers can be polled to check the status of the long-running operation. The response body from operation status request contains the `status` and the `createdDateTime` properties. The `status` property shows the current state of the long-running operation. Possible states include `"NotStarted"`, `"Running"`, `"Succeeded"`, and `"Failed"`. The `createdDateTime` property shows the time the initial request was made to start the long-running operation. When the state is either `"NotStarted"` or `"Running"`, a `Retry-After` header is also provided with the response. The `Retry-After` header, measured in seconds, can be used to determine when the next polling call to the operation status API should be made.
### Example of running a status response
Retry-After: 30
## Handling operation completion
-Upon completing the long-running operation, the status of the response will either be `"Succeeded"` or `"Failed"`. When a new resource has been created from a long-running operation, the success response will return an HTTP `201 Created` status code. The response will also contain a `Location` header that points to metadata about the resource. When no new resource has been created, the success response will return an HTTP `200 OK` status code. Upon a failure, the response status code will also be the `200 OK`code. However, the response will have an `error` property in the body. The error data adheres to the OData error specification.
+Once the long-running operation completes, the status of the response is either `"Succeeded"` or `"Failed"`. When a new resource has been created from a long-running operation, the success response returns an HTTP `201 Created` status code. The response also contains a `Location` header that points to metadata about the resource. When no new resource has been created, the success response returns an HTTP `200 OK` status code. Upon a failure, the response status code is also `200 OK`. However, the response has an `error` property in the body. The error data adheres to the OData error specification.
### Example of success response
Status: 200 OK
} } ```+
+[Asynchronous Request-Reply pattern]: /azure/architecture/patterns/async-request-reply
azure-maps Drawing Package Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/drawing-package-guide.md
When preparing your facility drawing files for the Conversion service, make sure
## Step 2: Prepare the DWG files
-This part of the guide will show you how to use CAD commands to ensure that your DWG files meet the requirements of the Conversion service.
+This part of the guide shows you how to use CAD commands to ensure that your DWG files meet the requirements of the Conversion service.
You may choose any CAD software to open and prepare your facility drawing files. However, this guide is created using Autodesk's AutoCAD® software. Any commands referenced in this guide are meant to be executed using Autodesk's AutoCAD® software.
The following image is taken from the sample package, and shows the exterior lay
### Unit layer
-Units are navigable spaces in the building, such as offices, hallways, stairs, and elevators. A closed entity type such as Polygon, closed Polyline, Circle, or closed Ellipse is required to represent each unit. So, walls and doors alone won't create a unit because there isn’t an entity that represents the unit.
+Units are navigable spaces in the building, such as offices, hallways, stairs, and elevators. A closed entity type such as Polygon, closed Polyline, Circle, or closed Ellipse is required to represent each unit. So, walls and doors alone don't create a unit because there isn’t an entity that represents the unit.
The following image is taken from the [sample drawing package] and shows the unit label layer and unit layer in red. All other layers are turned off to help with visualization. Also, one unit is selected to help show that each unit is a closed Polyline.
The following image is taken from the [sample drawing package] and shows the uni
### Unit label layer
-If you'd like to add a name property to a unit, you'll need to add a separate layer for unit labels. Labels must be provided as single-line text entities that fall inside the bounds of a unit. A corresponding unit property must be added to the manifest file where the `unitName` matches the Contents of the Text. To learn about all supported unit properties, see [`unitProperties`](#unitproperties).
+If you'd like to add a name property to a unit, add a separate layer for unit labels. Labels must be provided as single-line text entities that fall inside the bounds of a unit. A corresponding unit property must be added to the manifest file where the `unitName` matches the Contents of the Text. To learn about all supported unit properties, see [`unitProperties`](#unitproperties).
### Door layer
The `georeference` object is used to specify where the facility is located geogr
### dwgLayers
-The `dwgLayers` object is used to specify that DWG layer names where feature classes can be found. To receive a property converted facility, it's important to provide the correct layer names. For example, a DWG wall layer must be provided as a wall layer and not as a unit layer. The drawing can have other layers such as furniture or plumbing; but, they'll be ignored by the Azure Maps Conversion service if they're not specified in the manifest.
+The `dwgLayers` object is used to specify the DWG layer names where feature classes can be found. To receive a properly converted facility, it's important to provide the correct layer names. For example, a DWG wall layer must be provided as a wall layer and not as a unit layer. The drawing can have other layers such as furniture or plumbing; but, the Azure Maps Conversion service ignores them if they're not specified in the manifest.
The following example shows the `dwgLayers` object in the manifest.
The `unitProperties` object allows you to define other properties for a unit tha
The following image is taken from the [sample drawing package]. It displays the unit label that's associated to the unit property in the manifest. The following snippet shows the unit property object that is associated with the unit.
The following snippet shows the unit property object that is associated with the
## Step 4: Prepare the Drawing Package
-You should now have all the DWG drawings prepared to meet Azure Maps Conversion service requirements. A manifest file has also been created to help describe the facility. All files will need to be zipped into a single archive file, with the `.zip` extension. It's important that the manifest file is named `manifest.json` and is placed in the root directory of the zipped package. All other files can be in any directory of the zipped package if the filename includes the relative path to the manifest. For an example of a drawing package, see the [sample drawing package].
+You should now have all the DWG drawings prepared to meet Azure Maps Conversion service requirements. A manifest file has also been created to help describe the facility. All files need to be zipped into a single archive file, with the `.zip` extension. It's important that the manifest file is named `manifest.json` and is placed in the root directory of the zipped package. All other files can be in any directory of the zipped package if the filename includes the relative path to the manifest. For an example of a drawing package, see the [sample drawing package].
:::zone-end
You can use the [Azure Maps Creator onboarding tool] to create new and edit exis
To process the DWG files, enter the geography of your Azure Maps Creator resource, the subscription key of your Azure Maps account, and the path and filename of the DWG ZIP package, then select **Process**. This process can take several minutes to complete.

### Facility levels
The following example is taken from the [sample drawing package v2]. The facilit
The `dwgLayers` object is used to specify the DWG layer names where feature classes can be found. To receive a properly converted facility, it's important to provide the correct layer names. For example, a DWG wall layer must be provided as a wall layer and not as a unit layer. The drawing can have other layers such as furniture or plumbing; but, the Azure Maps Conversion service ignores anything not specified in the manifest. Defining text properties enables you to associate text entities that fall inside the bounds of a feature. Once defined, they can be used to style and display elements on your indoor map.

> [!IMPORTANT]
> Wayfinding support for `Drawing Package 2.0` will be available soon. The following feature class should be defined (not case sensitive) in order to use [wayfinding]. `Wall` will be treated as an obstruction for a given path request. `Stair` and `Elevator` will be treated as level connectors to navigate across floors:
The **Anchor Point Angle** is specified in degrees between true north and the dr
You position the facility's location by entering either an address or longitude and latitude values. You can also pan the map to make minor adjustments to the facility's location. ### Review and download
azure-maps Drawing Requirements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/drawing-requirements.md
The [Conversion service] does the following on each DWG file:
- Walls
- Vertical penetrations
- Produces a *Facility* feature.
-- Produces a minimal set of default Category features to be referenced by other features:
+- Produces a minimal set of default Category features referenced by other features:
- room
- structure
- wall
The [Conversion service] does the following on each DWG file:
## DWG file requirements
-A single DWG file is required for each level of the facility. All data of a single level must be contained in a single DWG file. Any external references (_xrefs_) must be bound to the parent drawing. For example, a facility with three levels will have three DWG files in the drawing package.
+A single DWG file is required for each level of the facility. All data of a single level must be contained in a single DWG file. Any external references (_xrefs_) must be bound to the parent drawing. For example, a facility with three levels has three DWG files in the drawing package.
Each DWG file must adhere to the following requirements:
Each DWG layer must adhere to the following rules:
- Self-intersecting polygons are permitted, but are automatically repaired. When they're repaired, the [Conversion service] raises a warning. It's advisable to manually inspect the repaired results, because they might not match the expected results.
- Each layer has a supported list of entity types. Any other entity types in a layer will be ignored. For example, text entities aren't supported on the wall layer.
-The table below outlines the supported entity types and converted map features for each layer. If a layer contains unsupported entity types, then the [Conversion service] ignores those entities.
+The following table outlines the supported entity types and converted map features for each layer. If a layer contains unsupported entity types, then the [Conversion service] ignores those entities.
| Layer | Entity types | Converted Features |
| :-- | :-| :-
The table below outlines the supported entity types and converted map features f
| [UnitLabel](#unitlabel-layer) | Text (single line) | Not applicable. This layer can only add properties to the unit features from the Units layer. For more information, see the [UnitLabel layer](#unitlabel-layer). |
| [ZoneLabel](#zonelabel-layer) | Text (single line) | Not applicable. This layer can only add properties to zone features from the ZonesLayer. For more information, see the [ZoneLabel layer](#zonelabel-layer). |
-The sections below describe the requirements for each layer.
+The following sections describe the requirements for each layer.
### Exterior layer

The DWG file for each level must contain a layer to define that level's perimeter. This layer is referred to as the *exterior* layer. For example, if a facility contains two levels, then it needs to have two DWG files, with an exterior layer for each file.
-No matter how many entity drawings are in the exterior layer, the [resulting facility dataset](tutorial-creator-feature-stateset.md) will contain only one level feature for each DWG file. Additionally:
+No matter how many entity drawings are in the exterior layer, the [resulting facility dataset](tutorial-creator-feature-stateset.md) contains only one level feature for each DWG file. Additionally:
- Exteriors must be drawn as POLYGON, POLYLINE (closed), CIRCLE, or ELLIPSE (closed). - Exteriors may overlap, but are dissolved into one geometry.
The `unitProperties` object contains a JSON array of unit properties.
|`verticalPenetrationDirection`| string| false |If `verticalPenetrationCategory` is defined, optionally define the valid direction of travel. The permitted values are: `lowToHigh`, `highToLow`, `both`, and `closed`. The default value is `both`. The value is case-sensitive.|
| `nonPublic` | bool | false | Indicates if the unit is open to the public. |
| `isRoutable` | bool | false | When this property is set to `false`, you can't go to or through the unit. The default value is `true`. |
-| `isOpenArea` | bool | false | Allows the navigating agent to enter the unit without the need for an opening attached to the unit. By default, this value is set to `true` for units with no openings, and `false` for units with openings. Manually setting `isOpenArea` to `false` on a unit with no openings results in a warning, because the resulting unit won't be reachable by a navigating agent.|
+| `isOpenArea` | bool | false | Allows the navigating agent to enter the unit without the need for an opening attached to the unit. By default, this value is set to `true` for units with no openings, and `false` for units with openings. Manually setting `isOpenArea` to `false` on a unit with no openings results in a warning, because the resulting unit isn't reachable by a navigating agent.|
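For illustration, a single `unitProperties` entry using the properties in the table above might look like the following. It's shown here as a TypeScript object; in the drawing package it lives in manifest.json, and the values are placeholders.

```typescript
// Sketch: one unitProperties entry. unitName must match the unit label text in the DWG file.
const elevatorUnit = {
  unitName: 'ELEV_01',
  verticalPenetrationCategory: 'elevator',
  verticalPenetrationDirection: 'both',
  nonPublic: false,
  isRoutable: true
};
```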
### `zoneProperties`
The `zoneProperties` object contains a JSON array of zone properties.
### Sample drawing package manifest
-Below is the manifest file for the sample drawing package. Go to the [Sample drawing package] for Azure Maps Creator on GitHub to download the entire package.
+The following is the manifest file for the sample drawing package. Go to the [Sample drawing package] for Azure Maps Creator on GitHub to download the entire package.
#### Manifest file
One or more DWG layer(s) can be mapped to a user defined feature class. One inst
Text entities that fall within the bounds of a closed shape can be associated to that feature as a property. For example, a room feature class might have text that describes the room name and another the room type [sample drawing package v2]. Additionally: -- Only TEXT and MTEXT entities will be associated to the feature as a property. All other entity types will be ignored.
+- Only TEXT and MTEXT entities are associated to the feature as a property. All other entity types are ignored.
- The TEXT and MTEXT justification point must fall within the bounds of the closed shape.-- If more than one TEXT property is within the bounds of the closed shape and both are mapped to one property, one will be randomly selected.
+- If more than one TEXT property is within the bounds of the closed shape and both are mapped to one property, one is randomly selected.
### Facility level
azure-maps Drawing Tools Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/drawing-tools-events.md
Title: Drawing tool events | Microsoft Azure Maps
-description: In this article you'll learn, how to add a drawing toolbar to a map using Microsoft Azure Maps Web SDK
+description: This article demonstrates how to add a drawing toolbar to a map using Microsoft Azure Maps Web SDK
Last updated 12/05/2019
When using drawing tools on a map, it's useful to react to certain events as the
| Event | Description |
|-|-|
-| `drawingchanged` | Fired when any coordinate in a shape has been added or changed. |
-| `drawingchanging` | Fired when any preview coordinate for a shape is being displayed. For example, this event will fire multiple times as a coordinate is dragged. |
+| `drawingchanged` | Fired when any coordinate in a shape has been added or changed. |
+| `drawingchanging` | Fired when any preview coordinate for a shape is being displayed. For example, this event fires multiple times as a coordinate is dragged. |
| `drawingcomplete` | Fired when a shape has finished being drawn or taken out of edit mode. |
| `drawingerased` | Fired when a shape is erased from the drawing manager when in `erase-geometry` mode. |
| `drawingmodechanged` | Fired when the drawing mode has changed. The new drawing mode is passed into the event handler. |
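As a minimal sketch (assuming the Azure Maps Web SDK and the drawing tools module are loaded and expose the global `atlas` namespace, and using placeholder element and key values), subscribing to these events looks roughly like this:

```typescript
// Sketch: wire up drawing events with the drawing tools module.
declare const atlas: any;

const map = new atlas.Map('myMap', {
  authOptions: { authType: 'subscriptionKey', subscriptionKey: '<your-subscription-key>' }
});

map.events.add('ready', () => {
  const drawingManager = new atlas.drawing.DrawingManager(map, {
    toolbar: new atlas.control.DrawingToolbar({ position: 'top-right' })
  });

  // Fires repeatedly while a preview coordinate is dragged.
  map.events.add('drawingchanging', drawingManager, (shape: any) => {
    // Recalculate live measurements here, for example.
  });

  // Fires once a shape is finished or taken out of edit mode.
  map.events.add('drawingcomplete', drawingManager, (shape: any) => {
    console.log('Completed shape:', shape.toJson());
  });
});
```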
This code searches for points of interests inside the area of a shape after the
### Create a measuring tool
-The code below shows how the drawing events can be used to create a measuring tool. The `drawingchanging` is used to monitor the shape, as it's being drawn. As the user moves the mouse, the dimensions of the shape are calculated. The `drawingcomplete` event is used to do a final calculation on the shape after it has been drawn. The `drawingmodechanged` event is used to determine when the user is switching into a drawing mode. Also, the `drawingmodechanged` event clears the drawing canvas and clears old measurement information.
+The following code shows how the drawing events can be used to create a measuring tool. The `drawingchanging` is used to monitor the shape, as it's being drawn. As the user moves the mouse, the dimensions of the shape are calculated. The `drawingcomplete` event is used to do a final calculation on the shape after it has been drawn. The `drawingmodechanged` event is used to determine when the user is switching into a drawing mode. Also, the `drawingmodechanged` event clears the drawing canvas and clears old measurement information.
<br/>
The code below shows how the drawing events can be used to create a measuring to
## Next steps
-Learn how to use additional features of the drawing tools module:
+Learn how to use other features of the drawing tools module:
> [!div class="nextstepaction"] > [Get shape data](map-get-shape-data.md)
azure-maps How To Create Custom Styles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-create-custom-styles.md
When you create an indoor map using Azure Maps Creator, default styles are appli
## Create custom styles using Creators visual editor
-While it's possible to modify your indoor maps styles using [Creators Rest API], Creator also offers a [visual style editor][style editor] to create custom styles that doesn't require coding. This article will focus exclusively on creating custom styles using this style editor.
+While it's possible to modify your indoor maps styles using [Creators Rest API], Creator also offers a [visual style editor][style editor] to create custom styles that doesn't require coding. This article focuses exclusively on creating custom styles using this style editor.
### Open style
-When an indoor map is created in your Azure Maps Creator service, default styles are automatically created for you. In order to customize the styling elements of your indoor map, you'll need to open that default style.
+When an indoor map is created in your Azure Maps Creator service, default styles are automatically created for you. In order to customize the styling elements of your indoor map, open that default style.
Open the [style editor] and select the **Open** toolbar button.
Select the **Get map configuration list** button to get a list of every map conf
:::image type="content" source="./media/creator-indoor-maps/style-editor/select-the-map-configuration.png" alt-text="A screenshot of the open style dialog box in the visual style editor with the Select map configuration drop-down list highlighted."::: > [!NOTE]
-> If the map configuration was created as part of a custom style and has a user provided alias, that alias will appear in the map configuration drop-down list, otherwise the `mapConfigurationId` will appear. The default map configuration ID for any given tileset can be found by using the [tileset get] HTTP request and passing in the tileset ID:
+> If the map configuration was created as part of a custom style and has a user provided alias, that alias appears in the map configuration drop-down list, otherwise the `mapConfigurationId` appears. The default map configuration ID for any given tileset can be found by using the [tileset get] HTTP request and passing in the tileset ID:
> > ```http > https://{geography}.atlas.microsoft.com/tilesets/{tilesetId}?2022-09-01-preview
Select the **Get map configuration list** button to get a list of every map conf
> "defaultMapConfigurationId": "68d74ad9-4f84-99ce-06bb-19f487e8e692" > ```
-Once the map configuration drop-down list is populated with the IDs of all the map configurations in your creator resource, select the desired map configuration, then the drop-down list of style + tileset tuples will appear. The *style + tileset* tuples consists of the style alias or ID, followed by the plus (**+**) sign then the `tilesetId`.
+Once the map configuration drop-down list is populated with the IDs of all the map configurations in your creator resource, select the desired map configuration, then the drop-down list of style + tileset tuples appears. The *style + tileset* tuples consist of the style alias or ID, followed by the plus (**+**) sign, then the `tilesetId`.
Once you've selected the desired style, select the **Load selected style** button.
Once you've selected the desired style, select the **Load selected style** butto
|||
| 1 | Your Azure Maps account [subscription key] |
| 2 | Select the geography of the Azure Maps account. |
-| 3 | A list of map configuration aliases. If a given map configuration has no alias, the `mapConfigurationId` will be shown instead. |
-| 4 | This value is created from a combination of the style and tileset. If the style has as alias it will be shown, if not the `styleId` will be shown. The `tilesetId` will always be shown for the tileset value. |
+| 3 | A list of map configuration aliases. If a given map configuration has no alias, the `mapConfigurationId` is shown instead. |
+| 4 | This value is created from a combination of the style and tileset. If the style has an alias it's shown, if not the `styleId` is shown. The `tilesetId` is always shown for the tileset value. |
### Modify style
Once your style is open in the visual editor, you can begin to modify the variou
#### Change background color
-To change the background color for all units in the specified layer, put your mouse pointer over the desired unit and select it using the left mouse button. You’ll be presented with a popup menu showing the layers that are associated with the categories the unit is associated with. Once you select the layer that you wish to update the style properties on, you’ll see that layer ready to be updated in the left pane.
+To change the background color for all units in the specified layer, put your mouse pointer over the desired unit and select it using the left mouse button. You’re presented with a popup menu showing the layers that are associated with the categories the unit is associated with. Once you select the layer that you wish to update the style properties on, that layer is ready to be updated in the left pane.
:::image type="content" source="./media/creator-indoor-maps/style-editor/visual-editor-select-layer.png" alt-text="A screenshot of the unit layer pop-up dialog box in the visual style editor." lightbox="./media/creator-indoor-maps/style-editor/visual-editor-select-layer.png":::
Open the color palette and select the color you wish to change the selected unit
#### Base map
-The base map drop-down list on the visual editor toolbar presents a list of base map styles that affect the style attributes of the base map that your indoor map is part of. It will not affect the style elements of your indoor map but will enable you to see how your indoor map will look with the various base maps.
+The base map drop-down list on the visual editor toolbar presents a list of base map styles that affect the style attributes of the base map that your indoor map is part of. It doesn't affect the style elements of your indoor map but enables you to see how your indoor map looks with the various base maps.
:::image type="content" source="./media/creator-indoor-maps/style-editor/base-map-menu.png" alt-text="A screenshot of the base maps drop-down list in the visual editor toolbar.":::
To save your changes, select the **Save** button on the toolbar.
:::image type="content" source="./media/creator-indoor-maps/style-editor/save-menu.png" alt-text="A screenshot of the save menu in the visual style editor.":::
-The will bring up the **Upload style & map configuration** dialog box:
+This brings up the **Upload style & map configuration** dialog box:
:::image type="content" source="./media/creator-indoor-maps/style-editor/upload-style-map-config.png" alt-text="A screenshot of the upload style and map configuration dialog box in the visual style editor.":::
The following table describes the four fields you're presented with.
| Property | Description |
|-|-|
| Style description | A user-defined description for this style. |
-| Style alias | An alias that can be used to reference this style.<BR>When referencing programmatically, the style will need to be referenced by the style ID if no alias is provided. |
+| Style alias | An alias that can be used to reference this style.<BR>When referencing programmatically, the style is referenced by the style ID if no alias is provided. |
| Map configuration description | A user-defined description for this map configuration. |
-| Map configuration alias | An alias used to reference this map configuration.<BR>When referencing programmatically, the map configuration will need to be referenced by the map configuration ID if no alias is provided. |
+| Map configuration alias | An alias used to reference this map configuration.<BR>When referencing programmatically, the map configuration is referenced by the map configuration ID if no alias is provided. |
Some important things to know about aliases: 1. Can be named using alphanumeric characters (0-9, a-z, A-Z), hyphens (-) and underscores (_).
-1. Can be used to reference the underlying object, whether a style or map configuration, in place of that object's ID. This is especially important since neither the style or map configuration can be updated, meaning every time any changes are saved, a new ID is generated, but the alias can remain the same, making referencing it less error prone after it has been modified multiple times.
+1. Can be used to reference the underlying object, whether a style or map configuration, in place of that object's ID. This is especially important since the style and map configuration can't be updated, meaning every time any changes are saved, a new ID is generated, but the alias can remain the same, making referencing it less error prone after it has been modified multiple times.
> [!WARNING]
> Duplicate aliases are not allowed. If the alias of an existing style or map configuration is used, the style or map configuration that alias points to will be overwritten and the existing style or map configuration will be deleted and references to that ID will result in errors. See [map configuration] in the concepts article for more information.
Once you have entered values into each required field, select the **Upload map c
Azure Maps Creator has defined a list of [categories]. When you create your [manifest], you associate each unit in your facility to one of these categories in the [unitProperties] object.
-There may be times when you want to create a new category. For example, you may want the ability to apply different styling attributes to all rooms with special accommodations for people with disabilities like a phone room with phones that have screens showing what the caller is saying for those with hearing impairments.
+There may be times when you want to create a new category. For example, you may want to apply different styling attributes to all rooms with special accommodations for people with disabilities, such as a phone room with phones whose screens show what the caller is saying, for people with hearing impairments.
To do this, enter the desired value in the `categoryName` for the desired `unitName` in the manifest JSON before uploading your drawing package. :::image type="content" source="./media/creator-indoor-maps/style-editor/category-name.png" alt-text="A screenshot showing the custom category name in the manifest.":::
-Once opened in the visual editor, you'll notice that this category name isn't associated with any layer and has no default styling. In order to apply styling to it, you'll need to create a new layer and add the new category to it.
+The category name isn't associated with any layer when viewed in a visual editor and has no default styling. In order to apply styling to it, create a new layer and add the new category to it.
:::image type="content" source="./media/creator-indoor-maps/style-editor/category-name-changed.png" alt-text="A screenshot showing the difference in the layers that appear after changing the category name in the manifest.":::
For example, the filter JSON might look something like this:
] ```
-Now when you select that unit in the map, the pop-up menu will have the new layer ID, which if following this example would be `indoor_unit_room_accessible`. Once selected you can make style edits.
+Now when you select that unit in the map, the pop-up menu has the new layer ID, which if following this example would be `indoor_unit_room_accessible`. Once selected you can make style edits.
:::image type="content" source="./media/creator-indoor-maps/style-editor/custom-category-name-complete.png" alt-text="A screenshot of the pop-up menu showing the new layer appearing when the phone 11 unit is selected.":::
azure-maps How To Create Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-create-template.md
Title: Create your Azure Maps account using an Azure Resource Manager template in Azure Maps description: Learn how to create an Azure Maps account using an Azure Resource Manager template.--++ Last updated 04/27/2021
You can create your Azure Maps account using an Azure Resource Manager (ARM) tem
[!INCLUDE [About Azure Resource Manager](../../includes/resource-manager-quickstart-introduction.md)]
-If your environment meets the prerequisites and you're familiar with using ARM templates, select the **Deploy to Azure** button. The template will open in the Azure portal.
+If your environment meets the prerequisites and you're familiar with using ARM templates, select the **Deploy to Azure** button. The template opens in the Azure portal.
[![Deploy to Azure](../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.maps%2Fmaps-create%2Fazuredeploy.json)
If your environment meets the prerequisites and you're familiar with using ARM t
To complete this article:
-* If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+* If you don't have an Azure subscription, create a [free account] before you begin.
## Review the template
-The template used in this quickstart is from [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/maps-create/).
+The template used in this quickstart is from [Azure Quickstart Templates].
:::code language="json" source="~/quickstart-templates/quickstarts/microsoft.maps/maps-create/azuredeploy.json":::
The Azure Maps account resource is defined in this template:
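As an illustrative sketch only, an Azure Maps account resource in an ARM template generally takes the following shape. The `apiVersion` and parameter names are assumptions; the linked quickstart template is the authoritative source:

```json
{
  "type": "Microsoft.Maps/accounts",
  "apiVersion": "2021-02-01",
  "name": "[parameters('accountName')]",
  "location": "[parameters('location')]",
  "sku": {
    "name": "[parameters('pricingTier')]"
  }
}
```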
Unless otherwise specified, use the default values to create your Azure Maps account. * **Subscription**: select an Azure subscription.
- * **Resource group**: select **Create new**, enter a unique name for the resource group, and then click **OK**.
+ * **Resource group**: select **Create new**, enter a unique name for the resource group, and then select **OK**.
 * **Location**: select a location. * **Account Name**: enter a name for your Azure Maps account, which must be globally unique. * **Pricing Tier**: select the appropriate pricing tier; the default value for the template is S0. 3. Select **Review + create**.
-4. Confirm your settings on the review page and click **Create**. After your Azure Maps has been deployed successfully, you get a notification:
+4. Confirm your settings on the review page and select **Create**. Once deployed successfully, you get a notification:
![ARM template deploy portal notification](./media/how-to-create-template/resource-manager-template-portal-deployment-notification.png)
-The Azure portal is used to deploy your template. You can also use the Azure PowerShell, Azure CLI, and REST API. To learn other deployment methods, see [Deploy templates](../azure-resource-manager/templates/deploy-powershell.md).
+The Azure portal is used to deploy your template. You can also use the Azure PowerShell, Azure CLI, and REST API. To learn other deployment methods, see [Deploy templates].
## Review deployed resources
az group delete --name MyResourceGroup
## Next steps
-To learn more about Azure Maps and Azure Resource Manager, continue on to the articles below.
+To learn more about Azure Maps and Azure Resource Manager, see the following articles:
-- Create an Azure Maps [demo application](quick-demo-map-app.md)-- Learn more about [ARM templates](../azure-resource-manager/templates/overview.md)
+* Create an Azure Maps [demo application]
+* Learn more about [ARM templates]
+
+[free account]: https://azure.microsoft.com/free/?WT.mc_id=A261C142F
+[Azure Quickstart Templates]: https://azure.microsoft.com/resources/templates/maps-create
+[demo application]: quick-demo-map-app.md
+[ARM templates]: ../azure-resource-manager/templates/overview.md
+[Deploy templates]: ../azure-resource-manager/templates/deploy-powershell.md
azure-maps How To Creator Feature Stateset https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-creator-feature-stateset.md
If using a tool like [Postman], it should look like this:
:::image type="content" source="./media/tutorial-creator-indoor-maps/stateset-header.png"alt-text="A screenshot of Postman showing the Header tab of the POST request that shows the Content Type Key with a value of application forward slash json.":::
-Finally, in the **Body** of the HTTP request, include the style information in raw JSON format, this applies different colors to the `occupied` property depending on its value:
+Finally, in the **Body** of the HTTP request, include the style information in raw JSON format, which applies different colors to the `occupied` property depending on its value:
```json {
Finally, in the **Body** of the HTTP request, include the style information in r
} ```
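As a rough sketch of that request body, a boolean style keyed on `occupied` might look like the following. The colors and exact schema details are illustrative; see the [Feature Statesets API] reference for the authoritative shape:

```json
{
  "styles": [
    {
      "keyName": "occupied",
      "type": "boolean",
      "rules": [
        {
          "true": "#FF0000",
          "false": "#00FF00"
        }
      ]
    }
  ]
}
```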
-After the response returns successfully, copy the `statesetId` from the response body. In the next section, you'll use the `statesetId` to change the `occupancy` property state of the unit with feature `id` "UNIT26". If using Postman, it will appear as follows:
+After the response returns successfully, copy the `statesetId` from the response body. In the next section, you'll use the `statesetId` to change the `occupancy` property state of the unit with feature `id` "UNIT26". If using Postman, it appears as follows:
:::image type="content" source="./media/tutorial-creator-indoor-maps/response-stateset-id.png"alt-text="A screenshot of Postman showing the resource Stateset ID value in the responses body."::: ## Update a feature state
-In this section you will learn how to update the `occupied` state of the unit with feature `id` "UNIT26". To do this, create a new **HTTP PUT Request** calling the [Feature Statesets API]. The request should look like the following URL (replace `{statesetId}` with the `statesetId` obtained in [Create a feature stateset](#create-a-feature-stateset)):
+This section demonstrates how to update the `occupied` state of the unit with feature `id` "UNIT26". To update the `occupied` state, create a new **HTTP PUT Request** calling the [Feature Statesets API]. The request should look like the following URL (replace `{statesetId}` with the `statesetId` obtained in [Create a feature stateset]):
```http https://us.atlas.microsoft.com/featurestatesets/{statesetId}/featureStates/UNIT26?api-version=2.0&subscription-key={Your-Azure-Maps-Subscription-key}
If using a tool like [Postman], it should look like this:
:::image type="content" source="./media/tutorial-creator-indoor-maps/stateset-header.png"alt-text="A screenshot of the header tab information for stateset creation.":::
-Finally, in the **Body** of the HTTP request, include the style information in raw JSON format, this applies different colors to the `occupied` property depending on its value:
+Finally, in the **Body** of the HTTP request, include the style information in raw JSON format, which applies different colors to the `occupied` property depending on its value:
```json {
Finally, in the **Body** of the HTTP request, include the style information in r
>[!NOTE] > The update is saved only if the posted time stamp is after the time stamp of the previous request.
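As a rough sketch of the update request body, it might resemble the following. The field names follow the feature state update pattern and the timestamp value is illustrative; see the [Feature Statesets API] reference for the authoritative shape. The `eventTimestamp` is the posted time stamp that the preceding note refers to:

```json
{
  "states": [
    {
      "keyName": "occupied",
      "value": true,
      "eventTimestamp": "2023-04-13T18:00:00Z"
    }
  ]
}
```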
-Once the HTTP request is sent and the update completes, you'll receive a `200 OK` HTTP status code. If you implemented [dynamic styling] for an indoor map, the update displays at the specified time stamp in your rendered map.
+Once the HTTP request is sent and the update completes, you receive a `200 OK` HTTP status code. If you implemented [dynamic styling] for an indoor map, the update displays at the specified time stamp in your rendered map.
## Additional information
-* For information on how to retrieve the state of a feature using its feature id, see [Feature State - List States].
+* For information on how to retrieve the state of a feature using its feature ID, see [Feature State - List States].
* For information on how to delete the stateset and its resources, see [Feature State - Delete Stateset]. * For information on using the Azure Maps Creator [Feature State service] to apply styles that are based on the dynamic properties of indoor map data features, see how to article [Implement dynamic styling for Creator indoor maps].
Learn how to implement dynamic styling for indoor maps.
> [!div class="nextstepaction"] > [dynamic styling]
+<! Internal Links >
+[Create a feature stateset]: #create-a-feature-stateset
+
+<! learn.microsoft.com links >
[Access to Creator Services]: how-to-manage-creator.md#access-to-creator-services
-[Query datasets with WFS API]: how-to-creator-wfs.md
-[Stateset API]: /rest/api/maps/v2/feature-state/create-stateset
-[Feature Statesets API]: /rest/api/maps/v2/feature-state/create-stateset
-[Feature statesets]: /rest/api/maps/v2/feature-state
[Check the dataset creation status]: tutorial-creator-indoor-maps.md#check-the-dataset-creation-status
+[Creator Indoor Maps]: creator-indoor-maps.md
[dynamic styling]: indoor-map-dynamic-styling.md
-[Feature State - List States]: /rest/api/maps/v2/feature-state/list-states
-[Feature State - Delete Stateset]: /rest/api/maps/v2/feature-state/delete-stateset
-[Feature State service]: /rest/api/maps/v2/feature-state
[Implement dynamic styling for Creator indoor maps]: indoor-map-dynamic-styling.md
-[Creator Indoor Maps]: creator-indoor-maps.md
+[Query datasets with WFS API]: how-to-creator-wfs.md
+
+<! External Links >
[Postman]: https://www.postman.com/
+<! REST API Links >
+[Feature State - Delete Stateset]: /rest/api/maps/v2/feature-state/delete-stateset
+[Feature State - List States]: /rest/api/maps/v2/feature-state/list-states
+[Feature State service]: /rest/api/maps/v2/feature-state
+[Feature Statesets API]: /rest/api/maps/v2/feature-state/create-stateset
+[Feature statesets]: /rest/api/maps/v2/feature-state
+[Stateset API]: /rest/api/maps/v2/feature-state/create-stateset
azure-maps How To Creator Wayfinding https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-creator-wayfinding.md
The Azure Maps Creator [wayfinding service] allows you to navigate from place to
A [routeset] is a collection of indoor map data that is used by the wayfinding service.
-A routeset is created from a dataset, but is independent from that dataset. This means that if the dataset is deleted, the routeset continues to exist.
+A routeset is created from a dataset. The routeset is independent from the dataset, meaning if the dataset is deleted, the routeset continues to exist.
Once you've created a routeset, you can then use the wayfinding API to get a path from the starting point to the destination point within the facility.
To create a routeset:
1. Copy the value of the **Operation-Location** key from the response header.
-This is the status URL that you'll use to check the status of the routeset creation in the next section.
+The **Operation-Location** key is the status URL used to check the status of the routeset creation as demonstrated in the next section.
### Check the routeset creation status and retrieve the routesetId
To check the status of the routeset creation process and retrieve the routesetId
> [!NOTE] > Get the `operationId` from the Operation-Location key in the response header when creating a new routeset.
-1. Copy the value of the **Resource-Location** key from the responses header. This is the resource location URL and contains the `routesetId`, as shown below:
+1. Copy the value of the **Resource-Location** key from the response header. It's the resource location URL and contains the `routesetId`:
> https://us.atlas.microsoft.com/routesets/**675ce646-f405-03be-302e-0d22bcfe17e8**?api-version=2022-09-01-preview
-Make a note of the `routesetId`, it will be required parameter in all [wayfinding](#get-a-wayfinding-path) requests, and when your [Get the facility ID](#get-the-facility-id).
+Make a note of the `routesetId`. It's required in all [wayfinding](#get-a-wayfinding-path) requests and when you [Get the facility ID].
### Get the facility ID
The `facilityId`, a property of the routeset, is a required parameter when searc
## Get a wayfinding path
-In this section, you'll use the [wayfinding API] to generate a path from the routeset you created in the previous section. The wayfinding API requires a query that contains start and end points in an indoor map, along with floor level ordinal numbers. For more information about Creator wayfinding, see [wayfinding] in the concepts article.
+Use the [wayfinding API] to generate a path from the routeset you created in the previous section. The wayfinding API requires a query that contains start and end points in an indoor map, along with floor level ordinal numbers. For more information about Creator wayfinding, see [wayfinding] in the concepts article.
To create a wayfinding query:
-1. Execute the following **HTTP GET request** (replace {routesetId} with the routesetId obtained in the [Check the routeset creation status](#check-the-routeset-creation-status-and-retrieve-the-routesetid) section and the {facilityId} with the facilityId obtained in the [Get the facility ID](#get-the-facility-id) section):
+1. Execute the following **HTTP GET request** (replace {routesetId} with the routesetId obtained in the [Check the routeset creation status] section and the {facilityId} with the facilityId obtained in the [Get the facility ID] section):
```http https://us.atlas.microsoft.com/wayfinding/path?api-version=2022-09-01-preview&subscription-key={Your-Azure-Maps-Subscription-key}&routesetid={routeset-ID}&facilityid={facility-ID}&fromPoint={lat,lon}&fromLevel={from-level}&toPoint={lat,lon}&toLevel={to-level}&minWidth={minimun-width}
The wayfinding service calculates the path through specific intervening points.
<!-- TODO: ## Implement the wayfinding service in your map (Refer to sample app once completed) -->
+<! Internal Links >
+[Check the routeset creation status]: #check-the-routeset-creation-status-and-retrieve-the-routesetid
+[Get the facility ID]: #get-the-facility-id
+<! learn.microsoft.com links >
+[Access to Creator services]: how-to-manage-creator.md#access-to-creator-services
+[Check the dataset creation status]: tutorial-creator-indoor-maps.md#check-the-dataset-creation-status
[Creator concepts]: creator-indoor-maps.md [dataset]: creator-indoor-maps.md#datasets [tileset]: creator-indoor-maps.md#tilesets
-[routeset]: /rest/api/maps/v20220901preview/routeset
+[Use Creator to create indoor maps]: tutorial-creator-indoor-maps.md
+[wayfinding service]: creator-indoor-maps.md#wayfinding-preview
[wayfinding]: creator-indoor-maps.md#wayfinding-preview
+<! REST API Links >
+[routeset]: /rest/api/maps/v20220901preview/routeset
[wayfinding API]: /rest/api/maps/v20220901preview/wayfinding
-[Access to Creator services]: how-to-manage-creator.md#access-to-creator-services
-[Check the dataset creation status]: tutorial-creator-indoor-maps.md#check-the-dataset-creation-status
-[wayfinding service]: creator-indoor-maps.md#wayfinding-preview
-[Use Creator to create indoor maps]: tutorial-creator-indoor-maps.md
azure-maps How To Creator Wfs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-creator-wfs.md
The response body is returned in GeoJSON format and contains all collections in
## Query for unit feature collection
-In this section, you'll query [WFS API] for the `unit` feature collection.
+This section demonstrates querying the [WFS API] for the `unit` feature collection.
To query the unit collection in your dataset, create a new **HTTP GET Request**:
To query the unit collection in your dataset, create a new **HTTP GET Request**:
https://us.atlas.microsoft.com/wfs/datasets/{datasetId}/collections/unit/items?subscription-key={Your-Azure-Maps-Subscription-key}&api-version=2.0 ```
-After the response returns, copy the feature `id` for one of the `unit` features. In the following example, the feature `id` is "UNIT26". You'll use "UNIT26" as your feature `id` when you [Update a feature state].
+After the response returns, copy the feature `id` for one of the `unit` features. In the following example, the feature `id` is "UNIT26". Use "UNIT26" as your feature `id` when you [Update a feature state].
```json {
azure-maps How To Dataset Geojson https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-dataset-geojson.md
https://us.atlas.microsoft.com/mapData/operations/{operationId}?api-version=2.0&
### Create a dataset
-A dataset is a collection of map features, such as buildings, levels, and rooms. To create a dataset from your GeoJSON, use the new [Dataset Create API][Dataset Create 2022-09-01-preview]. The Dataset Create API takes the `udid` you got in the previous section and returns the `datasetId` of the new dataset.
+A dataset is a collection of map features, such as buildings, levels, and rooms. To create a dataset from your GeoJSON, use the new [Dataset Create API]. The Dataset Create API takes the `udid` you got in the previous section and returns the `datasetId` of the new dataset.
> [!IMPORTANT] > This is different from the [previous version][Dataset Create] in that it doesn't require a `conversionId` from a converted drawing package.
See [Next steps](#next-steps) for links to articles to help you complete your in
## Add data to an existing dataset
-Data can be added to an existing dataset by providing the `datasetId` parameter to the [dataset create API][Dataset Create 2022-09-01-preview] along with the unique identifier of the data you wish to add. The unique identifier can be either a `udid` or `conversionId`. This creates a new dataset consisting of the data (facilities) from both the existing dataset and the new data being imported. Once the new dataset has been created successfully, the old dataset can be deleted.
+Data can be added to an existing dataset by providing the `datasetId` parameter to the [Dataset Create API] along with the unique identifier of the data you wish to add. The unique identifier can be either a `udid` or `conversionId`. This creates a new dataset consisting of the data (facilities) from both the existing dataset and the new data being imported. Once the new dataset has been created successfully, the old dataset can be deleted.
One thing to consider when adding to an existing dataset is how the feature IDs are created. If a dataset is created from a converted drawing package, the feature IDs are generated automatically. When a dataset is created from a GeoJSON package, feature IDs must be provided in the GeoJSON file. When appending to an existing dataset, the original dataset determines how feature IDs are created. If the original dataset was created using a `udid`, it uses the IDs from the GeoJSON and continues to do so for all GeoJSON packages appended to that dataset in the future. If the dataset was created using a `conversionId`, IDs are generated internally and continue to be generated internally for all GeoJSON packages appended to that dataset in the future.
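For illustration only, the following is a minimal GeoJSON feature with an explicit `id` member of the kind a `udid`-based dataset preserves. The ID value, coordinates, and empty `properties` object are assumptions for this sketch; a real feature must also satisfy the facility ontology requirements described later in this article:

```json
{
  "type": "Feature",
  "id": "UNIT.OFFICE-101_A",
  "geometry": {
    "type": "Polygon",
    "coordinates": [
      [
        [ -122.13275, 47.63675 ],
        [ -122.13275, 47.63685 ],
        [ -122.13265, 47.63685 ],
        [ -122.13275, 47.63675 ]
      ]
    ]
  },
  "properties": {}
}
```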
https://us.atlas.microsoft.com/datasets?api-version=2022-09-01-preview&conversio
| Identifier | Description | |--|-| | conversionId | The ID returned when converting your drawing package. For more information, see [Convert a drawing package]. |
-| datasetId | The dataset ID returned when creating the original dataset from a GeoJSON package). |
+| datasetId | The dataset ID returned when creating the original dataset from a GeoJSON package. |
## Geojson zip package requirements
Feature IDs can only contain alpha-numeric (a-z, A-Z, 0-9), hyphen (-), dot (.)
### Facility ontology 2.0 validations in the Dataset
-[Facility ontology] defines how Azure Maps Creator internally stores facility data, divided into feature classes, in a Creator dataset. When importing a GeoJSON package, anytime a feature is added or modified, a series of validations run. This includes referential integrity checks and geometry and attribute validations. These validations are described in more detail below.
+[Facility ontology] defines how Azure Maps Creator internally stores facility data, divided into feature classes, in a Creator dataset. When importing a GeoJSON package, anytime a feature is added or modified, a series of validations run. This includes referential integrity checks and geometry and attribute validations. These validations are described in more detail in the following list.
- The maximum number of features that can be imported into a dataset at a time is 150,000. - The facility area can be between 4 and 4,000 Sq Km.
Feature IDs can only contain alpha-numeric (a-z, A-Z, 0-9), hyphen (-), dot (.)
> [!div class="nextstepaction"] > [Create a tileset]
-[Data Upload API]: /rest/api/maps/data-v2/upload
-[Creator Long-Running Operation API V2]: creator-long-running-operation-v2.md
+<! learn.microsoft.com links >
[Access to Creator services]: how-to-manage-creator.md#access-to-creator-services
-[Contoso building sample]: https://github.com/Azure-Samples/am-creator-indoor-data-examples
-[units]: creator-facility-ontology.md?pivots=facility-ontology-v2#unit
-[structures]: creator-facility-ontology.md?pivots=facility-ontology-v2#structure
-[level]: creator-facility-ontology.md?pivots=facility-ontology-v2#level
-[facility]: creator-facility-ontology.md?pivots=facility-ontology-v2#facility
-[verticalPenetrations]: creator-facility-ontology.md?pivots=facility-ontology-v2#verticalpenetration
-[openings]: creator-facility-ontology.md?pivots=facility-ontology-v2#opening
[area]: creator-facility-ontology.md?pivots=facility-ontology-v2#areaelement
-[line]: creator-facility-ontology.md?pivots=facility-ontology-v2#lineelement
-[point]: creator-facility-ontology.md?pivots=facility-ontology-v2#pointelement
-
-[Convert a drawing package]: tutorial-creator-indoor-maps.md#convert-a-drawing-package
[Azure Maps account]: quick-demo-map-app.md#create-an-azure-maps-account
+[Convert a drawing package]: tutorial-creator-indoor-maps.md#convert-a-drawing-package
+[Create a tileset]: tutorial-creator-indoor-maps.md#create-a-tileset
+[Creator for indoor maps]: creator-indoor-maps.md
+[Creator Long-Running Operation API V2]: creator-long-running-operation-v2.md
[Creator resource]: how-to-manage-creator.md
-[Subscription key]: quick-demo-map-app.md#get-the-subscription-key-for-your-account
-[Facility Ontology 2.0]: creator-facility-ontology.md?pivots=facility-ontology-v2
-[RFC 7946]: https://www.rfc-editor.org/rfc/rfc7946.html
[dataset]: creator-indoor-maps.md#datasets
-[Dataset Create 2022-09-01-preview]: /rest/api/maps/v20220901preview/dataset/create
+[Facility Ontology 2.0]: creator-facility-ontology.md?pivots=facility-ontology-v2
+[facility]: creator-facility-ontology.md?pivots=facility-ontology-v2#facility
+[level]: creator-facility-ontology.md?pivots=facility-ontology-v2#level
+[line]: creator-facility-ontology.md?pivots=facility-ontology-v2#lineelement
+[openings]: creator-facility-ontology.md?pivots=facility-ontology-v2#opening
+[point]: creator-facility-ontology.md?pivots=facility-ontology-v2#pointelement
+[structures]: creator-facility-ontology.md?pivots=facility-ontology-v2#structure
+[Subscription key]: quick-demo-map-app.md#get-the-subscription-key-for-your-account
+[units]: creator-facility-ontology.md?pivots=facility-ontology-v2#unit
+[verticalPenetrations]: creator-facility-ontology.md?pivots=facility-ontology-v2#verticalpenetration
+<! REST API Links >
+[Data Upload API]: /rest/api/maps/data-v2/upload
+[Dataset Create API]: /rest/api/maps/v20220901preview/dataset/create
[Dataset Create]: /rest/api/maps/v2/dataset/create
+<! External Links >
+[Contoso building sample]: https://github.com/Azure-Samples/am-creator-indoor-data-examples
+[RFC 7946]: https://www.rfc-editor.org/rfc/rfc7946.html
[Visual Studio]: https://visualstudio.microsoft.com/downloads/
-[Creator for indoor maps]: creator-indoor-maps.md
-[Create a tileset]: tutorial-creator-indoor-maps.md#create-a-tileset
azure-maps Tutorial Creator Indoor Maps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/tutorial-creator-indoor-maps.md
# Tutorial: Use Creator to create indoor maps
-This tutorial describes how to create indoor maps for use in Microsoft Azure Maps. In this tutorial, you'll learn how to:
+This tutorial describes how to create indoor maps for use in Microsoft Azure Maps. This tutorial demonstrates how to:
> [!div class="checklist"] >
This tutorial uses the [Postman] application, but you can use a different API de
>[!IMPORTANT] > > * This article uses the `us.atlas.microsoft.com` geographical URL. If your Creator service wasn't created in the United States, you must use a different geographical URL. For more information, see [Access to Creator services].
-> * In the URL examples in this article you will need to replace `{Your-Azure-Maps-Subscription-key}` with your Azure Maps subscription key.
+> * Replace `{Your-Azure-Maps-Subscription-key}` with your Azure Maps subscription key in the URL examples.
## Upload a drawing package
To upload the drawing package:
11. Select **Select File**, and then select a drawing package.
- :::image type="content" source="./media/tutorial-creator-indoor-maps/data-upload-body.png" alt-text="A screenshot of Postman showing the body tab in the POST window, with Select File highlighted, this is used to select the drawing package to import into Creator.":::
+ :::image type="content" source="./media/tutorial-creator-indoor-maps/data-upload-body.png" alt-text="A screenshot of Postman showing the body tab in the POST window, with Select File highlighted, which is used to select the drawing package to import into Creator.":::
12. Select **Send**.
To check the status of the drawing package and retrieve its unique ID (`udid`):
4. Select the **GET** HTTP method.
-5. Enter the `status URL` you copied as the last step in the previous section of this article. The request should look like the following URL:
+5. Enter the `status URL` you copied as the last step in the previous section. The request should look like the following URL:
```http https://us.atlas.microsoft.com/mapData/operations/{operationId}?api-version=2.0&subscription-key={Your-Azure-Maps-Subscription-key}
To retrieve content metadata:
4. Select the **GET** HTTP method.
-5. Enter the `resource Location URL` you copied as the last step in the previous section of this article:
+5. Enter the `resource Location URL` you copied as the last step in the previous section:
```http https://us.atlas.microsoft.com/mapData/metadata/{udid}?api-version=2.0&subscription-key={Your-Azure-Maps-Subscription-key}
To retrieve content metadata:
## Convert a drawing package
-Now that the drawing package is uploaded, you'll use the `udid` for the uploaded package to convert the package into map data. The [Conversion API] uses a long-running transaction that implements the pattern defined in the [Creator Long-Running Operation] article.
+Now that the drawing package is uploaded, you use the `udid` for the uploaded package to convert the package into map data. The [Conversion API] uses a long-running transaction that implements the pattern defined in the [Creator Long-Running Operation] article.
To convert a drawing package:
To convert a drawing package:
7. In the response window, select the **Headers** tab.
-8. Copy the value of the **Operation-Location** key. This is the `status URL` that you'll use to check the status of the conversion.
+8. Copy the value of the **Operation-Location** key. It contains the `status URL` that you use to check the status of the conversion.
:::image type="content" source="./media/tutorial-creator-indoor-maps/data-convert-location-url.png" border="true" alt-text="A screenshot of Postman showing the URL value of the operation location key in the responses header.":::
To check the status of the conversion process and retrieve the `conversionId`:
7. In the response window, select the **Headers** tab.
-8. Copy the value of the **Resource-Location** key, which is the `resource location URL`. The `resource location URL` contains the unique identifier (`conversionId`), which can be used by other APIs to access the converted map data.
+8. Copy the value of the **Resource-Location** key, which is the `resource location URL`. The `resource location URL` contains the unique identifier (`conversionId`), which is used by other APIs to access the converted map data.
:::image type="content" source="./media/tutorial-creator-indoor-maps/data-conversion-id.png" alt-text="A screenshot of Postman highlighting the conversion ID value that appears in the resource location key in the responses header.":::
To create a dataset:
7. In the response window, select the **Headers** tab.
-8. Copy the value of the **Operation-Location** key. This is the `status URL` that you'll use to check the status of the dataset.
+8. Copy the value of the **Operation-Location** key. It contains the `status URL` that you use to check the status of the dataset.
:::image type="content" source="./media/tutorial-creator-indoor-maps/data-dataset-location-url.png" border="true" alt-text="A screenshot of Postman showing the value of the operation location key for dataset in the responses header.":::
To create a tileset:
4. Select the **POST** HTTP method.
-5. Enter the following URL to the [Tileset service]. The request should look like the following URL (replace `{datasetId`} with the `datasetId` obtained in the [Check the dataset creation status](#check-the-dataset-creation-status) section above:
+5. Enter the following URL to the [Tileset service]. The request should look like the following URL (replace `{datasetId}` with the `datasetId` obtained in the [Check the dataset creation status](#check-the-dataset-creation-status) section):
```http https://us.atlas.microsoft.com/tilesets?api-version=2023-03-01-preview&datasetID={datasetId}&subscription-key={Your-Azure-Maps-Primary-Subscription-key}
To create a tileset:
7. In the response window, select the **Headers** tab.
-8. Copy the value of the **Operation-Location** key, this is the `status URL`, which you'll use to check the status of the tileset.
+8. Copy the value of the **Operation-Location** key. It contains the `status URL`, which you use to check the status of the tileset.
:::image type="content" source="./media/tutorial-creator-indoor-maps/data-tileset-location-url.png" border="true" alt-text="A screenshot of Postman highlighting the status URL that is the value of the operation location key in the responses header.":::
Once your tileset creation completes, you can get the `mapConfigurationId` using
6. Select **Send**.
-7. The tileset JSON will appear in the body of the response, scroll down to see the `mapConfigurationId`:
+7. The tileset JSON appears in the body of the response. Scroll down to see the `mapConfigurationId`:
```json "defaultMapConfigurationId": "5906cd57-2dba-389b-3313-ce6b549d4396"
azure-monitor Annotations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/annotations.md
To create release annotations, install one of the many Azure DevOps extensions a
1. On the **Visual Studio Marketplace** [Release Annotations extension](https://marketplace.visualstudio.com/items/ms-appinsights.appinsightsreleaseannotations) page, select your Azure DevOps organization. Select **Install** to add the extension to your Azure DevOps organization.
- ![Screenshot that shows selecting an Azure DevOps organization and selecting Install.](./media/annotations/1-install.png)
+ :::image type="content" source="./media/annotations/1-install.png" lightbox="./media/annotations/1-install.png" alt-text="Screenshot that shows selecting an Azure DevOps organization and selecting Install.":::
You only need to install the extension once for your Azure DevOps organization. You can now configure release annotations for any project in your organization.
Create a separate API key for each of your Azure Pipelines release templates.
1. Open the **API Access** tab and copy the **Application Insights ID**.
- ![Screenshot that shows under API Access, copying the Application ID.](./media/annotations/2-app-id.png)
+ :::image type="content" source="./media/annotations/2-app-id.png" lightbox="./media/annotations/2-app-id.png" alt-text="Screenshot that shows under API Access, copying the Application ID.":::
1. In a separate browser window, open or create the release template that manages your Azure Pipelines deployments. 1. Select **Add task** and then select the **Application Insights Release Annotation** task from the menu.
- ![Screenshot that shows selecting Add Task and Application Insights Release Annotation.](./media/annotations/3-add-task.png)
+ :::image type="content" source="./media/annotations/3-add-task.png" lightbox="./media/annotations/3-add-task.png" alt-text="Screenshot that shows selecting Add Task and Application Insights Release Annotation.":::
> [!NOTE] > The Release Annotation task currently supports only Windows-based agents. It won't run on Linux, macOS, or other types of agents. 1. Under **Application ID**, paste the Application Insights ID you copied from the **API Access** tab.
- ![Screenshot that shows pasting the Application Insights ID.](./media/annotations/4-paste-app-id.png)
+ :::image type="content" source="./media/annotations/4-paste-app-id.png" lightbox="./media/annotations/4-paste-app-id.png" alt-text="Screenshot that shows pasting the Application Insights ID.":::
1. Back in the Application Insights **API Access** window, select **Create API Key**.
- ![Screenshot that shows selecting the Create API Key on the API Access tab.](./media/annotations/5-create-api-key.png)
+ :::image type="content" source="./media/annotations/5-create-api-key.png" lightbox="./media/annotations/5-create-api-key.png" alt-text="Screenshot that shows selecting the Create API Key on the API Access tab.":::
1. In the **Create API key** window, enter a description, select **Write annotations**, and then select **Generate key**. Copy the new key.
- ![Screenshot that shows in the Create API key window, entering a description, selecting Write annotations, and then selecting the Generate key.](./media/annotations/6-create-api-key.png)
+ :::image type="content" source="./media/annotations/6-create-api-key.png" lightbox="./media/annotations/6-create-api-key.png" alt-text="Screenshot that shows in the Create API key window, entering a description, selecting Write annotations, and then selecting the Generate key.":::
1. In the release template window, on the **Variables** tab, select **Add** to create a variable definition for the new API key. 1. Under **Name**, enter **ApiKey**. Under **Value**, paste the API key you copied from the **API Access** tab.
- ![Screenshot that shows in the Azure DevOps Variables tab, selecting Add, naming the variable ApiKey, and pasting the API key under Value.](./media/annotations/7-paste-api-key.png)
+ :::image type="content" source="./media/annotations/7-paste-api-key.png" lightbox="./media/annotations/7-paste-api-key.png" alt-text="Screenshot that shows in the Azure DevOps Variables tab, selecting Add, naming the variable ApiKey, and pasting the API key under Value.":::
1. Select **Save** in the main release template window to save the template.
azure-monitor Api Custom Events Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/api-custom-events-metrics.md
Telemetry items reported within a scope of operation become children of such an
In **Search**, the operation context is used to create the **Related Items** list.
-![Screenshot that shows the Related Items list.](./media/api-custom-events-metrics/21.png)
For more information on custom operations tracking, see [Track custom operations with Application Insights .NET SDK](./custom-operations-tracking.md).
azure-monitor App Insights Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/app-insights-overview.md
Last updated 03/22/2023
# Application Insights overview
-Application Insights is an extension of [Azure Monitor](../overview.md) and provides Application Performance Monitoring (also known as "APM") features. APM tools are useful to monitor applications from development, through test, and into production in the following ways:
+Application Insights is an extension of [Azure Monitor](../overview.md) and provides application performance monitoring (APM) features. APM tools are useful to monitor applications from development, through test, and into production in the following ways:
-1. *Proactively* understand how an application is performing.
-1. *Reactively* review application execution data to determine the cause of an incident.
+- *Proactively* understand how an application is performing.
+- *Reactively* review application execution data to determine the cause of an incident.
-In addition to collecting [Metrics](standard-metrics.md) and application [Telemetry](data-model-complete.md) data, which describe application activities and health, Application Insights can also be used to collect and store application [trace logging data](asp-net-trace-logs.md).
+Along with collecting [metrics](standard-metrics.md) and application [telemetry](data-model-complete.md) data, which describe application activities and health, you can use Application Insights to collect and store application [trace logging data](asp-net-trace-logs.md).
-The [log trace](asp-net-trace-logs.md) is associated with other telemetry to give a detailed view of the activity. Adding trace logging to existing apps only requires providing a destination for the logs; the logging framework rarely needs to be changed.
+The [log trace](asp-net-trace-logs.md) is associated with other telemetry to give a detailed view of the activity. Adding trace logging to existing apps only requires providing a destination for the logs. You rarely need to change the logging framework.
Application Insights provides other features including, but not limited to: -- [Live Metrics](live-stream.md) – observe activity from your deployed application in real time with no effect on the host environment-- [Availability](availability-overview.md) – also known as "Synthetic Transaction Monitoring", probe your applications external endpoint(s) to test the overall availability and responsiveness over time-- [GitHub or Azure DevOps integration](work-item-integration.md) – create [GitHub](/training/paths/github-administration-products/) or [Azure DevOps](/azure/devops/) work items in context of Application Insights data-- [Usage](usage-overview.md) – understand which features are popular with users and how users interact and use your application-- [Smart Detection](proactive-diagnostics.md) – automatic failure and anomaly detection through proactive telemetry analysis
+- [Live Metrics](live-stream.md): Observe activity from your deployed application in real time with no effect on the host environment.
+- [Availability](availability-overview.md): Also known as synthetic transaction monitoring. Probe the external endpoints of your applications to test the overall availability and responsiveness over time.
+- [GitHub or Azure DevOps integration](work-item-integration.md): Create [GitHub](/training/paths/github-administration-products/) or [Azure DevOps](/azure/devops/) work items in the context of Application Insights data.
+- [Usage](usage-overview.md): Understand which features are popular with users and how users interact and use your application.
+- [Smart detection](proactive-diagnostics.md): Detect failures and anomalies automatically through proactive telemetry analysis.
-In addition, Application Insights supports [Distributed Tracing](distributed-tracing.md), also known as "distributed component correlation". This feature allows [searching for](diagnostic-search.md) and [visualizing](transaction-diagnostics.md) an end-to-end flow of a given execution or transaction. The ability to trace activity end-to-end is increasingly important for applications that have been built as distributed components or [microservices](/azure/architecture/guide/architecture-styles/microservices).
+Application Insights supports [distributed tracing](distributed-tracing.md), which is also known as distributed component correlation. This feature allows [searching for](diagnostic-search.md) and [visualizing](transaction-diagnostics.md) an end-to-end flow of a specific execution or transaction. The ability to trace activity from end to end is important for applications that were built as distributed components or [microservices](/azure/architecture/guide/architecture-styles/microservices).
-The [Application Map](app-map.md) allows a high level top-down view of the application architecture and at-a-glance visual references to component health and responsiveness.
+The [Application Map](app-map.md) allows a high-level, top-down view of the application architecture and at-a-glance visual references to component health and responsiveness.
-To understand the number of Application Insights resources required to cover your Application or components across environments, see the [Application Insights deployment planning guide](separate-resources.md).
+To understand the number of Application Insights resources required to cover your application or components across environments, see the [Application Insights deployment planning guide](separate-resources.md).
## How do I use Application Insights?
-Application Insights is enabled through either [Auto-Instrumentation](codeless-overview.md) (agent) or by adding the [Application Insights SDK](sdk-support-guidance.md) to your application code. [Many languages](platforms.md) are supported and the applications could be on Azure, on-premises, or hosted by another cloud. To figure out which type of instrumentation is best for you, reference [How do I instrument an application?](#how-do-i-instrument-an-application).
+Application Insights is enabled through either [autoinstrumentation](codeless-overview.md) (agent) or by adding the [Application Insights SDK](sdk-support-guidance.md) to your application code. [Many languages](platforms.md) are supported. The applications could be on Azure, on-premises, or hosted by another cloud. To figure out which type of instrumentation is best for you, see [How do I instrument an application?](#how-do-i-instrument-an-application).
-The Application Insights agent or SDK pre-processes telemetry and metrics before sending the data to Azure where it's ingested and processed further before being stored in Azure Monitor Logs (Log Analytics). For this reason, an Azure account is required to use Application Insights.
+The Application Insights agent or SDK preprocesses telemetry and metrics before sending the data to Azure. Then it's ingested and processed further before it's stored in Azure Monitor Logs (Log Analytics). For this reason, an Azure account is required to use Application Insights.
-The easiest way to get started consuming Application insights is through the Azure portal and the built-in visual experiences. Advanced users can [query the underlying data](../logs/log-query-overview.md) directly to [build custom visualizations](tutorial-app-dashboards.md) through Azure Monitor [Dashboards](overview-dashboard.md) and [Workbooks](../visualize/workbooks-overview.md).
+The easiest way to get started consuming Application Insights is through the Azure portal and the built-in visual experiences. Advanced users can [query the underlying data](../logs/log-query-overview.md) directly to [build custom visualizations](tutorial-app-dashboards.md) through Azure Monitor [dashboards](overview-dashboard.md) and [workbooks](../visualize/workbooks-overview.md).
-Consider starting with the [Application Map](app-map.md) for a high level view. Use the [Search](diagnostic-search.md) experience to quickly narrow down telemetry and data by type and date-time, or search within data (for example Log Traces) and filter to a given correlated operation of interest.
+Consider starting with the [Application Map](app-map.md) for a high-level view. Use the [Search](diagnostic-search.md) experience to quickly narrow down telemetry and data by type and date-time. Or you can search within data (for example, with Log Traces) and filter to a given correlated operation of interest.
-Jump into analytics with [Performance view](tutorial-performance.md) – get deep insights into how your Application or API and downstream dependencies are performing and find for a representative sample to [explore end to end](transaction-diagnostics.md). And, be proactive with the [Failure view](tutorial-runtime-exceptions.md) – understand which components or actions are generating failures and triage errors and exceptions. The built-in views are helpful to track application health proactively and for reactive root-cause-analysis.
+Two views are especially useful:
-[Create Azure Monitor Alerts](tutorial-alert.md) to signal potential issues should your Application or components parts deviate from the established baseline.
+- [Performance view](tutorial-performance.md): Get deep insights into how your application or API and downstream dependencies are performing. You can also find a representative sample to [explore end to end](transaction-diagnostics.md).
+- [Failure view](tutorial-runtime-exceptions.md): Understand which components or actions are generating failures and triage errors and exceptions. The built-in views are helpful to track application health proactively and for reactive root-cause analysis.
-Application Insights pricing is consumption-based; you pay for only what you use. For more information on pricing, see the [Azure Monitor Pricing page](https://azure.microsoft.com/pricing/details/monitor/) and [how to optimize costs](../best-practices-cost.md).
+[Create Azure Monitor alerts](tutorial-alert.md) to signal potential issues in case your application or component parts deviate from the established baseline.
+
+Application Insights pricing is based on consumption. You only pay for what you use. For more information on pricing, see:
+
+- [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/)
+- [Optimize costs in Azure Monitor](../best-practices-cost.md)
## How do I instrument an application?
-[Auto-Instrumentation](codeless-overview.md) is the preferred instrumentation method. It requires no developer investment and eliminates future overhead related to [updating the SDK](sdk-support-guidance.md). It's also the only way to instrument an application in which you don't have access to the source code.
+[Autoinstrumentation](codeless-overview.md) is the preferred instrumentation method. It requires no developer investment and eliminates future overhead related to [updating the SDK](sdk-support-guidance.md). It's also the only way to instrument an application in which you don't have access to the source code.
-You only need to install the Application Insights SDK in the following circumstances:
+You only need to install the Application Insights SDK if:
-- You require [custom events and metrics](api-custom-events-metrics.md)-- You require control over the flow of telemetry-- [Auto-Instrumentation](codeless-overview.md) isn't available (typically due to language or platform limitations)
+- You require [custom events and metrics](api-custom-events-metrics.md).
+- You require control over the flow of telemetry.
+- [Autoinstrumentation](codeless-overview.md) isn't available, typically because of language or platform limitations.
-To use the SDK, you install a small instrumentation package in your app and then instrument the web app, any background components, and JavaScript within the web pages. The app and its components don't have to be hosted in Azure. The instrumentation monitors your app and directs the telemetry data to an Application Insights resource by using a unique token. The effect on your app's performance is small; tracking calls are non-blocking and batched to be sent in a separate thread.
+To use the SDK, you install a small instrumentation package in your app and then instrument the web app, any background components, and JavaScript within the webpages. The app and its components don't have to be hosted in Azure.
+
+The instrumentation monitors your app and directs the telemetry data to an Application Insights resource by using a unique token. The effect on your app's performance is small. Tracking calls are nonblocking and batched to be sent in a separate thread.
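For example, with the ASP.NET Core SDK that unique token is the connection string, and one common approach is to supply it through configuration. The following is a minimal sketch with a placeholder value, not an actual connection string:

```json
{
  "ApplicationInsights": {
    "ConnectionString": "InstrumentationKey=00000000-0000-0000-0000-000000000000;IngestionEndpoint=https://<region>.in.applicationinsights.azure.com/"
  }
}
```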
### [.NET](#tab/net)
-Integrated Auto-instrumentation is available for [Azure App Service .NET](azure-web-apps-net.md), [Azure App Service .NET Core](azure-web-apps-net-core.md), [Azure Functions](../../azure-functions/functions-monitoring.md), and [Azure Virtual Machines](azure-vm-vmss-apps.md).
+Integrated autoinstrumentation is available for [Azure App Service .NET](azure-web-apps-net.md), [Azure App Service .NET Core](azure-web-apps-net-core.md), [Azure Functions](../../azure-functions/functions-monitoring.md), and [Azure Virtual Machines](azure-vm-vmss-apps.md).
-[Azure Monitor Application Insights Agent](application-insights-asp-net-agent.md) is available for workloads running in on-premises virtual machines.
+The [Azure Monitor Application Insights agent](application-insights-asp-net-agent.md) is available for workloads running in on-premises virtual machines.
-A detailed view of all Auto-instrumentation supported environments, languages, and resource providers are available [here](codeless-overview.md#supported-environments-languages-and-resource-providers).
+For a detailed view of all autoinstrumentation supported environments, languages, and resource providers, see [What is autoinstrumentation for Azure Monitor Application Insights?](codeless-overview.md#supported-environments-languages-and-resource-providers).
For other scenarios, the [Application Insights SDK](/dotnet/api/overview/azure/insights) is required.
-A preview [Open Telemetry](opentelemetry-enable.md?tabs=net) offering is also available.
+A preview [OpenTelemetry](opentelemetry-enable.md?tabs=net) offering is also available.
### [Java](#tab/java)
-Integrated Auto-Instrumentation is available for Java Apps hosted on [Azure App Service](azure-web-apps-java.md) and [Azure Functions](monitor-functions.md).
+Integrated autoinstrumentation is available for Java Apps hosted on [Azure App Service](azure-web-apps-java.md) and [Azure Functions](monitor-functions.md).
-Auto-instrumentation is available for any environment using [Azure Monitor OpenTelemetry-based auto-instrumentation for Java applications](opentelemetry-enable.md?tabs=java).
+Autoinstrumentation is available for any environment by using [Azure Monitor OpenTelemetry-based autoinstrumentation for Java applications](opentelemetry-enable.md?tabs=java).
### [Node.js](#tab/nodejs)
-Auto-instrumentation is available for [Azure App Service](azure-web-apps-nodejs.md).
+Autoinstrumentation is available for [Azure App Service](azure-web-apps-nodejs.md).
-The [Application Insights SDK](nodejs.md) is an alternative and we also have a preview [Open Telemetry](opentelemetry-enable.md?tabs=nodejs) offering available.
+The [Application Insights SDK](nodejs.md) is an alternative. We also have a preview [OpenTelemetry](opentelemetry-enable.md?tabs=nodejs) offering available.
### [JavaScript](#tab/javascript)
JavaScript requires the [Application Insights SDK](javascript.md).
### [Python](#tab/python)
-Python applications can be monitored using [OpenCensus Python SDK via the Azure Monitor exporters](opencensus-python.md).
+Python applications can be monitored by using [OpenCensus Python SDK via the Azure Monitor exporters](opencensus-python.md).
An extension is available for monitoring [Azure Functions](opencensus-python.md#integrate-with-azure-functions).
-A preview [Open Telemetry](opentelemetry-enable.md?tabs=python) offering is also available.
+A preview [OpenTelemetry](opentelemetry-enable.md?tabs=python) offering is also available.
This section lists all supported platforms and frameworks.
* [Azure Spring Apps](../../spring-apps/how-to-application-insights.md) * [Azure Cloud Services](./azure-web-apps-net-core.md), including both web and worker roles
-#### Auto-instrumentation (enable without code changes)
-* [ASP.NET - for web apps hosted with IIS](./application-insights-asp-net-agent.md)
-* [ASP.NET Core - for web apps hosted with IIS](./application-insights-asp-net-agent.md)
+#### Autoinstrumentation (enable without code changes)
+* [ASP.NET: For web apps hosted with IIS](./application-insights-asp-net-agent.md)
+* [ASP.NET Core: For web apps hosted with IIS](./application-insights-asp-net-agent.md)
* [Java](./opentelemetry-enable.md?tabs=java)
-#### Manual instrumentation / SDK (some code changes required)
+#### Manual instrumentation/SDK (some code changes required)
* [ASP.NET](./asp-net.md) * [ASP.NET Core](./asp-net-core.md) * [Node.js](./nodejs.md) * [Python](./opencensus-python.md)
-* [JavaScript - web](./javascript.md)
+* [JavaScript: Web](./javascript.md)
* [React](./javascript-framework-extensions.md) * [React Native](./javascript-framework-extensions.md) * [Angular](./javascript-framework-extensions.md)
This section lists all supported platforms and frameworks.
* [Power BI for workspace-based resources](../logs/log-powerbi.md) ### Unsupported SDKs
-Several other community-supported Application Insights SDKs exist. However, Azure Monitor only provides support when you use the supported instrumentation options listed on this page. We're constantly assessing opportunities to expand our support for other languages. Follow [Azure Updates for Application Insights](https://azure.microsoft.com/updates/?query=application%20insights) for the latest SDK news.
+Several other community-supported Application Insights SDKs exist. Azure Monitor only provides support when you use the supported instrumentation options listed in this article.
+
+We're constantly assessing opportunities to expand our support for other languages. For the latest SDK news, see [Azure updates for Application Insights](https://azure.microsoft.com/updates/?query=application%20insights).
Post general questions to the Microsoft Q&A [answers forum](/answers/topics/2422
### Stack Overflow
-Post coding questions to [Stack Overflow](https://stackoverflow.com/questions/tagged/azure-application-insights) using an Application Insights tag.
+Post coding questions to [Stack Overflow](https://stackoverflow.com/questions/tagged/azure-application-insights) by using an Application Insights tag.
-### User Voice
+### Feedback Community
-Leave product feedback for the engineering team on [UserVoice](https://feedback.azure.com/d365community/forum/3887dc70-2025-ec11-b6e6-000d3a4f09d0).
+Leave product feedback for the engineering team in the [Feedback Community](https://feedback.azure.com/d365community/forum/3887dc70-2025-ec11-b6e6-000d3a4f09d0).
## Next steps - [Create a resource](create-workspace-resource.md)-- [Auto-instrumentation overview](codeless-overview.md)
+- [Autoinstrumentation overview](codeless-overview.md)
- [Overview dashboard](overview-dashboard.md) - [Availability overview](availability-overview.md) - [Application Map](app-map.md)
azure-monitor App Map https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/app-map.md
When you select **Update map components**, the map is refreshed with all compone
If all the components are roles within a single Application Insights resource, this discovery step isn't required. The initial load for such an application will have all its components.
-![Screenshot that shows an example of an application map.](media/app-map/app-map-001.png)
One of the key objectives with this experience is to be able to visualize complex topologies with hundreds of components. Select any component to see related insights and go to the performance and failure triage experience for that component.
-![Diagram that shows application map details.](media/app-map/application-map-002.png)
### Investigate failures Select **Investigate failures** to open the **Failures** pane.
-![Screenshot that shows the Investigate failures button.](media/app-map/investigate-failures.png)
-![Screenshot that shows the Failures screen.](media/app-map/failures.png)
### Investigate performance To troubleshoot performance problems, select **Investigate performance**.
-![Screenshot that shows the Investigate performance button.](media/app-map/investigate-performance.png)
-![Screenshot that shows the Performance screen.](media/app-map/performance.png)
### Go to details The **Go to details** button displays the end-to-end transaction experience, which offers views at the call stack level.
-![Screenshot that shows the Go to details button.](media/app-map/go-to-details.png)
-![Screenshot that shows the End-to-end transaction details screen.](media/app-map/end-to-end-transaction.png)
### View in Logs (Analytics) To query and investigate your applications data further, select **View in Logs (Analytics)**.
-![Screenshot that shows the View in Logs (Analytics) button.](media/app-map/view-logs.png)
-![Screenshot that shows the Logs screen with a line graph that summarizes the average response duration of a request over the past 12 hours.](media/app-map/log-analytics.png)
### Alerts To view active alerts and the underlying rules that cause the alerts to be triggered, select **Alerts**.
-![Screenshot that shows the Alerts button.](media/app-map/alerts.png)
-![Screenshot that shows a list of alerts.](media/app-map/alerts-view.png)
## Set or override cloud role name
exporter.add_telemetry_processor(callback_function)
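As an illustration of what `exporter.add_telemetry_processor(callback_function)` expects, here's a minimal Python sketch of a telemetry processor that overrides the cloud role name with the OpenCensus Azure exporter. The role name value is invented for this example, and the snippet assumes the `opencensus-ext-azure` package with `APPLICATIONINSIGHTS_CONNECTION_STRING` set in the environment.

```python
# Minimal sketch: override the cloud role name shown on the application map.
from opencensus.ext.azure.trace_exporter import AzureExporter

def callback_function(envelope):
    # 'acmefrontend' is only an illustrative role name.
    envelope.tags['ai.cloud.role'] = 'acmefrontend'
    return True  # returning True keeps the telemetry item

# Reads APPLICATIONINSIGHTS_CONNECTION_STRING from the environment by default.
exporter = AzureExporter()
exporter.add_telemetry_processor(callback_function)
```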
To help you understand the concept of *cloud role names*, look at an application map that has multiple cloud role names present.
-![Screenshot that shows an application map example.](media/app-map/cloud-rolename.png)
In the application map shown, each of the names in green boxes is a cloud role name value for different aspects of this particular distributed application. For this app, its roles consist of `Authentication`, `acmefrontend`, `Inventory Management`, and `Payment Processing Worker Role`.
Enable **Intelligent view** only for a single Application Insights resource.
To provide feedback, use the feedback option.
-![Screenshot that shows the Feedback option.](./media/app-map/14-updated.png)
## Next steps
azure-monitor Application Insights Asp Net Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/application-insights-asp-net-agent.md
For a complete list of supported auto-instrumentation scenarios, see [Supported
Application Insights Agent is located in the [PowerShell Gallery](https://www.powershellgallery.com/packages/Az.ApplicationMonitor).
-![PowerShell Gallery icon.](https://img.shields.io/powershellgallery/v/Az.ApplicationMonitor.svg?color=Blue&label=Current%20Version&logo=PowerShell&style=for-the-badge)
## Instructions - To get started with concise code samples, see the **Getting started** tab.
azure-monitor Asp Net Dependencies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/asp-net-dependencies.md
Select the **Performance** tab on the left and select the **Dependencies** tab a
Select a **Dependency Name** under **Overall**. After you select a dependency, a graph of that dependency's distribution of durations appears on the right.
-![Screenshot that shows the Dependencies tab open to select a Dependency Name in the chart.](./media/asp-net-dependencies/2-perf-dependencies.png)
Select the **Samples** button at the bottom right. Then select a sample to see the end-to-end transaction details.
-![Screenshot that shows selecting a sample to see the end-to-end transaction details.](./media/asp-net-dependencies/3-end-to-end.png)
### Profile your live site
Failed requests might also be associated with failed calls to dependencies.
Select the **Failures** tab on the left and then select the **Dependencies** tab at the top.
-![Screenshot that shows selecting the failed requests chart.](./media/asp-net-dependencies/4-fail.png)
Here you'll see the failed dependency count. To get more information about a failed occurrence, select a **Dependency Name** in the bottom table. Select the **Dependencies** button at the bottom right to see the end-to-end transaction details.
azure-monitor Asp Net Exceptions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/asp-net-exceptions.md
Open the app solution in Visual Studio. Run the app, either on your server or on
Open the **Application Insights Search** telemetry window in Visual Studio. While debugging, select the **Application Insights** dropdown box.
-![Screenshot that shows right-clicking the project and choosing Application Insights.](./media/asp-net-exceptions/34.png)
Select an exception report to show its stack trace. To open the relevant code file, select a line reference in the stack trace. If CodeLens is enabled, you'll see data about the exceptions:
-![Screenshot that shows CodeLens notification of exceptions.](./media/asp-net-exceptions/35.png)
## Diagnose failures using the Azure portal
Application Insights comes with a curated Application Performance Management exp
You'll see the failure rate trends for your requests, how many of them are failing, and how many users are affected. The **Overall** view shows some of the most useful distributions specific to the selected failing operation. You'll see the top three response codes, the top three exception types, and the top three failing dependency types.
-![Screenshot that shows a failures triage view on the Operations tab.](./media/asp-net-exceptions/failures0719.png)
To review representative samples for each of these subsets of operations, select the corresponding link. As an example, to diagnose exceptions, you can select the count of a particular exception to be presented with the **End-to-end transaction details** tab.
-![Screenshot that shows the End-to-end transaction details tab.](./media/asp-net-exceptions/end-to-end.png)
Alternatively, instead of looking at exceptions of a specific failing operation, you can start from the **Overall** view of exceptions by switching to the **Exceptions** tab at the top. Here you can see all the exceptions collected for your monitored app.
Using the <xref:Microsoft.VisualStudio.ApplicationInsights.TelemetryClient?displ
To see these events, on the left menu, open [Search](./diagnostic-search.md). Select the dropdown menu **Event types**, and then choose **Custom Event**, **Trace**, or **Exception**.
-![Screenshot that shows the Search screen.](./media/asp-net-exceptions/customevents.png)
> [!NOTE] > If your app generates a lot of telemetry, the adaptive sampling module will automatically reduce the volume that's sent to the portal by sending only a representative fraction of events. Events that are part of the same operation will be selected or deselected as a group so that you can navigate between related events. For more information, see [Sampling in Application Insights](./sampling.md).
azure-monitor Asp Net Trace Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/asp-net-trace-logs.md
Use this method if your project type isn't supported by the Application Insights
1. Select one of the following packages: - **ILogger**: [Microsoft.Extensions.Logging.ApplicationInsights](https://www.nuget.org/packages/Microsoft.Extensions.Logging.ApplicationInsights/)
-[![NuGet iLogger banner](https://img.shields.io/nuget/vpre/Microsoft.Extensions.Logging.ApplicationInsights.svg)](https://www.nuget.org/packages/Microsoft.Extensions.Logging.ApplicationInsights/)
+[:::image type="content" source="https://img.shields.io/nuget/vpre/Microsoft.Extensions.Logging.ApplicationInsights.svg" alt-text="NuGet iLogger banner":::
- **NLog**: [Microsoft.ApplicationInsights.NLogTarget](https://www.nuget.org/packages/Microsoft.ApplicationInsights.NLogTarget/)
-[![NuGet NLog banner](https://img.shields.io/nuget/vpre/Microsoft.ApplicationInsights.NLogTarget.svg)](https://www.nuget.org/packages/Microsoft.ApplicationInsights.NLogTarget/)
+[:::image type="content" source="https://img.shields.io/nuget/vpre/Microsoft.ApplicationInsights.NLogTarget.svg" alt-text="NuGet NLog banner":::
- **log4net**: [Microsoft.ApplicationInsights.Log4NetAppender](https://www.nuget.org/packages/Microsoft.ApplicationInsights.Log4NetAppender/)
-[![NuGet Log4Net banner](https://img.shields.io/nuget/vpre/Microsoft.ApplicationInsights.Log4NetAppender.svg)](https://www.nuget.org/packages/Microsoft.ApplicationInsights.Log4NetAppender/)
+[:::image type="content" source="https://img.shields.io/nuget/vpre/Microsoft.ApplicationInsights.Log4NetAppender.svg" alt-text="NuGet Log4Net banner":::
- **System.Diagnostics**: [Microsoft.ApplicationInsights.TraceListener](https://www.nuget.org/packages/Microsoft.ApplicationInsights.TraceListener/)
-[![NuGet System.Diagnostics banner](https://img.shields.io/nuget/vpre/Microsoft.ApplicationInsights.TraceListener.svg)](https://www.nuget.org/packages/Microsoft.ApplicationInsights.TraceListener/)
+[:::image type="content" source="https://img.shields.io/nuget/vpre/Microsoft.ApplicationInsights.TraceListener.svg" alt-text="NuGet System.Diagnostics banner":::
- [Microsoft.ApplicationInsights.DiagnosticSourceListener](https://www.nuget.org/packages/Microsoft.ApplicationInsights.DiagnosticSourceListener/)
-[![NuGet Diagnostic Source Listener banner](https://img.shields.io/nuget/vpre/Microsoft.ApplicationInsights.DiagnosticSourceListener.svg)](https://www.nuget.org/packages/Microsoft.ApplicationInsights.DiagnosticSourceListener/)
+[:::image type="content" source="https://img.shields.io/nuget/vpre/Microsoft.ApplicationInsights.DiagnosticSourceListener.svg" alt-text="NuGet Diagnostic Source Listener banner":::
- [Microsoft.ApplicationInsights.EtwCollector](https://www.nuget.org/packages/Microsoft.ApplicationInsights.EtwCollector/)
-[![NuGet Etw Collector banner](https://img.shields.io/nuget/vpre/Microsoft.ApplicationInsights.EtwCollector.svg)](https://www.nuget.org/packages/Microsoft.ApplicationInsights.EtwCollector/)
+[:::image type="content" source="https://img.shields.io/nuget/vpre/Microsoft.ApplicationInsights.EtwCollector.svg" alt-text="NuGet Etw Collector banner":::
- [Microsoft.ApplicationInsights.EventSourceListener](https://www.nuget.org/packages/Microsoft.ApplicationInsights.EventSourceListener/)
-[![NuGet Event Source Listener banner](https://img.shields.io/nuget/vpre/Microsoft.ApplicationInsights.EventSourceListener.svg)](https://www.nuget.org/packages/Microsoft.ApplicationInsights.EventSourceListener/)
+[:::image type="content" source="https://img.shields.io/nuget/vpre/Microsoft.ApplicationInsights.EventSourceListener.svg" alt-text="NuGet Event Source Listener banner":::
The NuGet package installs the necessary assemblies and modifies web.config or app.config if that's applicable.
azure-monitor Continuous Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/continuous-monitoring.md
With continuous monitoring, release pipelines can incorporate monitoring data fr
1. On the **Select a template** pane, search for and select **Azure App Service deployment with continuous monitoring**, and then select **Apply**.
- ![Screenshot that shows a new Azure Pipelines release pipeline.](media/continuous-monitoring/001.png)
+ :::image type="content" source="media/continuous-monitoring/001.png" lightbox="media/continuous-monitoring/001.png" alt-text="Screenshot that shows a new Azure Pipelines release pipeline.":::
1. In the **Stage 1** box, select the hyperlink to **View stage tasks.**
- ![Screenshot that shows View stage tasks.](media/continuous-monitoring/002.png)
+ :::image type="content" source="media/continuous-monitoring/002.png" lightbox="media/continuous-monitoring/002.png" alt-text="Screenshot that shows View stage tasks.":::
1. In the **Stage 1** configuration pane, fill in the following fields:
To add deployment gates:
1. On the main pipeline page, under **Stages**, select the **Pre-deployment conditions** or **Post-deployment conditions** symbol, depending on which stage needs a continuous monitoring gate.
- ![Screenshot that shows Pre-deployment conditions.](media/continuous-monitoring/004.png)
+ :::image type="content" source="media/continuous-monitoring/004.png" lightbox="media/continuous-monitoring/004.png" alt-text="Screenshot that shows Pre-deployment conditions.":::
1. In the **Pre-deployment conditions** configuration pane, set **Gates** to **Enabled**.
To add deployment gates:
1. Select **Query Azure Monitor alerts** from the dropdown menu. This option lets you access both Azure Monitor and Application Insights alerts.
- ![Screenshot that shows Query Azure Monitor alerts.](media/continuous-monitoring/005.png)
+ :::image type="content" source="media/continuous-monitoring/005.png" lightbox="media/continuous-monitoring/005.png" alt-text="Screenshot that shows Query Azure Monitor alerts.":::
1. Under **Evaluation options**, enter the values you want for settings like **The time between re-evaluation of gates** and **The timeout after which gates fail**.
You can see deployment gate behavior and other release steps in the release logs
1. To view logs, select **View logs** in the release summary, select the **Succeeded** or **Failed** hyperlink in any stage, or hover over any stage and select **Logs**.
- ![Screenshot that shows viewing release logs.](media/continuous-monitoring/006.png)
+ :::image type="content" source="media/continuous-monitoring/006.png" lightbox="media/continuous-monitoring/006.png" alt-text="Screenshot that shows viewing release logs.":::
## Next steps
azure-monitor Convert Classic Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/convert-classic-resource.md
To migrate a classic Application Insights resource to a workspace-based resource
1. From your Application Insights resource, select **Properties** under the **Configure** heading in the menu on the left.
- ![Screenshot that shows Properties under the Configure heading.](./media/convert-classic-resource/properties.png)
+ :::image type="content" source="./media/convert-classic-resource/properties.png" lightbox="./media/convert-classic-resource/properties.png" alt-text="Screenshot that shows Properties under the Configure heading.":::
1. Select **Migrate to Workspace-based**.
- ![Screenshot that shows the Migrate to Workspace-based button.](./media/convert-classic-resource/migrate.png)
+ :::image type="content" source="./media/convert-classic-resource/migrate.png" lightbox="./media/convert-classic-resource/migrate.png" alt-text="Screenshot that shows the Migrate to Workspace-based button.":::
1. Select the Log Analytics workspace where you want all future ingested Application Insights telemetry to be stored. It can either be a Log Analytics workspace in the same subscription or a different subscription that shares the same Azure Active Directory tenant. The Log Analytics workspace doesn't have to be in the same resource group as the Application Insights resource. > [!NOTE] > Migrating to a workspace-based resource can take up to 24 hours, but the process is usually faster. Rely on accessing data through your Application Insights resource while you wait for the migration process to finish. After it's finished, you'll see new data stored in the Log Analytics workspace tables.
- ![Screenshot that shows the Migration wizard UI with the option to select target workspace.](./media/convert-classic-resource/migration.png)
+ :::image type="content" source="./media/convert-classic-resource/migration.png" lightbox="./media/convert-classic-resource/migration.png" alt-text="Screenshot that shows the Migration wizard UI with the option to select target workspace.":::
After your resource is migrated, you'll see the corresponding workspace information in the **Overview** pane.
- ![Screenshot that shows the Workspace name.](./media/create-workspace-resource/workspace-name.png)
+ :::image type="content" source="./media/create-workspace-resource/workspace-name.png" lightbox="./media/create-workspace-resource/workspace-name.png" alt-text="Screenshot that shows the Workspace name.":::
Selecting the blue link text takes you to the associated Log Analytics workspace where you can take advantage of the new unified workspace query environment.
The legacy **Continuous export** functionality isn't supported for workspace-bas
1. From your Application Insights resource view, under the **Configure** heading, select **Continuous export**.
- ![Screenshot that shows the Continuous export menu item.](./media/convert-classic-resource/continuous-export.png)
+ :::image type="content" source="./media/convert-classic-resource/continuous-export.png" lightbox="./media/convert-classic-resource/continuous-export.png" alt-text="Screenshot that shows the Continuous export menu item.":::
1. Select **Disable**.
- ![Screenshot that shows the Continuous export Disable button.](./media/convert-classic-resource/disable.png)
+ :::image type="content" source="./media/convert-classic-resource/disable.png" lightbox="./media/convert-classic-resource/disable.png" alt-text="Screenshot that shows the Continuous export Disable button.":::
- After you select **Disable**, you can go back to the migration UI. If the **Edit continuous export** page prompts you that your settings aren't saved, select **OK**. This prompt doesn't pertain to disabling or enabling continuous export.
The structure of a Log Analytics workspace is described in [Log Analytics worksp
> [!NOTE] > The classic Application Insights experience includes backward compatibility for your resource queries, workbooks, and log-based alerts. To query or view against the [new workspace-based table structure or schema](#table-structure), first go to your Log Analytics workspace. During the preview, selecting **Logs** in the Application Insights pane gives you access to the classic Application Insights query experience. For more information, see [Query scope](../logs/scope.md).
-[![Diagram that shows the Azure Monitor Logs structure for Application Insights.](../logs/media/data-platform-logs/logs-structure-ai.png)](../logs/media/data-platform-logs/logs-structure-ai.png#lightbox)
+[:::image type="content" source="../logs/media/data-platform-logs/logs-structure-ai.png" lightbox="../logs/media/data-platform-logs/logs-structure-ai.png" alt-text="Diagram that shows the Azure Monitor Logs structure for Application Insights.":::
### Table structure
azure-monitor Create Workspace Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/create-workspace-resource.md
Title: Create a new Azure Monitor Application Insights workspace-based resource description: Learn about the steps required to enable the new Azure Monitor Application Insights workspace-based resources. Previously updated : 11/14/2022 Last updated : 04/12/2023 # Workspace-based Application Insights resources
-Workspace-based resources support full integration between Application Insights and Log Analytics. Now you can send your Application Insights telemetry to a common Log Analytics workspace. You'll have full access to all the features of Log Analytics, while your application, infrastructure, and platform logs remain in a single consolidated location.
+[Azure Monitor](../overview.md) [Application Insights](app-insights-overview.md#application-insights-overview) workspace-based resources integrate [Application Insights](app-insights-overview.md#application-insights-overview) and [Log Analytics](../logs/log-analytics-overview.md#overview-of-log-analytics-in-azure-monitor).
-This integration allows for common Azure role-based access control across your resources. It also eliminates the need for cross-app/workspace queries.
+With workspace-based resources, [Application Insights](app-insights-overview.md#application-insights-overview) sends telemetry to a common [Log Analytics](../logs/log-analytics-overview.md#overview-of-log-analytics-in-azure-monitor) workspace, providing full access to all the features of [Log Analytics](../logs/log-analytics-overview.md#overview-of-log-analytics-in-azure-monitor) while keeping your application, infrastructure, and platform logs in a single consolidated location. This integration allows for common [Azure role-based access control](../roles-permissions-security.md) across your resources and eliminates the need for cross-app/workspace queries.
> [!NOTE] > Data ingestion and retention for workspace-based Application Insights resources are billed through the Log Analytics workspace where the data is located. To learn more about billing for workspace-based Application Insights resources, see [Azure Monitor Logs pricing details](../logs/cost-logs.md). - ## New capabilities With workspace-based Application Insights, you can take advantage of the latest capabilities of Azure Monitor and Log Analytics. For example: * [Customer-managed key](../logs/customer-managed-keys.md) provides encryption at rest for your data with encryption keys to which only you have access. * [Azure Private Link](../logs/private-link-security.md) allows you to securely link Azure platform as a service (PaaS) services to your virtual network by using private endpoints.
-* [Bring your own storage (BYOS) for Profiler and Snapshot Debugger](./profiler-bring-your-own-storage.md) gives you full control over the encryption-at-rest policy, the lifetime management policy, and network access for all data associated with Application Insights Profiler and Snapshot Debugger.
+* [Bring your own storage (BYOS) for Profiler and Snapshot Debugger](./profiler-bring-your-own-storage.md) allows you to control the following for data associated with Application Insights [Profiler](../profiler/profiler-overview.md) and [Snapshot Debugger](../snapshot-debugger/snapshot-debugger.md):
+ * Encryption-at-rest policy
+ * Lifetime management policy
+ * Network access
* [Commitment tiers](../logs/cost-logs.md#commitment-tiers) enable you to save as much as 30% compared to the pay-as-you-go price. * Log Analytics streaming ingests data faster.
With workspace-based Application Insights, you can take advantage of the latest
Sign in to the [Azure portal](https://portal.azure.com), and create an Application Insights resource. > [!div class="mx-imgBorder"]
-> ![Screenshot that shows a workspace-based Application Insights resource.](./media/create-workspace-resource/create-workspace-based.png)
+> :::image type="content" source="./media/create-workspace-resource/create-workspace-based.png" lightbox="./media/create-workspace-resource/create-workspace-based.png" alt-text="Screenshot that shows a workspace-based Application Insights resource.":::
If you don't have an existing Log Analytics workspace, see the [Log Analytics workspace creation documentation](../logs/quick-create-workspace.md).
-*Workspace-based resources are currently available in all commercial regions and Azure Government.*
+*Workspace-based resources are currently available in all commercial regions and Azure Government. Having Application Insights and Log Analytics in two different regions can increase latency and reduce the overall reliability of the monitoring solution.*
After you create your resource, you'll see corresponding workspace information in the **Overview** pane.
-![Screenshot that shows a workspace name.](./media/create-workspace-resource/workspace-name.png)
Select the blue link text to go to the associated Log Analytics workspace where you can take advantage of the new unified workspace query environment.
Select the blue link text to go to the associated Log Analytics workspace where
## Copy the connection string
-The [connection string](./sdk-connection-string.md?tabs=net) identifies the resource that you want to associate your telemetry data with. You can also use it to modify the endpoints your resource will use as a destination for your telemetry. You must copy the connection string and add it to your application's code or to an environment variable.
+The [connection string](./sdk-connection-string.md?tabs=net) identifies the resource that you want to associate your telemetry data with. You can also use it to modify the endpoints your resource uses as a destination for your telemetry. You must copy the connection string and add it to your application's code or to an environment variable.
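As an illustration only (not part of the change recorded above), a minimal Python sketch of the environment-variable approach might look like the following, assuming the `opencensus-ext-azure` package; the environment variable name is the standard one, and the rest is placeholder wiring.

```python
# Minimal sketch: read the connection string from an environment variable
# and pass it to an Application Insights log handler.
import logging
import os

from opencensus.ext.azure.log_exporter import AzureLogHandler

# Use the connection string copied from your Application Insights resource.
connection_string = os.environ["APPLICATIONINSIGHTS_CONNECTION_STRING"]

logger = logging.getLogger(__name__)
logger.addHandler(AzureLogHandler(connection_string=connection_string))
logger.warning("Connection string configured from the environment.")
```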
## Configure monitoring
After you've created a workspace-based Application Insights resource, you config
### Code-based application monitoring
-For code-based application monitoring, you install the appropriate Application Insights SDK and point the instrumentation key or connection string to your newly created resource.
+For code-based application monitoring, you install the appropriate Application Insights SDK and point the connection string to your newly created resource.
For information on how to set up an Application Insights SDK for code-based monitoring, see the following documentation specific to the language or framework:
To access the preview Application Insights Azure CLI commands, you first need to
az extension add -n application-insights ```
-If you don't run the `az extension add` command, you'll see an error message that states `az : ERROR: az monitor: 'app-insights' is not in the 'az monitor' command group. See 'az monitor --help'`.
+If you don't run the `az extension add` command, you see an error message that states `az : ERROR: az monitor: 'app-insights' is not in the 'az monitor' command group. See 'az monitor --help'`.
Now you can run the following code to create your Application Insights resource:
New-AzApplicationInsights -Name <String> -ResourceGroupName <String> -Location <
New-AzApplicationInsights -Kind java -ResourceGroupName testgroup -Name test1027 -location eastus -WorkspaceResourceId "/subscriptions/00000000-0000-0000-0000-000000000000/resourcegroups/test1234/providers/microsoft.operationalinsights/workspaces/test1234555" ```
-For the full PowerShell documentation for this cmdlet, and to learn how to retrieve the instrumentation key, see the [Azure PowerShell documentation](/powershell/module/az.applicationinsights/new-azapplicationinsights).
+For the full PowerShell documentation for this cmdlet, and to learn how to retrieve the connection string, see the [Azure PowerShell documentation](/powershell/module/az.applicationinsights/new-azapplicationinsights).
### Azure Resource Manager templates
azure-monitor Custom Data Correlation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/custom-data-correlation.md
- Title: Azure Application Insights | Microsoft Docs
-description: Correlate data from Application Insights to other datasets, such as data enrichment or lookup tables, non-Application Insights data sources, and custom data.
- Previously updated : 08/08/2018---
-# Correlating Application Insights data with custom data sources
-
-Application Insights collects several different data types: exceptions, traces, page views, and others. While this is often sufficient to investigate your application's performance, reliability, and usage, there are cases when it is useful to correlate the data stored in Application Insights to other completely custom datasets.
-
-Some situations where you might want custom data include:
--- Data enrichment or lookup tables: for example, supplement a server name with the owner of the server and the lab location in which it can be found -- Correlation with non-Application Insights data sources: for example, correlate data about a purchase on a web-store with information from your purchase-fulfillment service to determine how accurate your shipping time estimates were -- Completely custom data: many of our customers love the query language and performance of the Azure Monitor log platform that backs Application Insights, and want to use it to query data that is not at all related to Application Insights. For example, to track the solar panel performance as part of a smart home installation as outlined [here](https://www.catapultsystems.com/blogs/using-log-analytics-and-a-special-guest-to-forecast-electricity-generation/).-
-## How to correlate custom data with Application Insights data
-
-Since Application Insights is backed by the powerful Azure Monitor log platform, we are able to use the full power of Azure Monitor to ingest the data. Then, we will write queries using the "join" operator that will correlate this custom data with the data available to us in Azure Monitor logs.
-
-## Ingesting data
-
-In this section, we will review how to get your data into Azure Monitor logs.
-
-If you don't already have one, provision a new Log Analytics workspace by following [these instructions](../vm/monitor-virtual-machine.md) through and including the "create a workspace" step.
-
-To start sending log data into Azure Monitor. Several options exist:
-- For a synchronous mechanism, you can either directly call the [data collector API](../logs/data-collector-api.md) or use our Logic App connector – simply look for "Azure Log Analytics" and pick the "Send Data" option:-
- ![Screenshot choose and action](./media/custom-data-correlation/01-logic-app-connector.png)
--- For an asynchronous option, use the Data Collector API to build a processing pipeline. See [this article](../logs/create-pipeline-datacollector-api.md) for details.-
-## Correlating data
-
-Application Insights is based on the Azure Monitor log platform. We can therefore use [cross-resource joins](../logs/cross-workspace-query.md) to correlate any data we ingested into Azure Monitor with our Application Insights data.
-
-For example, we can ingest our lab inventory and locations into a table called "LabLocations_CL" in a Log Analytics workspace called "myLA". If we then wanted to review our requests tracked in Application Insights app called "myAI" and correlate the machine names that served the requests to the locations of these machines stored in the previously mentioned custom table, we can run the following query from either the Application Insights or Azure Monitor context:
-
-```
-app('myAI').requests
-| join kind= leftouter (
- workspace('myLA').LabLocations_CL
- | project Computer_S, Owner_S, Lab_S
-) on $left.cloud_RoleInstance == $right.Computer
-```
-
-## Next Steps
--- Check out the [Data Collector API](../logs/data-collector-api.md) reference.-- For more information on [cross-resource joins](../logs/cross-workspace-query.md).
azure-monitor Data Model Complete https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/data-model-complete.md
Data collected by Application Insights models this typical application execution pattern.
-![Diagram that shows an Application Insights telemetry data model.](./media/data-model-complete/application-insights-data-model.png)
The following types of telemetry are used to monitor the execution of your app. The Application Insights SDK from the web application framework automatically collects these three types:
azure-monitor Data Retention Privacy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/data-retention-privacy.md
For more information, see the section [Data sent by Application Insights](#data-
If you're developing an app using Visual Studio, run the app in debug mode (F5). The telemetry appears in the **Output** window. From there, you can copy it and format it as JSON for easy inspection.
-![Screenshot that shows running the app in debug mode in Visual Studio.](./media/data-retention-privacy/06-vs.png)
There's also a more readable view in the **Diagnostics** window. For webpages, open your browser's debugging window. Select F12 and open the **Network** tab.
-![Screenshot that shows the open Network tab.](./media/data-retention-privacy/08-browser.png)
### Can I write code to filter the telemetry before it's sent?
azure-monitor Diagnostic Search https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/diagnostic-search.md
You can find **Search** in the Azure portal or Visual Studio.
You can open transaction search from the Application Insights **Overview** tab of your application. You can also select **Search** under **Investigate** on the left menu.
-![Screenshot that shows the Search tab.](./media/diagnostic-search/view-custom-events.png)
Go to the **Event types** dropdown menu to see a list of telemetry items such as server requests, page views, and custom events that you've coded. At the top of the **Results** list is a summary chart showing counts of events over time.
In Visual Studio, there's also an **Application Insights Search** window. It's m
Open the **Application Insights Search** window in Visual Studio:
-![Screenshot that shows Visual Studio open to Application Insights Search.](./media/diagnostic-search/32.png)
The **Application Insights Search** window has features similar to the web portal:
-![Screenshot that shows Visual Studio Application Insights Search window.](./media/diagnostic-search/34.png)
The **Track Operation** tab is available when you open a request or a page view. An "operation" is a sequence of events that's associated with a single request or page view. For example, dependency calls, exceptions, trace logs, and custom events might be part of a single operation. The **Track Operation** tab shows graphically the timing and duration of these events in relation to the request or page view.
The **Track Operation** tab is available when you open a request or a page view.
Select any telemetry item to see key fields and related items.
-![Screenshot that shows an individual dependency request.](./media/diagnostic-search/telemetry-item.png)
The end-to-end transaction details view opens.
The event types are:
## Filter on property values
-You can filter events on the values of their properties. The available properties depend on the event types you selected. Select **Filter** ![Filter icon](./media/diagnostic-search/filter-icon.png) to start.
+You can filter events on the values of their properties. The available properties depend on the event types you selected. Select **Filter** :::image type="content" source="./media/diagnostic-search/filter-icon.png" lightbox="./media/diagnostic-search/filter-icon.png" alt-text="Filter icon"::: to start.
Choosing no values of a particular property has the same effect as choosing all values. It switches off filtering on that property.
Notice that the counts to the right of the filter values show how many occurrenc
To find all the items with the same property value, either enter it in the **Search** box or select the checkbox when you look through properties on the **Filter** tab.
-![Screenshot that shows selecting the checkbox of a property on the Filter tab.](./media/diagnostic-search/filter-property.png)
## Search the data
You can search for terms in any of the property values. This capability is usefu
You might want to set a time range because searches over a shorter range are faster.
-![Screenshot that shows opening a diagnostic search.](./media/diagnostic-search/search-property.png)
Search for complete words, not substrings. Use quotation marks to enclose special characters.
You can create a bug in GitHub or Azure DevOps with the details from any telemet
Go to the end-to-end transaction detail view by selecting any telemetry item. Then select **Create work item**.
-![Screenshot that shows Create work item.](./media/diagnostic-search/work-item.png)
The first time you do this step, you're asked to configure a link to your Azure DevOps organization and project. You can also configure the link on the **Work Items** tab.
azure-monitor Distributed Tracing Telemetry Correlation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/distributed-tracing-telemetry-correlation.md
By looking at the [Trace-Context header format](https://www.w3.org/TR/trace-cont
If you look at the request entry that was sent to Azure Monitor, you can see fields populated with the trace header information. You can find the data under **Logs (Analytics)** in the Azure Monitor Application Insights resource.
-![Screenshot that shows Request telemetry in Logs (Analytics).](./media/opencensus-python/0011-correlation.png)
The `id` field is in the format `<trace-id>.<span-id>`, where `trace-id` is taken from the trace header that was passed in the request and `span-id` is a generated 8-byte array for this span.
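To make the format concrete, here's a small Python sketch that splits a hypothetical `id` value into its two parts; the hex values are invented for illustration.

```python
# Minimal sketch with an invented id value of the form <trace-id>.<span-id>.
sample_id = "4bf92f3577b34da6a3ce929d0e0e4736.00f067aa0ba902b7"

trace_id, span_id = sample_id.split(".")
print("trace-id:", trace_id)  # 32 hex chars (16 bytes), taken from the trace header
print("span-id: ", span_id)   # 16 hex chars (8 bytes), generated for this span
```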
azure-monitor Eventcounters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/eventcounters.md
changed as shown in the example below.
To view EventCounter metrics in [Metric Explorer](../essentials/metrics-charts.md), select your Application Insights resource, and choose Log-based metrics as the metric namespace. EventCounter metrics then appear under the Custom category. > [!div class="mx-imgBorder"]
-> ![Event counters reported in Application Insights Metric Explorer](./media/event-counters/metrics-explorer-counter-list.png)
+> :::image type="content" source="./media/event-counters/metrics-explorer-counter-list.png" lightbox="./media/event-counters/metrics-explorer-counter-list.png" alt-text="Event counters reported in Application Insights Metric Explorer":::
## Event counters in Analytics
customMetrics | summarize avg(value) by name
``` > [!div class="mx-imgBorder"]
-> ![Event counters reported in Application Insights Analytics](./media/event-counters/analytics-event-counters.png)
+> :::image type="content" source="./media/event-counters/analytics-event-counters.png" lightbox="./media/event-counters/analytics-event-counters.png" alt-text="Event counters reported in Application Insights Analytics":::
To get a chart of a specific counter (for example: `ThreadPool Completed Work Item Count`) over the recent period, run the following query.
customMetrics
| render timechart ``` > [!div class="mx-imgBorder"]
-> ![Chat of a single counter in Application Insights](./media/event-counters/analytics-completeditems-counters.png)
+> :::image type="content" source="./media/event-counters/analytics-completeditems-counters.png" lightbox="./media/event-counters/analytics-completeditems-counters.png" alt-text="Chart of a single counter in Application Insights":::
Like other telemetry, **customMetrics** also has a column `cloud_RoleInstance` that indicates the identity of the host server instance on which your app is running. The above query shows the counter value per instance, and can be used to compare performance of different server instances.
azure-monitor Get Metric https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/get-metric.md
This single telemetry item represents an aggregate of 41 distinct metric measure
If we examine our Application Insights resource in the **Logs (Analytics)** experience, the individual telemetry item would look like the following screenshot.
-![Screenshot that shows the Log Analytics query view.](./media/get-metric/log-analytics.png)
> [!NOTE] > While the raw telemetry item didn't contain an explicit sum property/field once ingested, we create one for you. In this case, both the `value` and `valueSum` property represent the same thing. You can also access your custom metric telemetry in the [_Metrics_](../essentials/metrics-charts.md) section of the portal as both a [log-based and custom metric](pre-aggregated-metrics-log-metrics.md). The following screenshot is an example of a log-based metric.
-![Screenshot that shows the Metrics explorer view.](./media/get-metric/metrics-explorer.png)
### Cache metric reference for high-throughput usage
The examples in the previous section show zero-dimensional metrics. Metrics can
Running the sample code for at least 60 seconds results in three distinct telemetry items being sent to Azure. Each item represents the aggregation of one of the three form factors. As before, you can further examine in the **Logs (Analytics)** view.
-![Screenshot that shows the Log Analytics view of multidimensional metric.](./media/get-metric/log-analytics-multi-dimensional.png)
In the metrics explorer:
-![Screenshot that shows Custom metrics.](./media/get-metric/custom-metrics.png)
Notice that you can't split the metric by your new custom dimension or view your custom dimension with the metrics view.
-![Screenshot that shows splitting support.](./media/get-metric/splitting-support.png)
By default, multidimensional metrics within the metric explorer aren't turned on in Application Insights resources.
After you've made that change and sent new multidimensional telemetry, you can s
> [!NOTE] > Only newly sent metrics after the feature was turned on in the portal will have dimensions stored.
-![Screenshot that shows applying splitting.](./media/get-metric/apply-splitting.png)
View your metric aggregations for each `FormFactor` dimension.
-![Screenshot that shows form factors.](./media/get-metric/formfactor.png)
### Use MetricIdentifier when there are more than three dimensions
azure-monitor Ip Addresses https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/ip-addresses.md
This is the list of addresses from which [availability web tests](./availability
If you're using Azure network security groups, add an *inbound port rule* to allow traffic from Application Insights availability tests. Select **Service Tag** as the **Source** and **ApplicationInsightsAvailability** as the **Source service tag**. >[!div class="mx-imgBorder"]
->![Screenshot that shows selecting Inbound security rules and then selecting Add.](./media/ip-addresses/add-inbound-security-rule.png)
+>:::image type="content" source="./media/ip-addresses/add-inbound-security-rule.png" lightbox="./media/ip-addresses/add-inbound-security-rule.png" alt-text="Screenshot that shows selecting Inbound security rules and then selecting Add.":::
>[!div class="mx-imgBorder"]
->![Screenshot that shows the Add inbound security rule tab.](./media/ip-addresses/add-inbound-security-rule2.png)
+>:::image type="content" source="./media/ip-addresses/add-inbound-security-rule2.png" lightbox="./media/ip-addresses/add-inbound-security-rule2.png" alt-text="Screenshot that shows the Add inbound security rule tab.":::
Open port 80 (HTTP) and port 443 (HTTPS) for incoming traffic from these addresses. IP addresses are grouped by location.
azure-monitor Ip Collection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/ip-collection.md
If you need to modify the behavior for only a single Application Insights resour
1. Select **Deploy**.
- ![Screenshot that shows the Deploy button.](media/ip-collection/deploy.png)
+ :::image type="content" source="media/ip-collection/deploy.png" lightbox="media/ip-collection/deploy.png" alt-text="Screenshot that shows the Deploy button.":::
1. Select **Edit template**.
- ![Screenshot that shows the Edit button, along with a warning about the resource group.](media/ip-collection/edit-template.png)
+ :::image type="content" source="media/ip-collection/edit-template.png" lightbox="media/ip-collection/edit-template.png" alt-text="Screenshot that shows the Edit button, along with a warning about the resource group.":::
> [!NOTE] > If you experience the error shown in the preceding screenshot, you can resolve it. It states: "The resource group is in a location that is not supported by one or more resources in the template. Please choose a different resource group." Temporarily select a different resource group from the dropdown list and then re-select your original resource group. 1. In the JSON template, locate `properties` inside `resources`. Add a comma to the last JSON field, and then add the following new line: `"DisableIpMasking": true`. Then select **Save**.
- ![Screenshot that shows the addition of a comma and a new line after the property for request source.](media/ip-collection/save.png)
+ :::image type="content" source="media/ip-collection/save.png" lightbox="media/ip-collection/save.png" alt-text="Screenshot that shows the addition of a comma and a new line after the property for request source.":::
1. Select **Review + create** > **Create**.
azure-monitor Java Jmx Metrics Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-jmx-metrics-configuration.md
Title: How to configure JMX metrics - Azure Monitor application insights for Java
-description: Configure additional JMX metrics collection for Azure Monitor Application Insights Java agent
+description: Configure extra JMX metrics collection for Azure Monitor Application Insights Java agent
Previously updated : 03/16/2021 Last updated : 05/13/2023 ms.devlang: java
# Configuring JMX metrics
-Application Insights Java 3.x collects some of the JMX metrics by default, but in many cases this is not enough. This document describes the JMX configuration option in details.
+Application Insights Java 3.x collects some of the JMX metrics by default, but in many cases this isn't enough. This document describes the JMX configuration options in detail.
-## How do I collect additional JMX metrics?
+## How do I collect extra JMX metrics?
JMX metrics collection can be configured by adding a ```"jmxMetrics"``` section to the applicationinsights.json file. You can specify the name of the metric the way you want it to appear in the Azure portal for your Application Insights resource. The object name and attribute are required for each metric you want collected.
To view the available metrics, set the self-diagnostics level to `DEBUG` in your
} ```
-The available JMX metrics, with the object names and attribute names will appear in the application insights log file.
+Available JMX metrics, with object names and attribute names, appear in your Application Insights log file.
-The output in the log file will look similar to the example below. In some cases the list can be quite extensive.
-> [!div class="mx-imgBorder"]
-> ![Screenshot of available JMX metrics in the log file.](media/java-ipa/jmx/available-mbeans.png)
+Log file output looks similar to these examples. In some cases, it can be extensive.
+
+> :::image type="content" source="media/java-ipa/jmx/available-mbeans.png" lightbox="media/java-ipa/jmx/available-mbeans.png" alt-text="Screenshot of available JMX metrics in the log file.":::
## Configuration example
-Knowing what metrics are available, you can configure the agent to collect those. The first one is an example of a nested metric - `LastGcInfo` that has several properties, and we want to capture the `GcThreadCount`.
+Knowing what metrics are available, you can configure the agent to collect them. The first example shows a nested metric, `LastGcInfo`, which has several properties; we want to capture `GcThreadCount`.
```json "jmxMetrics": [
Knowing what metrics are available, you can configure the agent to collect those
## Types of collected metrics and available configuration options?
-We support numeric and boolean JMX metrics, while other types aren't supported and will be ignored.
+We support numeric and boolean JMX metrics. Other types aren't supported and are ignored.
Currently, wildcards and aggregated attributes aren't supported, so every 'object name'/'attribute' pair must be configured separately. ## Where do I find the JMX Metrics in application insights?
-As your application is running and the JMX metrics are collected, you can view them by going to Azure portal and navigate to your application insights resource. Under Metrics tab, select the dropdown as shown below to view the metrics.
+You can view the JMX metrics collected while your application is running by going to your Application Insights resource in the Azure portal. On the Metrics tab, select the dropdown as shown to view the metrics.
-> [!div class="mx-imgBorder"]
-> ![Screenshot of metrics in portal](media/java-ipa/jmx/jmx-portal.png)
+> :::image type="content" source="media/java-ipa/jmx/jmx-portal.png" lightbox="media/java-ipa/jmx/jmx-portal.png" alt-text="Screenshot of metrics in portal":::
azure-monitor Java Standalone Telemetry Processors Examples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-standalone-telemetry-processors-examples.md
Title: Telemetry processor examples - Azure Monitor Application Insights for Java description: Explore examples that show telemetry processors in Azure Monitor Application Insights for Java. Previously updated : 12/29/2020 Last updated : 05/13/2023 ms.devlang: java
azure-monitor Java Standalone Telemetry Processors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-standalone-telemetry-processors.md
Title: Telemetry processors (preview) - Azure Monitor Application Insights for Java description: Learn to configure telemetry processors in Azure Monitor Application Insights for Java. Previously updated : 10/29/2020 Last updated : 05/13/2023 ms.devlang: java
azure-monitor Javascript Framework Extensions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/javascript-framework-extensions.md
It measures time from the `ComponentDidMount` event through the `ComponentWillUn
To see this metric in the Azure portal, go to the Application Insights resource and select the **Metrics** tab. Configure the empty charts to display the custom metric name `React Component Engaged Time (seconds)`. Select the aggregation (for example, sum or avg) of your metric and split by `Component Name`.
-![Screenshot that shows a chart that displays the custom metric "React Component Engaged Time (seconds)" split by "Component Name"](./media/javascript-react-plugin/chart.png)
You can also run custom queries to divide Application Insights data to generate reports and visualizations as per your requirements. In the Azure portal, go to the Application Insights resource, select **Analytics** from the **Overview** tab, and run your query.
-![Screenshot that shows custom metric query results.](./media/javascript-react-plugin/query.png)
> [!NOTE] > It can take up to 10 minutes for new custom metrics to appear in the Azure portal.
azure-monitor Live Stream https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/live-stream.md
With Live Metrics, you can:
* Monitor any Windows performance counter live. * Easily identify a server that's having issues and filter all the KPI/live feed to just that server.
-![Screenshot that shows the Live Metrics tab.](./media/live-stream/live-metric.png)
Live Metrics is currently supported for ASP.NET, ASP.NET Core, Azure Functions, Java, and Node.js apps.
These capabilities are available with ASP.NET, ASP.NET Core, and Azure Functions
You can monitor custom KPI live by applying arbitrary filters on any Application Insights telemetry from the portal. Select the filter control that shows when you mouse-over any of the charts. The following chart plots a custom **Request** count KPI with filters on **URL** and **Duration** attributes. Validate your filters with the stream preview section that shows a live feed of telemetry that matches the criteria you've specified at any point in time.
-![Screenshot that shows the Filter request rate.](./media/live-stream/filter-request.png)
You can monitor a value different from **Count**. The options depend on the type of stream, which could be any Application Insights telemetry like requests, dependencies, exceptions, traces, events, or metrics. It can also be your own [custom measurement](./api-custom-events-metrics.md#properties).
-![Screenshot that shows the Query Builder on Request Rate with a custom metric.](./media/live-stream/query-builder-request.png)
Along with Application Insights telemetry, you can also monitor any Windows performance counter. Select it from the stream options and provide the name of the performance counter.
Live Metrics are aggregated at two points: locally on each server and then acros
## Sample telemetry: Custom live diagnostic events By default, the live feed of events shows samples of failed requests and dependency calls, exceptions, events, and traces. Select the filter icon to see the applied criteria at any point in time.
-![Screenshot that shows the Filter button.](./media/live-stream/filter.png)
As with metrics, you can specify any arbitrary criteria to any of the Application Insights telemetry types. In this example, we're selecting specific request failures and events.
-![Screenshot that shows the Query Builder.](./media/live-stream/query-builder.png)
> [!NOTE] > Currently, for exception message-based criteria, use the outermost exception message. In the preceding example, to filter out the benign exception with an inner exception message (follows the "<--" delimiter) "The client disconnected," use a message not-contains "Error reading request content" criteria. To see the details of an item in the live feed, select it. You can pause the feed either by selecting **Pause** or by scrolling down and selecting an item. Live feed resumes after you scroll back to the top, or when you select the counter of items collected while it was paused.
-![Screenshot that shows the Sample telemetry window with an exception selected and the exception details displayed at the bottom of the window.](./media/live-stream/sample-telemetry.png)
## Filter by server instance If you want to monitor a particular server role instance, you can filter by server. To filter, select the server name under **Servers**.
-![Screenshot that shows the Sampled live failures.](./media/live-stream/filter-by-server.png)
## Secure the control channel
It's possible to try custom filters without having to set up an authenticated ch
1. Select the **API Access** tab and then select **Create API key**.
- ![Screenshot that shows selecting the API Access tab and the Create API key button.](./media/live-stream/api-key.png)
+ :::image type="content" source="./media/live-stream/api-key.png" lightbox="./media/live-stream/api-key.png" alt-text="Screenshot that shows selecting the API Access tab and the Create API key button.":::
1. Select the **Authenticate SDK control channel** checkbox and then select **Generate key**.
- ![Screenshot that shows the Create API key pane. Select Authenticate SDK control channel checkbox and then select Generate key.](./media/live-stream/create-api-key.png)
+ :::image type="content" source="./media/live-stream/create-api-key.png" lightbox="./media/live-stream/create-api-key.png" alt-text="Screenshot that shows the Create API key pane. Select Authenticate SDK control channel checkbox and then select Generate key.":::
### Add an API key to configuration
azure-monitor Monitor Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/monitor-functions.md
For more advanced use cases, you can modify telemetry by adding spans, updating
1. **Option 1**: On the function app **Overview** pane, go to **Application Insights**. Under **Collection Level**, select **Recommended**. > [!div class="mx-imgBorder"]
- > ![Screenshot that shows the how to enable the AppInsights Java Agent.](./media//functions/collection-level.jpg)
+ > :::image type="content" source="./media//functions/collection-level.jpg" lightbox="./media//functions/collection-level.jpg" alt-text="Screenshot that shows the how to enable the AppInsights Java Agent.":::
2. **Option 2**: On the function app **Overview** pane, go to **Configuration**. Under **Application settings**, select **New application setting**. > [!div class="mx-imgBorder"]
- > ![Screenshot that shows the New application setting option.](./media//functions/create-new-setting.png)
+ > :::image type="content" source="./media//functions/create-new-setting.png" lightbox="./media//functions/create-new-setting.png" alt-text="Screenshot that shows the New application setting option.":::
Add an application setting with the following values and select **Save**.
azure-monitor Opencensus Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opencensus-python.md
import logging
from opencensus.ext.azure.log_exporter import AzureEventHandler logger = logging.getLogger(__name__)
-logger.addHandler(AzureLogHandler())
+logger.addHandler(AzureEventHandler())
# Alternatively manually pass in the connection_string
-# logger.addHandler(AzureLogHandler(connection_string=<appinsights-connection-string>))
+# logger.addHandler(AzureEventHandler(connection_string=<appinsights-connection-string>))
logger.setLevel(logging.INFO) logger.info('Hello, World!')
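For reference, a runnable version of the corrected sample might look like the following sketch, assuming the `opencensus-ext-azure` package and a connection string supplied through the `APPLICATIONINSIGHTS_CONNECTION_STRING` environment variable.

```python
# Minimal sketch of the corrected sample: custom events are sent with
# AzureEventHandler (not AzureLogHandler).
import logging

from opencensus.ext.azure.log_exporter import AzureEventHandler

logger = logging.getLogger(__name__)
# Reads APPLICATIONINSIGHTS_CONNECTION_STRING from the environment by default;
# alternatively, pass connection_string="..." explicitly.
logger.addHandler(AzureEventHandler())
logger.setLevel(logging.INFO)
logger.info('Hello, World!')  # recorded as a customEvent in Application Insights
```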
azure-monitor Overview Dashboard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/overview-dashboard.md
Application Insights has always provided a summary overview pane to allow quick,
The new **Overview** dashboard now launches by default.
-![Screenshot that shows the Overview preview pane.](./media/overview-dashboard/overview.png)
## Better performance Time range selection has been simplified to a simple one-click interface.
-![Screenshot that shows the time range.](./media/overview-dashboard/app-insights-overview-dashboard-03.png)
Overall performance has been greatly increased. You have one-click access to popular features like **Search** and **Analytics**. Each default dynamically updating KPI tile provides insight into corresponding Application Insights features. To learn more about failed requests, under **Investigate**, select **Failures**.
-![Screenshot that shows failures.](./media/overview-dashboard/app-insights-overview-dashboard-04.png)
## Application dashboard
The application dashboard uses the existing dashboard technology within Azure to
To access the default dashboard, select **Application Dashboard** in the upper-left corner.
-![Screenshot that shows the Application Dashboard button.](./media/overview-dashboard/app-insights-overview-dashboard-05.png)
If this is your first time accessing the dashboard, it opens a default view.
-![Screenshot that shows the Dashboard view.](./media/overview-dashboard/0001-dashboard.png)
You can keep the default view if you like it. Or you can also add and delete from the dashboard to best fit the needs of your team.
You can keep the default view if you like it. Or you can also add and delete fro
To go back to the overview experience, select the **Overview** button.
-![Screenshot that shows the Overview button.](./media/overview-dashboard/app-insights-overview-dashboard-07.png)
## Troubleshooting
azure-monitor Performance Counters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/performance-counters.md
net localgroup "Performance Monitor Users" /add "IIS APPPOOL\NameOfYourPool"
The **Metrics** pane shows the default set of performance counters.
-![Screenshot that shows performance counters reported in Application Insights.](./media/performance-counters/performance-counters.png)
Current default counters for ASP.NET web applications:
You can search and display performance counter reports in [Log Analytics](../log
The **performanceCounters** schema exposes the `category`, `counter` name, and `instance` name of each performance counter. In the telemetry for each application, you'll see only the counters for that application. For example, to see what counters are available:
-![Screenshot that shows performance counters in Application Insights analytics.](./media/performance-counters/analytics-performance-counters.png)
Here, `Instance` refers to the performance counter instance, not the role or server machine instance. The performance counter instance name typically segments counters, such as processor time, by the name of the process or application. To get a chart of available memory over the recent period:
-![Screenshot that shows a memory time chart in Application Insights analytics.](./media/performance-counters/analytics-available-memory.png)
Like other telemetry, **performanceCounters** also has a column `cloud_RoleInstance` that indicates the identity of the host server instance on which your app is running. For example, to compare the performance of your app on the different machines:
-![Screenshot that shows performance segmented by role instance in Application Insights analytics.](./media/performance-counters/analytics-metrics-role-instance.png)
## ASP.NET and Application Insights counts
azure-monitor Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/powershell.md
To automate the creation of any other resource of any kind, create an example ma
1. Open [Azure Resource Manager](https://resources.azure.com/). Navigate down through `subscriptions/resourceGroups/<your resource group>/providers/Microsoft.Insights/components` to your application resource.
- ![Screenshot that shows navigation in Azure Resource Explorer.](./media/powershell/01.png)
+ :::image type="content" source="./media/powershell/01.png" lightbox="./media/powershell/01.png" alt-text="Screenshot that shows navigation in Azure Resource Explorer.":::
*Components* are the basic Application Insights resources for displaying applications. There are separate resources for the associated alert rules and availability web tests. 1. Copy the JSON of the component into the appropriate place in `template1.json`.
azure-monitor Pre Aggregated Metrics Log Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/pre-aggregated-metrics-log-metrics.md
There are several [ways of sending custom metrics from the Application Insights
All metrics that you send by using [trackMetric](./api-custom-events-metrics.md#trackmetric) or [GetMetric and TrackValue](./api-custom-events-metrics.md#getmetric) API calls are automatically stored in both logs and metrics stores. Although the log-based version of your custom metric always retains all dimensions, the pre-aggregated version of the metric is stored by default with no dimensions. You can turn on collection of dimensions of custom metrics on the [usage and estimated cost](../usage-estimated-costs.md#usage-and-estimated-costs) tab by selecting the **Enable alerting on custom metric dimensions** checkbox.
-![Screenshot that shows usage and estimated costs.](./media/pre-aggregated-metrics-log-metrics/001-cost.png)
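As a rough illustration of emitting such a custom metric from code, here is a minimal sketch that uses the OpenCensus Python exporter from the earlier entry rather than the `trackMetric`/`GetMetric` calls this article references. The metric name, view, and connection string are placeholders, not values from the article.

```python
from opencensus.ext.azure import metrics_exporter
from opencensus.stats import aggregation as aggregation_module
from opencensus.stats import measure as measure_module
from opencensus.stats import stats as stats_module
from opencensus.stats import view as view_module
from opencensus.tags import tag_map as tag_map_module

stats = stats_module.stats

# A hypothetical counter-style metric; the name, description, and unit are illustrative.
item_measure = measure_module.MeasureInt("items_processed", "number of items processed", "items")
item_view = view_module.View(
    "items_processed_view",
    "number of items processed",
    [],  # no dimension columns; pre-aggregated custom metrics ship without dimensions by default
    item_measure,
    aggregation_module.CountAggregation(),
)
stats.view_manager.register_view(item_view)

# Export to Application Insights; replace the placeholder connection string.
# The exporter flushes on an interval (15 seconds by default).
exporter = metrics_exporter.new_metrics_exporter(
    connection_string="InstrumentationKey=00000000-0000-0000-0000-000000000000"
)
stats.view_manager.register_exporter(exporter)

# Record one data point for the metric.
mmap = stats.stats_recorder.new_measurement_map()
mmap.measure_int_put(item_measure, 1)
mmap.record(tag_map_module.TagMap())
```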
## Quotas
The collection of custom metrics dimensions is turned off by default because in
Use [Azure Monitor metrics explorer](../essentials/metrics-getting-started.md) to plot charts from pre-aggregated and log-based metrics and to author dashboards with charts. After you select the Application Insights resource you want, use the namespace picker to switch between standard (preview) and log-based metrics. You can also select a custom metric namespace.
-![Screenshot that shows Metric namespace.](./media/pre-aggregated-metrics-log-metrics/002-metric-namespace.png)
## Pricing models for Application Insights metrics
azure-monitor Remove Application Insights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/remove-application-insights.md
# How to remove Application Insights in Visual Studio
-This article will show you how to remove the ASP.NET and ASP.NET Core Application Insights SDK in Visual Studio.
+This article shows you how to remove the ASP.NET and ASP.NET Core Application Insights SDK in Visual Studio.
-To remove Application Insights, you'll need to remove the NuGet packages and references from the API in your application. You can uninstall NuGet packages by using the Package Management Console or Manage NuGet Solution in Visual Studio. The following sections will show two ways to remove NuGet Packages and what was automatically added in your project. Be sure to confirm the files added and areas with in your own code in which you made calls to the API are removed.
+To remove Application Insights, you need to remove the NuGet packages and references from the API in your application. You can uninstall NuGet packages by using the Package Management Console or Manage NuGet Solution in Visual Studio. The following sections show two ways to remove NuGet packages and what was automatically added in your project. Be sure to confirm that the added files and any places within your own code where you made calls to the API are removed.
## Uninstall using the Package Management Console
To remove Application Insights, you'll need to remove the NuGet packages and ref
1. To open the Package Management Console, in the top menu select Tools > NuGet Package Manager > Package Manager Console.
- ![In the top menu click Tools > NuGet Package Manager > Package Manager Console](./media/remove-application-insights/package-manager.png)
+ :::image type="content" source="./media/remove-application-insights/package-manager.png" lightbox="./media/remove-application-insights/package-manager.png" alt-text="In the top menu click Tools > NuGet Package Manager > Package Manager Console":::
> [!NOTE] > If trace collection is enabled you need to first uninstall Microsoft.ApplicationInsights.TraceListener. Enter `Uninstall-package Microsoft.ApplicationInsights.TraceListener` then follow the step below to remove Microsoft.ApplicationInsights.Web.
To remove Application Insights, you'll need to remove the NuGet packages and ref
After entering the command, the Application Insights package and all of its dependencies will be uninstalled from the project.
- ![Enter command in console](./media/remove-application-insights/package-management-console.png)
+ :::image type="content" source="./media/remove-application-insights/package-management-console.png" lightbox="./media/remove-application-insights/package-management-console.png" alt-text="Enter command in console":::
# [.NET Core](#tab/netcore) 1. To open the Package Management Console, in the top menu select Tools > NuGet Package Manager > Package Manager Console.
- ![In the top menu click Tools > NuGet Package Manager > Package Manager Console](./media/remove-application-insights/package-manager.png)
+ :::image type="content" source="./media/remove-application-insights/package-manager.png" lightbox="./media/remove-application-insights/package-manager.png" alt-text="In the top menu click Tools > NuGet Package Manager > Package Manager Console":::
1. Enter the following command: ` Uninstall-Package Microsoft.ApplicationInsights.AspNetCore -RemoveDependencies`
To remove Application Insights, you'll need to remove the NuGet packages and ref
You'll then see a screen that allows you to edit all the NuGet packages that are part of the project.
- ![Right click Solution, in the Solution Explorer, then select Manage NuGet Packages for Solution](./media/remove-application-insights/manage-nuget-framework.png)
+ :::image type="content" source="./media/remove-application-insights/manage-nuget-framework.png" lightbox="./media/remove-application-insights/manage-nuget-framework.png" alt-text="Right click Solution, in the Solution Explorer, then select Manage NuGet Packages for Solution":::
> [!NOTE] > If trace collection is enabled you need to first uninstall Microsoft.ApplicationInsights.TraceListener without remove dependencies selected and then follow the steps below to uninstall Microsoft.ApplicationInsights.Web with remove dependencies selected.
To remove Application Insights, you'll need to remove the NuGet packages and ref
1. Select **Uninstall**.
- ![Screenshot shows the Microsoft.ApplicationInsights.Web window with Remove dependencies checked and uninstall highlighted.](./media/remove-application-insights/uninstall-framework.png)
+ :::image type="content" source="./media/remove-application-insights/uninstall-framework.png" lightbox="./media/remove-application-insights/uninstall-framework.png" alt-text="Screenshot shows the Microsoft.ApplicationInsights.Web window with Remove dependencies checked and uninstall highlighted.":::
- A dialog box will display that shows all of the dependencies to be removed from the application. Select **ok** to uninstall.
+   A dialog box appears that shows all of the dependencies to be removed from the application. Select **OK** to uninstall.
- ![Screenshot shows a dialog box with the dependencies to be removed.](./media/remove-application-insights/preview-uninstall-framework.png)
+ :::image type="content" source="./media/remove-application-insights/preview-uninstall-framework.png" lightbox="./media/remove-application-insights/preview-uninstall-framework.png" alt-text="Screenshot shows a dialog box with the dependencies to be removed.":::
1. After everything is uninstalled, you may still see "ApplicationInsights.config" and "AiHandleErrorAttribute.cs" in the *Solution Explorer*. You can delete the two files manually.
To remove Application Insights, you'll need to remove the NuGet packages and ref
You'll then see a screen that allows you to edit all the NuGet packages that are part of the project.
- ![Right click Solution, in the Solution Explorer, then select Manage NuGet Packages for Solution](./media/remove-application-insights/manage-nuget-core.png)
+ :::image type="content" source="./media/remove-application-insights/manage-nuget-core.png" lightbox="./media/remove-application-insights/manage-nuget-core.png" alt-text="Right click Solution, in the Solution Explorer, then select Manage NuGet Packages for Solution":::
1. Click on "Microsoft.ApplicationInsights.AspNetCore" package. On the right, check the checkbox next to *Project* to select all projects then select **Uninstall**.
- ![Check remove dependencies, then uninstall](./media/remove-application-insights/uninstall-core.png)
+ :::image type="content" source="./media/remove-application-insights/uninstall-core.png" lightbox="./media/remove-application-insights/uninstall-core.png" alt-text="Check remove dependencies, then uninstall":::
## What is created when you add Application Insights
-When you add Application Insights to your project, it creates files and adds code to some of your files. Solely uninstalling the NuGet Packages will not always discard the files and code. To fully remove Application Insights, you should check and manually delete the added code or files along with any API calls you added in your project.
+When you add Application Insights to your project, it creates files and adds code to some of your files. Solely uninstalling the NuGet Packages won't always discard the files and code. To fully remove Application Insights, you should check and manually delete the added code or files along with any API calls you added in your project.
# [.NET](#tab/net)
azure-monitor Resources Roles Access Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/resources-roles-access-control.md
Title: Resources, roles, and access control in Application Insights | Microsoft Docs description: Owners, contributors and readers of your organization's insights. Previously updated : 02/14/2019 Last updated : 04/13/2023 -+ # Resources, roles, and access control in Application Insights
First, let's define some terms:
To see your resources, open the [Azure portal][portal], sign in, and select **All resources**. To find a resource, enter part of its name in the filter field.
- ![Screenshot that shows a list of Azure resources.](./media/resources-roles-access-control/10-browse.png)
+ :::image type="content" source="./media/resources-roles-access-control/10-browse.png" lightbox="./media/resources-roles-access-control/10-browse.png" alt-text="Screenshot that shows a list of Azure resources.":::
<a name="resource-group"></a>
azure-monitor Sampling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/sampling.md
Use this type of sampling if your app often goes over its monthly quota and you
Set the sampling rate in the Usage and estimated costs page:
-![From the application's Overview pane, click Settings, Quota, Samples, then select a sampling rate, and click Update.](./media/sampling/data-sampling.png)
Like other types of sampling, the algorithm retains related telemetry items. For example, when you're inspecting the telemetry in Search, you'll be able to find the request related to a particular exception. Metric counts such as request rate and exception rate are correctly retained.
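Fixed-rate sampling can also be applied in the SDK rather than at ingestion. The following is a minimal sketch with the OpenCensus Python tracer; the 25 percent rate and the connection string are illustrative assumptions, not settings taken from this article.

```python
from opencensus.ext.azure.trace_exporter import AzureExporter
from opencensus.trace.samplers import ProbabilitySampler
from opencensus.trace.tracer import Tracer

# Keep roughly 25% of traces; telemetry belonging to a sampled-in trace is kept together.
tracer = Tracer(
    exporter=AzureExporter(
        connection_string="InstrumentationKey=00000000-0000-0000-0000-000000000000"
    ),
    sampler=ProbabilitySampler(rate=0.25),
)

with tracer.span(name="example-operation"):
    pass  # work inside this span is exported only if the trace is sampled in
```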
azure-monitor Transaction Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/transaction-diagnostics.md
Components are independently deployable parts of your distributed or microservic
This view has four key parts: a results list, a cross-component transaction chart, a time-sequence list of all telemetry related to this operation, and the details pane for any selected telemetry item on the left.
-![Screenshot that shows the four key parts of the view.](media/transaction-diagnostics/4partsCrossComponent.png)
## Cross-component transaction chart
This chart provides a timeline with horizontal bars during requests and dependen
This section shows a flat list view in a time sequence of all the telemetry related to this transaction. It also shows the custom events and traces that aren't displayed in the transaction chart. You can filter this list to telemetry generated by a specific component or call. You can select any telemetry item in this list to see corresponding [details on the right](#details-of-the-selected-telemetry).
-![Screenshot that shows the time sequence of all telemetry.](media/transaction-diagnostics/allTelemetryDrawerOpened.png)
## Details of the selected telemetry This collapsible pane shows the detail of any selected item from the transaction chart or the list. **Show all** lists all the standard attributes that are collected. Any custom attributes are listed separately under the standard set. Select the ellipsis button (...) under the **Call Stack** trace window to get an option to copy the trace. **Open profiler traces** and **Open debug snapshot** show code-level diagnostics in corresponding detail panes.
-![Screenshot that shows exception details.](media/transaction-diagnostics/exceptiondetail.png)
## Search results This collapsible pane shows the other results that meet the filter criteria. Select any result to update the respective details of the preceding three sections. We try to find samples that are most likely to have the details available from all components, even if sampling is in effect in any of them. These samples are shown as suggestions.
-![Screenshot that shows search results.](media/transaction-diagnostics/searchResults.png)
## Profiler and Snapshot Debugger
If you can't get Profiler working, contact serviceprofilerhelp\@microsoft.com.
If you can't get Snapshot Debugger working, contact snapshothelp\@microsoft.com.
-![Screenshot that shows Profiler integration.](media/transaction-diagnostics/profilerTraces.png)
## Frequently asked questions
azure-monitor Tutorial App Dashboards https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/tutorial-app-dashboards.md
A single dashboard can contain resources from multiple applications, resource gr
1. In the menu dropdown on the left in the Azure portal, select **Dashboard**.
- ![Screenshot that shows the Azure portal menu dropdown.](media/tutorial-app-dashboards/dashboard-from-menu.png)
+ :::image type="content" source="media/tutorial-app-dashboards/dashboard-from-menu.png" lightbox="media/tutorial-app-dashboards/dashboard-from-menu.png" alt-text="Screenshot that shows the Azure portal menu dropdown.":::
1. On the **Dashboard** pane, select **New dashboard** > **Blank dashboard**.
- ![Screenshot that shows the Dashboard pane.](media/tutorial-app-dashboards/new-dashboard.png)
+ :::image type="content" source="media/tutorial-app-dashboards/new-dashboard.png" lightbox="media/tutorial-app-dashboards/new-dashboard.png" alt-text="Screenshot that shows the Dashboard pane.":::
1. Enter a name for the dashboard. 1. Look at the **Tile Gallery** for various tiles that you can add to your dashboard. You can also pin charts and other views directly from Application Insights to the dashboard. 1. Locate the **Markdown** tile and drag it on to your dashboard. With this tile, you can add text formatted in Markdown, which is ideal for adding descriptive text to your dashboard. To learn more, see [Use a Markdown tile on Azure dashboards to show custom content](../../azure-portal/azure-portal-markdown-tile.md). 1. Add text to the tile's properties and resize it on the dashboard canvas.
- [![Screenshot that shows the Edit Markdown tile.](media/tutorial-app-dashboards/markdown.png)](media/tutorial-app-dashboards/markdown.png#lightbox)
+ :::image type="content" source="media/tutorial-app-dashboards/markdown.png" lightbox="media/tutorial-app-dashboards/markdown.png" alt-text="Screenshot that shows the Edit Markdown tile.":::
1. Select **Done customizing** at the top of the screen to exit tile customization mode.
A dashboard with static text isn't very interesting, so add a tile from Applicat
Start by adding the standard health overview for your application. This tile requires no configuration and allows minimal customization in the dashboard. 1. Select your **Application Insights** resource on the home screen.
-1. On the **Overview** pane, select the pin icon ![pin icon](media/tutorial-app-dashboards/pushpin.png) to add the tile to a dashboard.
+1. On the **Overview** pane, select the pin icon :::image type="content" source="media/tutorial-app-dashboards/pushpin.png" lightbox="media/tutorial-app-dashboards/pushpin.png" alt-text="pin icon"::: to add the tile to a dashboard.
1. On the **Pin to dashboard** tab, select which dashboard to add the tile to or create a new one. 1. At the top right, a notification appears that your tile was pinned to your dashboard. Select **Pinned to dashboard** in the notification to return to your dashboard or use the **Dashboard** pane. 1. Select **Edit** to change the positioning of the tile you added to your dashboard. Select and drag it into position and then select **Done customizing**. Your dashboard now has a tile with some useful information.
- [![Screenshot that shows the dashboard in edit mode.](media/tutorial-app-dashboards/dashboard-edit-mode.png)](media/tutorial-app-dashboards/dashboard-edit-mode.png#lightbox)
+ :::image type="content" source="media/tutorial-app-dashboards/dashboard-edit-mode.png" lightbox="media/tutorial-app-dashboards/dashboard-edit-mode.png" alt-text="Screenshot that shows the dashboard in edit mode.":::
## Add custom metric chart
You can use the **Metrics** panel to graph a metric collected by Application Ins
1. Select **Metrics**. 1. An empty chart appears, and you're prompted to add a metric. Add a metric to the chart and optionally add a filter and a grouping. The following example shows the number of server requests grouped by success. This chart gives a running view of successful and unsuccessful requests.
- [![Screenshot that shows adding a metric.](media/tutorial-app-dashboards/metrics.png)](media/tutorial-app-dashboards/metrics.png#lightbox)
+ :::image type="content" source="media/tutorial-app-dashboards/metrics.png" lightbox="media/tutorial-app-dashboards/metrics.png" alt-text="Screenshot that shows adding a metric.":::
1. Select **Pin to dashboard** on the right.
Application Insights Logs provides a rich query language that you can use to ana
``` 1. Select **Run** to validate the results of the query.
-1. Select the pin icon ![Pin icon](media/tutorial-app-dashboards/pushpin.png) and then select the name of your dashboard.
+1. Select the pin icon :::image type="content" source="media/tutorial-app-dashboards/pushpin.png" lightbox="media/tutorial-app-dashboards/pushpin.png" alt-text="Pin icon"::: and then select the name of your dashboard.
1. Before you go back to the dashboard, add another query, but render it as a chart. Now you'll see the different ways to visualize a logs query in a dashboard. Start with the following query that summarizes the top 10 operations with the most exceptions:
Application Insights Logs provides a rich query language that you can use to ana
1. Select **Chart** and then select **Doughnut** to visualize the output.
- [![Screenshot that shows the doughnut chart with the preceding query.](media/tutorial-app-dashboards/logs-doughnut.png)](media/tutorial-app-dashboards/logs-doughnut.png#lightbox)
+ :::image type="content" source="media/tutorial-app-dashboards/logs-doughnut.png" lightbox="media/tutorial-app-dashboards/logs-doughnut.png" alt-text="Screenshot that shows the doughnut chart with the preceding query.":::
-1. Select the pin icon ![Pin icon](media/tutorial-app-dashboards/pushpin.png) at the top right to pin the chart to your dashboard. Then return to your dashboard.
+1. Select the pin icon :::image type="content" source="media/tutorial-app-dashboards/pushpin.png" lightbox="media/tutorial-app-dashboards/pushpin.png" alt-text="Pin icon"::: at the top right to pin the chart to your dashboard. Then return to your dashboard.
1. The results of the queries are added to your dashboard in the format that you selected. Select and drag each result into position. Then select **Done customizing**.
-1. Select the pencil icon ![Pencil icon](media/tutorial-app-dashboards/pencil.png) on each title and use it to make the titles descriptive.
+1. Select the pencil icon :::image type="content" source="media/tutorial-app-dashboards/pencil.png" lightbox="media/tutorial-app-dashboards/pencil.png" alt-text="Pencil icon"::: on each title and use it to make the titles descriptive.
## Share dashboard
azure-monitor Tutorial Performance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/tutorial-performance.md
- Title: Diagnose performance issues using Application Insights | Microsoft Docs
-description: Tutorial to find and diagnose performance issues in your application by using Application Insights.
- Previously updated : 11/15/2022----
-# Find and diagnose performance issues with Application Insights
-
-Application Insights collects telemetry from your application to help analyze its operation and performance. You can use this information to identify problems that might be occurring or to identify improvements to the application that would most affect users. This tutorial takes you through the process of analyzing the performance of both the server components of your application and the perspective of the client.
-
-You learn how to:
-
-> [!div class="checklist"]
-> * Identify the performance of server-side operations.
-> * Analyze server operations to determine the root cause of slow performance.
-> * Identify the slowest client-side operations.
-> * Analyze details of page views by using query language.
-
-## Prerequisites
-
-To complete this tutorial:
--- Install [Visual Studio 2019](https://www.visualstudio.com/downloads/) with the following workloads:
- - ASP.NET and web development
- - Azure development
-- Deploy a .NET application to Azure and [enable the Application Insights SDK](../app/asp-net.md).-- [Enable the Application Insights profiler](../app/profiler.md) for your application.-
-## Sign in to Azure
-
-Sign in to the [Azure portal](https://portal.azure.com).
-
-## Identify slow server operations
-
-Application Insights collects performance details for the different operations in your application. By identifying the operations with the longest duration, you can diagnose potential problems or target your ongoing development to improve the overall performance of the application.
-
-1. Select **Application Insights** and then select your subscription.
-1. To open the **Performance** panel, either select **Performance** under the **Investigate** menu or select the **Server response time** graph.
-
- ![Screenshot that shows the Performance view.](media/tutorial-performance/1-overview.png)
-
-1. The **Performance** screen shows the count and average duration of each operation for the application. You can use this information to identify those operations that affect users the most. In this example, the **GET Customers/Details** and **GET Home/Index** are likely candidates to investigate because of their relatively high duration and number of calls. Other operations might have a higher duration but were rarely called, so the effect of their improvement would be minimal.
-
- ![Screenshot that shows the Performance server panel.](media/tutorial-performance/2-server-operations.png)
-
-1. The graph currently shows the average duration of the selected operations over time. You can switch to the 95th percentile to find the performance issues. Add the operations you're interested in by pinning them to the graph. The graph shows that there are some peaks worth investigating. To isolate them further, reduce the time window of the graph.
-
- ![Screenshot that shows Pin operations.](media/tutorial-performance/3-server-operations-95th.png)
-
-1. The performance panel on the right shows distribution of durations for different requests for the selected operation. Reduce the window to start around the 95th percentile. The **Top 3 Dependencies** insights card can tell you at a glance that the external dependencies are likely contributing to the slow transactions. Select the button with the number of samples to see a list of the samples. Then select any sample to see transaction details.
-
-1. You can see at a glance that the call to the Fabrikamaccount Azure Table contributes most to the total duration of the transaction. You can also see that an exception caused it to fail. Select any item in the list to see its details on the right side. [Learn more about the transaction diagnostics experience](../app/transaction-diagnostics.md)
-
- ![Screenshot that shows Operation end-to-end transaction details.](media/tutorial-performance/4-end-to-end.png)
-
-1. The [Profiler](../app/profiler-overview.md) helps get further with code-level diagnostics by showing the actual code that ran for the operation and the time required for each step. Some operations might not have a trace because the Profiler runs periodically. Over time, more operations should have traces. To start the Profiler for the operation, select **Profiler traces**.
-1. The trace shows the individual events for each operation so that you can diagnose the root cause for the duration of the overall operation. Select one of the top examples that has the longest duration.
-1. Select **Hot path** to highlight the specific path of events that contribute the most to the total duration of the operation. In this example, you can see that the slowest call is from the `FabrikamFiberAzureStorage.GetStorageTableData` method. The part that takes the most time is the `CloudTable.CreateIfNotExist` method. If this line of code is executed every time the function gets called, unnecessary network call and CPU resources will be consumed. The best way to fix your code is to put this line in some startup method that executes only once.
-
- ![Screenshot that shows Profiler details.](media/tutorial-performance/5-hot-path.png)
-
-1. The **Performance Tip** at the top of the screen supports the assessment that the excessive duration is because of waiting. Select the **waiting** link for documentation on interpreting the different types of events.
-
- ![Screenshot that shows a Performance Tip.](media/tutorial-performance/6-perf-tip.png)
-
-1. For further analysis, select **Download Trace** to download the trace. You can view this data by using [PerfView](https://github.com/Microsoft/perfview#perfview-overview).
-
-## Use logs data for server
-
- Logs provides a rich query language that you can use to analyze all data collected by Application Insights. You can use this feature to perform deep analysis on request and performance data.
-
-1. Return to the operation detail panel and select ![Logs icon](media/tutorial-performance/app-viewinlogs-icon.png)**View in Logs (Analytics)**.
-
-1. The **Logs** screen opens with a query for each of the views in the panel. You can run these queries as they are or modify them for your requirements. The first query shows the duration for this operation over time.
-
- ![Screenshot that shows a logs query.](media/tutorial-performance/7-request-time-logs.png)
-
-## Identify slow client operations
-
-In addition to identifying server processes to optimize, Application Insights can analyze the perspective of client browsers. This information can help you identify potential improvements to client components and even identify issues with different browsers or different locations.
-
-1. Select **Browser** under **Investigate** and then select **Browser Performance**. Alternatively, select **Performance** under **Investigate** and switch to the **Browser** tab by selecting the **Server/Browser** toggle button in the upper-right corner to open the browser performance summary. This view provides a visual summary of various telemetries of your application from the perspective of the browser.
-
- ![Screenshot that shows the Browser summary.](media/tutorial-performance/8-browser.png)
-
-1. Select one of the operation names, select the **Samples** button at the bottom right, and then select an operation. End-to-end transaction details open on the right side where you can view the **Page View Properties**. You can view details of the client requesting the page including the type of browser and its location. This information can assist you in determining whether there are performance issues related to particular types of clients.
-
- ![Screenshot that shows Page View Properties.](media/tutorial-performance/9-page-view-properties.png)
-
-## Use logs data for client
-
-Like the data collected for server performance, Application Insights makes all client data available for deep analysis by using logs.
-
-1. Return to the browser summary and select ![Logs icon](media/tutorial-performance/app-viewinlogs-icon.png) **View in Logs (Analytics)**.
-
-1. The **Logs** screen opens with a query for each of the views in the panel. The first query shows the duration for different page views over time.
-
- ![Screenshot that shows the Logs screen.](media/tutorial-performance/10-page-view-logs.png)
-
-## Next steps
-
-Now that you've learned how to identify runtime exceptions, proceed to the next tutorial to learn how to create alerts in response to failures.
-
-> [!div class="nextstepaction"]
-> [Standard test](availability-standard-tests.md)
azure-monitor Tutorial Runtime Exceptions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/tutorial-runtime-exceptions.md
- Title: Diagnose runtime exceptions by using Application Insights | Microsoft Docs
-description: Tutorial to find and diagnose runtime exceptions in your application by using Application Insights.
- Previously updated : 09/19/2017----
-# Find and diagnose runtime exceptions with Application Insights
-
-Application Insights collects telemetry from your application to help identify and diagnose runtime exceptions. This tutorial takes you through this process with your application. You learn how to:
-
-> [!div class="checklist"]
-> * Modify your project to enable exception tracking.
-> * Identify exceptions for different components of your application.
-> * View details of an exception.
-> * Download a snapshot of the exception to Visual Studio for debugging.
-> * Analyze details of failed requests by using query language.
-> * Create a new work item to correct the faulty code.
-
-## Prerequisites
-
-To complete this tutorial:
--- Install [Visual Studio 2019](https://www.visualstudio.com/downloads/) with the following workloads:
- - ASP.NET and web development
- - Azure development
-- Download and install the [Visual Studio Snapshot Debugger](https://aka.ms/snapshotdebugger).-- Enable the [Visual Studio Snapshot Debugger](../app/snapshot-debugger.md).-- Deploy a .NET application to Azure and [enable the Application Insights SDK](../app/asp-net.md).-- Modify your code in your development or test environment to generate an exception because the tutorial tracks the identification of an exception in your application.-
-## Sign in to Azure
-Sign in to the [Azure portal](https://portal.azure.com).
-
-## Analyze failures
-Application Insights collects any failures in your application. It lets you view their frequency across different operations to help you focus your efforts on those issues with the highest impact. You can then drill down on details of these failures to identify the root cause.
-
-1. Select **Application Insights** and then select your subscription.
-1. To open the **Failures** pane, either select **Failures** under the **Investigate** menu or select the **Failed requests** graph.
-
- ![Screenshot that shows failed requests.](media/tutorial-runtime-exceptions/failed-requests.png)
-
-1. The **Failed requests** pane shows the count of failed requests and the number of users affected for each operation for the application. By sorting this information by user, you can identify those failures that most affect users. In this example, **GET Employees/Create** and **GET Customers/Details** are likely candidates to investigate because of their large number of failures and affected users. Selecting an operation shows more information about this operation in the right pane.
-
- ![Screenshot that shows the Failed requests pane.](media/tutorial-runtime-exceptions/failed-requests-blade.png)
-
-1. Reduce the time window to zoom in on the period where the failure rate shows a spike.
-
- ![Screenshot that shows the Failed requests window.](media/tutorial-runtime-exceptions/failed-requests-window.png)
-
-1. See the related samples by selecting the button with the number of filtered results. The **Suggested** samples have related telemetry from all components, even if sampling might have been in effect in any of them. Select a search result to see the details of the failure.
-
- ![Screenshot that shows the Failed request samples.](media/tutorial-runtime-exceptions/failed-requests-search.png)
-
-1. The details of the failed request show the Gantt chart that shows that there were two dependency failures in this transaction, which also contributed to more than 50% of the total duration of the transaction. This experience presents all telemetry across components of a distributed application that are related to this operation ID. To learn more about the new experience, see [Unified cross-component transaction diagnostics](../app/transaction-diagnostics.md). You can select any of the items to see their details on the right side.
-
- ![Screenshot that shows Failed request details.](media/tutorial-runtime-exceptions/failed-request-details.png)
-
-1. The operations detail also shows a format exception, which appears to have caused the failure. You can see that it's because of an invalid Zip Code. You can open the debug snapshot to see code-level debug information in Visual Studio.
-
- ![Screenshot that shows exception details.](media/tutorial-runtime-exceptions/failed-requests-exception.png)
-
-## Identify failing code
-The Snapshot Debugger collects snapshots of the most frequent exceptions in your application to assist you in diagnosing its root cause in production. You can view debug snapshots in the portal to see the call stack and inspect variables at each call stack frame. Afterward, you can debug the source code by downloading the snapshot and opening it in Visual Studio 2019 Enterprise.
-
-1. In the properties of the exception, select **Open debug snapshot**.
-1. The **Debug Snapshot** pane opens with the call stack for the request. Select any method to view the values of all local variables at the time of the request. Starting from the top method in this example, you can see local variables that have no value.
-
- ![Screenshot that shows the Debug Snapshot pane.](media/tutorial-runtime-exceptions/debug-snapshot-01.png)
-
-1. The first call that has valid values is **ValidZipCode**. You can see that a Zip Code was provided with letters that can't be translated into an integer. This issue appears to be the error in the code that must be corrected.
-
- ![Screenshot that shows an error in the code that must be corrected.](media/tutorial-runtime-exceptions/debug-snapshot-02.png)
-
-1. You can then download this snapshot into Visual Studio where you can locate the actual code that must be corrected. To do so, select **Download Snapshot**.
-1. The snapshot is loaded into Visual Studio.
-1. You can now run a debug session in Visual Studio Enterprise that quickly identifies the line of code that caused the exception.
-
- ![Screenshot that shows an exception in the code.](media/tutorial-runtime-exceptions/exception-code.png)
-
-## Use analytics data
-All data collected by Application Insights is stored in Azure Log Analytics, which provides a rich query language that you can use to analyze the data in various ways. You can use this data to analyze the requests that generated the exception you're researching.
-
-1. Select the CodeLens information above the code to view telemetry provided by Application Insights.
-
- ![Screenshot that shows code in CodeLens.](media/tutorial-runtime-exceptions/codelens.png)
-
-1. Select **Analyze impact** to open Application Insights Analytics. It's populated with several queries that provide details on failed requests, such as affected users, browsers, and regions.<br><br>
-
- ![Screenshot that shows Application Insights window that includes several queries.](media/tutorial-runtime-exceptions/analytics.png)<br>
-
-## Add a work item
-If you connect Application Insights to a tracking system, such as Azure DevOps or GitHub, you can create a work item directly from Application Insights.
-
-1. Return to the **Exception Properties** pane in Application Insights.
-1. Select **New Work Item**.
-1. The **New Work Item** pane opens with details about the exception already populated. You can add more information before you save it.
-
- ![Screenshot that shows the New Work Item pane.](media/tutorial-runtime-exceptions/new-work-item.png)
-
-## Next steps
-Now that you've learned how to identify runtime exceptions, advance to the next tutorial to learn how to identify and diagnose performance issues.
-
-> [!div class="nextstepaction"]
-> [Identify performance issues](./tutorial-performance.md)
azure-monitor Tutorial Users https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/tutorial-users.md
- Title: Understand your customers in Application Insights | Microsoft Docs
-description: Tutorial on how to use Application Insights to understand how customers are using your application.
- Previously updated : 07/30/2021----
-# Use Application Insights to understand how customers use your application
-
- Application Insights collects usage information to help you understand how your users interact with your application. This tutorial walks you through the different resources that are available to analyze this information.
-
-You'll learn how to:
-
-> [!div class="checklist"]
-> * Analyze details about users who access your application.
-> * Use session information to analyze how customers use your application.
-> * Define funnels that let you compare your desired user activity to their actual activity.
-> * Create a workbook to consolidate visualizations and queries into a single document.
-> * Group similar users to analyze them together.
-> * Learn which users are returning to your application.
-> * Inspect how users move through your application.
-
-## Prerequisites
-
-To complete this tutorial:
--- Install [Visual Studio 2019](https://www.visualstudio.com/downloads/) with the following workloads:
- - ASP.NET and web development.
- - Azure development.
-- Download and install the [Visual Studio Snapshot Debugger](https://aka.ms/snapshotdebugger).-- Deploy a .NET application to Azure and [enable the Application Insights SDK](../app/asp-net.md).-- [Send telemetry from your application](../app/usage-overview.md#send-telemetry-from-your-app) for adding custom events/page views.-- Send [user context](./usage-overview.md) to track what a user does over time and fully utilize the usage features.-
-## Sign in to Azure
-Sign in to the [Azure portal](https://portal.azure.com).
-
-## Get information about your users
-The **Users** pane helps you to understand important details about your users in various ways. You can use this pane to understand information like where your users are connecting from, details of their client, and what areas of your application they're accessing.
-
-1. In your Application Insights resource, under **Usage**, select **Users**.
-1. The default view shows the number of unique users that have connected to your application over the past 24 hours. You can change the time window and set various other criteria to filter this information.
-
-1. Select the **During** dropdown list and change the time window to **7 days**. This setting increases the data included in the different charts in the pane.
-
-1. Select the **Split by** dropdown list to add a breakdown by a user property to the graph. Select **Country or region**. The graph includes the same data, but you can use it to view a breakdown of the number of users for each country/region.
-
- :::image type="content" source="./media/tutorial-users/user-1.png" alt-text="Screenshot that shows the User tab's query builder." lightbox="./media/tutorial-users/user-1.png":::
-
-1. Position the cursor over different bars in the chart and note that the count for each country/region reflects only the time window represented by that bar.
-1. Select **View More Insights** for more information.
-
- :::image type="content" source="./media/tutorial-users/user-2.png" alt-text="Screenshot that shows the User tab of view more insights." lightbox="./media/tutorial-users/user-2.png":::
-
-## Analyze user sessions
-The **Sessions** pane is similar to the **Users** pane. **Users** helps you understand details about the users who access your application. **Sessions** helps you understand how those users used your application.
-
-1. Under **Usage**, select **Sessions**.
-1. Look at the graph and note that you have the same options to filter and break down the data as in the **Users** pane.
-
- :::image type="content" source="./media/tutorial-users/sessions.png" alt-text="Screenshot that shows the Sessions tab with a bar chart displayed." lightbox="./media/tutorial-users/sessions.png":::
-
-1. To view the sessions timeline, select **View More Insights**. Under **Active Sessions**, select **View session timeline** on one of the timelines. The **Session Timeline** pane shows every action in the sessions. This information can help you identify examples like sessions with a large number of exceptions.
-
- :::image type="content" source="./media/tutorial-users/timeline.png" alt-text="Screenshot that shows the Sessions tab with a timeline selected." lightbox="./media/tutorial-users/timeline.png":::
-
-## Group together similar users
-A cohort is a set of users grouped by similar characteristics. You can use cohorts to filter data in other panes so that you can analyze particular groups of users. For example, you might want to analyze only users who completed a purchase.
-
-1. On the **Users**, **Sessions**, or **Events** tab, select **Create a Cohort**.
-
-1. Select a template from the gallery.
-
- :::image type="content" source="./media/tutorial-users/cohort.png" alt-text="Screenshot that shows the template gallery for cohorts." lightbox="./media/tutorial-users/cohort.png":::
-1. Edit your cohort and select **Save**.
-1. To see your cohort, select it from the **Show** dropdown list.
-
- :::image type="content" source="./media/tutorial-users/cohort-2.png" alt-text="Screenshot that shows the Show dropdown, showing a cohort." lightbox="./media/tutorial-users/cohort-2.png":::
-
-## Compare desired activity to reality
-The previous panes are focused on what users of your application did. The **Funnels** pane focuses on what you want users to do. A funnel represents a set of steps in your application and the percentage of users who move between steps.
-
-For example, you could create a funnel that measures the percentage of users who connect to your application and search for a product. You can then see the percentage of users who add that product to a shopping cart. You can also see the percentage of customers who complete a purchase.
-
-1. Select **Funnels** > **Edit**.
-
-1. Create a funnel with at least two steps by selecting an action for each step. The list of actions is built from usage data collected by Application Insights.
-
- :::image type="content" source="./media/tutorial-users/funnel.png" alt-text="Screenshot that shows the Funnel tab and selecting steps on the edit tab." lightbox="./media/tutorial-users/funnel.png":::
-
-1. Select the **View** tab to see the results. The window to the right shows the most common events before the first activity and after the last activity to help you understand user tendencies around the particular sequence.
-
- :::image type="content" source="./media/tutorial-users/funnel-2.png" alt-text="Screenshot that shows the funnel tab on view." lightbox="./media/tutorial-users/funnel-2.png":::
-
-1. To save the funnel, select **Save**.
-
-## Learn which customers return
-
-Retention helps you understand which users are coming back to your application.
-
-1. Select **Retention** > **Retention Analysis Workbook**.
-1. By default, the analyzed information includes users who performed an action and then returned to perform another action. For example, you can change this filter to include only those users who returned after they completed a purchase.
-
- :::image type="content" source="./media/tutorial-users/retention.png" alt-text="Screenshot that shows a graph for users that match the criteria set for a retention filter." lightbox="./media/tutorial-users/retention.png":::
-
-1. The returning users that match the criteria are shown in graphical and table form for different time durations. The typical pattern is for a gradual drop in returning users over time. A sudden drop from one time period to the next might raise a concern.
-
- :::image type="content" source="./media/tutorial-users/retention-2.png" alt-text="Screenshot that shows the retention workbook with the User returned after # of weeks chart." lightbox="./media/tutorial-users/retention-2.png":::
-
-## Analyze user movements
-A user flow visualizes how users move between the pages and features of your application. The flow helps you answer questions like where users typically move from a particular page, how they usually exit your application, and if there are any actions that are regularly repeated.
-
-1. Select **User flows** on the menu.
-1. Select **New** to create a new user flow. Select **Edit** to edit its details.
-1. Increase **Time Range** to **7 days** and then select an initial event. The flow will track user sessions that start with that event.
-
- :::image type="content" source="./media/tutorial-users/flowsedit.png" alt-text="Screenshot that shows how to create a new user flow." lightbox="./media/tutorial-users/flowsedit.png":::
-
-1. The user flow is displayed, and you can see the different user paths and their session counts. Blue lines indicate an action that the user performed after the current action. A red line indicates the end of the user session.
-
- :::image type="content" source="./media/tutorial-users/flows.png" alt-text="Screenshot that shows the display of user paths and session counts for a user flow." lightbox="./media/tutorial-users/flows.png":::
-
-1. To remove an event from the flow, select the **X** in the upper-right corner of the action. Then select **Create Graph**. The graph is redrawn with any instances of that event removed. Select **Edit** to see that the event is now added to **Excluded events**.
-
- :::image type="content" source="./media/tutorial-users/flowsexclude.png" alt-text="Screenshot that shows the list of excluded events for a user flow." lightbox="./media/tutorial-users/flowsexclude.png":::
-
-## Consolidate usage data
-Workbooks combine data visualizations, Log Analytics queries, and text into interactive documents. You can use workbooks to:
-- Group together common usage information.-- Consolidate information from a particular incident.-- Report back to your team on your application's usage.-
-1. Select **Workbooks** on the menu.
-1. Select **New** to create a new workbook.
-1. A query that's provided includes all usage data in the last day displayed as a bar chart. You can use this query, manually edit it, or select **Samples** to select from other useful queries.
-
- :::image type="content" source="./media/tutorial-users/sample-queries.png" alt-text="Screenshot that shows the sample button and list of sample queries that you can use." lightbox="./media/tutorial-users/sample-queries.png":::
-
-1. Select **Done editing**.
-1. Select **Edit** in the top pane to edit the text at the top of the workbook. Formatting is done by using Markdown.
-
-1. Select **Add users** to add a graph with user information. Edit the details of the graph if you want. Then select **Done editing** to save it.
-
-To learn more about workbooks, see the [workbooks overview](../visualize/workbooks-overview.md).
-
-## Next steps
-You've learned how to analyze your users. In the next tutorial, you'll learn how to create custom dashboards that combine this information with other useful data about your application.
-
-> [!div class="nextstepaction"]
-> [Create custom dashboards](./tutorial-app-dashboards.md)
azure-monitor Usage Flows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/usage-flows.md
Title: Application Insights User Flows analyzes navigation flows description: Analyze how users move between the pages and features of your web app. Previously updated : 07/30/2021 Last updated : 05/13/2023 # Analyze user navigation patterns with User Flows in Application Insights
-![Screenshot that shows the Application Insights User Flows tool.](./media/usage-flows/flows.png)
The User Flows tool visualizes how users move between the pages and features of your site. It's great for answering questions like:
The User Flows tool starts from an initial page view, custom event, or exception
## Choose an initial event
-![Screenshot that shows choosing an initial event for User Flows.](./media/usage-flows/initial-event.png)
To begin answering questions with the User Flows tool, choose an initial page view, custom event, or exception to serve as the starting point for the visualization:
If you want to see more steps in the visualization, use the **Previous steps** a
## After users visit a page or feature, where do they go and what do they select?
-![Screenshot that shows using User Flows to understand where users select.](./media/usage-flows/one-step.png)
If your initial event is a page view, the first column (**Step 1**) of the visualization is a quick way to understand what users did immediately after they visited the page.
azure-monitor Usage Heart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/usage-heart.md
Title: HEART analytics workbook
-description: Product teams use the HEART Workbook to measure success across five user-centric dimensions to deliver better software.
+description: Product teams can use the HEART workbook to measure success across five user-centric dimensions to deliver better software.
Previously updated : 11/11/2021- Last updated : 05/13/2023+
-# Analyzing product usage with HEART
-This article describes how to enable and use the Heart Workbook on Azure Monitor. The HEART workbook is based on the HEART measurement framework, originally introduced by Google. Several Microsoft internal teams use HEART to deliver better software.
+# Analyze product usage with HEART
+This article describes how to enable and use the HEART workbook in Azure Monitor. The HEART workbook is based on the HEART measurement framework, which was originally introduced by Google. Several Microsoft internal teams use HEART to deliver better software.
-
## Overview
-HEART is an acronym that stands for Happiness, Engagement, Adoption, Retention, and Task Success. It helps product teams deliver better software by focusing on the following five dimensions of customer experience:
+HEART is an acronym that stands for happiness, engagement, adoption, retention, and task success. It helps product teams deliver better software by focusing on five dimensions of customer experience:
-
--- **Happiness**: Measure of user attitude -- **Engagement**: Level of active user involvement
+- **Happiness**: Measure of user attitude
+- **Engagement**: Level of active user involvement
- **Adoption**: Target audience penetration-- **Retention**: Rate at which users return -- **Task Success**: Productivity empowerment -
-These dimensions are measured independently, but they interact with each other as shown below:
-
+- **Retention**: Rate at which users return
+- **Task success**: Productivity empowerment
+These dimensions are measured independently, but they interact with each other.
- Adoption, engagement, and retention form a user activity funnel. Only a portion of users who adopt the tool come back to use it. - Task success is the driver that progresses users down the funnel and moves them from adoption to retention.-- Happiness is an outcome of the other dimensions and not a stand-alone measurement. Users who have progressed down the funnel and are showing a higher level of activity should ideally be happier. -
+- Happiness is an outcome of the other dimensions and not a stand-alone measurement. Users who have progressed down the funnel and are showing a higher level of activity are ideally happier.
## Get started ### Prerequisites+
+ - **Azure subscription**: [Create an Azure subscription for free](https://azure.microsoft.com/free/).
+ - **Application Insights resource**: [Create an Application Insights resource](create-workspace-resource.md#create-a-workspace-based-resource).
+ - **Specific attributes**: Instrument the following attributes to calculate HEART metrics.
| Source | Attribute | Description | |--|-|--|
These dimensions are measured independently, but they interact with each other a
| customEvents | itemType | Category of customEvents record | | customEvents | timestamp | Datetime of event | | customEvents | operation_Id | Correlate telemetry events |
- | customEvents | user_Id | Unique user identifier |
+ | customEvents | user_Id | Unique user identifier |
| customEvents* | parentId | Name of feature |
| customEvents* | pageName | Name of page |
| customEvents* | actionType | Category of Click Analytics record |
These dimensions are measured independently, but they interact with each other a
| pageViews | operation_Id | Correlate telemetry events |
| pageViews | user_Id | Unique user identifier |
-*Use the [Click Analytics Auto collection plugin](javascript-feature-extensions.md) via npm to emit these attributes.
+*Use the [Click Analytics Autocollection plug-in](javascript-feature-extensions.md) via npm to emit these attributes.
>[!TIP]
-> To understand how to effectively use the Click Analytics plugin, please refer to [this section](javascript-feature-extensions.md#use-the-plug-in).
-
-### Open the workbook
-The workbook can be found in the gallery under 'public templates'. The workbook will be shown in the section titled **"Product Analytics using the Click Analytics Plugin"** as shown in the following image:
--
-Users will notice that there are seven workbooks as shown in the following image:
+> To understand how to effectively use the Click Analytics plug-in, see [Feature extensions for the Application Insights JavaScript SDK (Click Analytics)](javascript-feature-extensions.md#use-the-plug-in).
+### Open the workbook
+You can find the workbook in the gallery under **Public Templates**. The workbook appears in the section **Product Analytics using the Click Analytics Plugin**.
-The workbook is designed in a way that users only have to interact with the main workbook, 'HEART Analytics - All Sections'. This workbook contains the rest of the six workbooks as tabs. If needed, users can access the individual workbooks related to reach tab through the gallery as well.
+There are seven workbooks.
-### Confirm data is flowing
+You only have to interact with the main workbook, **HEART Analytics - All Sections**. This workbook contains the other six workbooks as tabs. You can also access the individual workbooks related to each tab through the gallery.
-See the "Development Requirements" tab as shown below to validate that data is flowing as expected to light up the metrics accurately.
+### Confirm that data is flowing
+To validate that data is flowing as expected to light up the metrics accurately, select the **Development Requirements** tab.
-If the data isn't flowing as expected, this tab will highlight the specific attributes with issues as shown in the below example.
+If data isn't flowing as expected, this tab shows the specific attributes with issues.
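If you'd rather check outside the workbook, a quick Log Analytics query can confirm that the required attributes are arriving. The following is a minimal sketch that assumes the standard Application Insights `customEvents` schema and that the Click Analytics attributes land in `customDimensions`; adjust the time range to match your scenario.

```kusto
// Rough check that the attributes HEART needs are present in recent telemetry.
customEvents
| where timestamp > ago(1d)
| summarize
    totalEvents     = count(),
    withUserId      = countif(isnotempty(user_Id)),
    withOperationId = countif(isnotempty(operation_Id)),
    withActionType  = countif(isnotempty(tostring(customDimensions.actionType))),
    withParentId    = countif(isnotempty(tostring(customDimensions.parentId))),
    withPageName    = countif(isnotempty(tostring(customDimensions.pageName)))
```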
## Workbook structure
-The workbook shows metric trends for the HEART dimensions split over eight tabs. Each tab contains descriptions of the dimensions, the metrics contained within each dimension, and how to use them.
+The workbook shows metric trends for the HEART dimensions split over seven tabs. Each tab contains descriptions of the dimensions, the metrics contained within each dimension, and how to use them.
-A brief description of the tabs can be seen below:
+The tabs are:
-- **Summary tab** - Usage funnel metrics giving a high-level view of visits, interactions, and repeat usage. -- **Adoption** - This tab helps understand what is the penetration among the target audience, acquisition velocity, and total user base. -- **Engagement** - Frequency, depth, and breadth of usage. -- **Retention** - Repeat usage -- **Task success** - Enabling understanding of user flows and their time distributions. -- **Happiness** - We recommend using a survey tool to measure customer satisfaction score (CSAT) over a 5-point scale. In this tab, we've provided the likelihood of happiness by using usage and performance metrics. -- **Feature metrics** - Enables understanding of HEART metrics at feature granularity.
+- **Summary**: Summarizes usage funnel metrics for a high-level view of visits, interactions, and repeat usage.
+- **Adoption**: Helps you understand the penetration among the target audience, acquisition velocity, and total user base.
+- **Engagement**: Shows frequency, depth, and breadth of usage.
+- **Retention**: Shows repeat usage.
+- **Task success**: Enables understanding of user flows and their time distributions.
+- **Happiness**: We recommend using a survey tool to measure customer satisfaction score (CSAT) over a five-point scale. On this tab, we've provided the likelihood of happiness via usage and performance metrics.
+- **Feature metrics**: Enables understanding of HEART metrics at feature granularity.
> [!WARNING]
-> The HEART Workbook is currently built on logs and effectively are [log-based metrics](pre-aggregated-metrics-log-metrics.md). The accuracy of these metrics will be negatively affected by sampling and filtering.
+> The HEART workbook is currently built on logs, so its metrics are effectively [log-based metrics](pre-aggregated-metrics-log-metrics.md). The accuracy of these metrics is negatively affected by sampling and filtering.
## How HEART dimensions are defined and measured

### Happiness
-Happiness is a user-reported dimension that measures how users feel about the product offered to them.
-
-A common approach to measure happiness is to ask users a Customer Satisfaction (CSAT) question like *How satisfied are you with this product?*. Users' responses on a three or a five-point scale (for example, *no, maybe,* and *yes*) are aggregated to create a product-level score ranging from 1-5. Since user-initiated feedback tends to be negatively biased, HEART tracks happiness from surveys displayed to users at pre-defined intervals.
-
-Common happiness metrics include values such as *Average Star Rating* and *Customer Satisfaction Score*. Send these values to Azure Monitor using one of the custom ingestion methods described in [Custom sources](../data-sources.md#custom-sources).
-
+Happiness is a user-reported dimension that measures how users feel about the product offered to them.
+A common approach to measure happiness is to ask users a CSAT question like *How satisfied are you with this product?* Users' responses on a three- or five-point scale (for example, *no, maybe,* and *yes*) are aggregated to create a product-level score that ranges from 1 to 5. Because user-initiated feedback tends to be negatively biased, HEART tracks happiness from surveys displayed to users at predefined intervals.
+Common happiness metrics include values such as **Average Star Rating** and **Customer Satisfaction Score**. Send these values to Azure Monitor by using one of the custom ingestion methods described in [Custom sources](../data-sources.md#custom-sources).
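As an illustration only, if you ingest survey responses into a custom Log Analytics table, a query like the following could roll responses up into a product-level score. The table and column names here (`CsatResponses_CL`, `Score_d`) are hypothetical and depend on the ingestion method you choose.

```kusto
// Hypothetical custom table created by one of the custom ingestion methods.
CsatResponses_CL
| where TimeGenerated > ago(30d)
| summarize
    responses = count(),
    avgCsat   = avg(Score_d)      // Score_d assumed to hold the 1-5 rating
  by bin(TimeGenerated, 7d)
| order by TimeGenerated asc
```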
### Engagement
-Engagement is a measure of user activity, specifically intentional user actions such as clicks. Active usage can be broken down into three subdimensions:
-- **Activity frequency** ΓÇô Measures how often a user interacts with the product. For example, user typically interacts daily, weekly, or monthly.-- **Activity breadth** ΓÇô Measures the number of features users interact with over a given time period. For example, users interacted with a total of five features in June 2021.-- **Activity depth** ΓÇô Measures the number of features users interact with each time they launch the product. For example, users interacted with two features on every launch.
-Measuring engagement can vary based on the type of product being used. For example, a product like Microsoft Teams is expected to have a high daily usage, making it an important metric to track. But for a product like a paycheck portal, measurement would make more sense at a monthly or weekly level.
+Engagement is a measure of user activity, specifically intentional user actions such as clicks. Active usage can be broken down into three subdimensions:
->[!IMPORTANT]
->A user who does an intentional action such as clicking a button or typing an input is counted as an active user. For this reason, Engagement metrics require the [Click Analytics plugin for Application Insights](javascript-feature-extensions.md) implemented in the application.
+- **Activity frequency**: Measures how often a user interacts with the product. For example, users typically interact daily, weekly, or monthly.
+- **Activity breadth**: Measures the number of features users interact with over a specific time period. For example, users interacted with a total of five features in June 2021.
+- **Activity depth**: Measures the number of features users interact with each time they launch the product. For example, users interacted with two features on every launch.
+Measuring engagement can vary based on the type of product being used. For example, a product like Microsoft Teams is expected to have a high daily usage, which makes it an important metric to track. But for a product like a paycheck portal, measurement might make more sense at a monthly or weekly level.
+>[!IMPORTANT]
+>A user who performs an intentional action, such as clicking a button or typing an input, is counted as an active user. For this reason, engagement metrics require the [Click Analytics plug-in for Application Insights](javascript-feature-extensions.md) to be implemented in the application.
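For example, a daily active-user trend based on intentional actions can be sketched with a query like the one below, assuming the Click Analytics `actionType` attribute is emitted into `customDimensions`.

```kusto
// Daily count of users who performed at least one intentional action.
customEvents
| where timestamp > ago(30d)
| where isnotempty(tostring(customDimensions.actionType))
| summarize dailyActiveUsers = dcount(user_Id) by bin(timestamp, 1d)
| order by timestamp asc
```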
### Adoption
-Adoption enables understanding of penetration among the relevant users, who we're gaining as our user base, and how we're gaining them. Adoption metrics are useful for measuring the below scenarios:
--- Newly released products -- Newly updated products -- Marketing campaigns -
+Adoption enables understanding of penetration among the relevant users, who you're gaining as your user base, and how you're gaining them. Adoption metrics are useful for measuring:
+- Newly released products.
+- Newly updated products.
+- Marketing campaigns.
### Retention
-A Retained user is a user who was active in a specified reporting period and its previous reporting period. Retention is typically measured with the following metrics:
-| Metric | Definition | Question Answered |
+A retained user is a user who was active in a specified reporting period and its previous reporting period. Retention is typically measured with the following metrics.
+
+| Metric | Definition | Question answered |
|-|-|-|
-| Retained users | Count of active users who were also Active the previous period | How many users are staying engaged with the product? |
-| Retention | Proportion of active users from the previous period who are also Active this period | What percent of users are staying engaged with the product? |
+| Retained users | Count of active users who were also active the previous period | How many users are staying engaged with the product? |
+| Retention | Proportion of active users from the previous period who are also active this period | What percent of users are staying engaged with the product? |
>[!IMPORTANT]
->Since active users must have at least one telemetry event with an actionType, Retention metrics require the [Click Analytics plugin for Application Insights](javascript-feature-extensions.md) implemented in the application.
-
+>Because active users must have at least one telemetry event with an action type, retention metrics require the [Click Analytics plug-in for Application Insights](javascript-feature-extensions.md) to be implemented in the application.
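The following is a minimal sketch of the retention calculation over two consecutive seven-day periods, assuming active users are identified by the presence of an `actionType` value. The workbook's own logic may differ in detail.

```kusto
// Users active in the previous period who are also active in the current period.
let previousPeriod = customEvents
    | where timestamp between (ago(14d) .. ago(7d))
    | where isnotempty(tostring(customDimensions.actionType))
    | distinct user_Id;
let currentPeriod = customEvents
    | where timestamp > ago(7d)
    | where isnotempty(tostring(customDimensions.actionType))
    | distinct user_Id;
previousPeriod
| join kind=inner currentPeriod on user_Id
| summarize retainedUsers = count()
| extend retention = todouble(retainedUsers) / toscalar(previousPeriod | count)
```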
### Task success
-Task success tracks whether users can do a task efficiently and effectively using the product's features. Many products include structures that are designed to funnel users through completing a task. Some examples include:
-- Add items to a cart and then complete a purchase-- Search a keyword and then click on a result-- Start a new account and then complete account registration+
+Task success tracks whether users can do a task efficiently and effectively by using the product's features. Many products include structures that are designed to funnel users through completing a task. Some examples include:
+
+- Adding items to a cart and then completing a purchase.
+- Searching a keyword and then selecting a result.
+- Starting a new account and then completing account registration.
A successful task meets three requirements:-- Expected task flow - The intended task flow of the feature was completed by the user and aligns with the expected task flow.-- High performance - The intended functionality of the feature was accomplished in a reasonable amount of time.-- High reliability ΓÇô The intended functionality of the feature was accomplished without failure.
+- **Expected task flow**: The intended task flow of the feature was completed by the user and aligns with the expected task flow.
+- **High performance**: The intended functionality of the feature was accomplished in a reasonable amount of time.
+- **High reliability**: The intended functionality of the feature was accomplished without failure.
-A task is considered unsuccessful if any of the above requirements isn't met.
+A task is considered unsuccessful if any of the preceding requirements isn't met.
>[!IMPORTANT]
->Task success metrics require the [Click Analytics plugin for Application Insights](javascript-feature-extensions.md) implemented in the application.
+>Task success metrics require the [Click Analytics plug-in for Application Insights](javascript-feature-extensions.md) to be implemented in the application.
-Set up a custom task using the below parameters.
+Set up a custom task by using the following parameters.
| Parameter | Description |
|-|-|
-| First step | The feature that starts the task. Using the cart/purchase example above, "adding items to a cart" would be the First step. |
-| Expected task duration | The time window to consider a completed task a success. Any tasks completed outside of this constraint is considered a failure. Not all tasks necessarily have a time constraint: for such tasks, select "No Time Expectation". |
-| Last step | The feature that completes the task. Using the cart/purchase example above, "purchasing items from the cart" would be the Last step. |
---------
+| First step | The feature that starts the task. In the cart/purchase example, **Adding items to a cart** is the first step. |
+| Expected task duration | The time window to consider a completed task a success. Any tasks completed outside of this constraint are considered a failure. Not all tasks necessarily have a time constraint. For such tasks, select **No Time Expectation**. |
+| Last step | The feature that completes the task. In the cart/purchase example, **Purchasing items from the cart** is the last step. |
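As a rough illustration of how these parameters translate into telemetry, the query below pairs a first step and a last step and counts tasks completed within the expected duration. The feature names `AddToCart` and `Purchase` are placeholders, not values from the workbook; the workbook performs this correlation for you once the parameters are set.

```kusto
// Sketch: count users who completed the task within the expected duration.
let expectedTaskDuration = 10m;                                  // assumed time window
let firstStep = customEvents
    | where tostring(customDimensions.parentId) == "AddToCart"   // placeholder feature
    | project user_Id, startTime = timestamp;
let lastStep = customEvents
    | where tostring(customDimensions.parentId) == "Purchase"    // placeholder feature
    | project user_Id, endTime = timestamp;
firstStep
| join kind=inner lastStep on user_Id
| where endTime > startTime and endTime - startTime <= expectedTaskDuration
| summarize successfulTasks = dcount(user_Id)
```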
## Frequently asked questions
-### How do I view the data at different grains? (Daily, monthly, weekly)?
-You can click on the 'Date Grain' filter to change the grain (As shown below)
-
+### How do I view the data at different grains (daily, monthly, or weekly)?
+You can select the **Date Grain** filter to change the grain. The filter is available across all the dimension tabs.
### How do I access insights from my application that aren't available on the HEART workbooks?
-You can dig into the data that feeds the HEART workbook if the visuals don't answer all your questions. To do this task, navigate to 'Logs' under 'Monitoring' section and query the customEvents table. Some of the click analytics attributes are contained within the customDimensions field. A sample query is shown in the image below:
-
+You can dig into the data that feeds the HEART workbook if the visuals don't answer all your questions. To do this task, under the **Monitoring** section, select **Logs** and query the `customEvents` table. Some of the Click Analytics attributes are contained within the `customDimensions` field. A sample query is shown here.
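For example, a query along these lines (a sketch, not the exact query pictured in the workbook documentation) surfaces the Click Analytics attributes stored in `customDimensions`.

```kusto
// Top clicked features per page over the last seven days.
customEvents
| where timestamp > ago(7d)
| extend
    actionType = tostring(customDimensions.actionType),
    pageName   = tostring(customDimensions.pageName),
    parentId   = tostring(customDimensions.parentId)
| where isnotempty(actionType)
| summarize clicks = count() by pageName, parentId
| order by clicks desc
| take 20
```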
-Navigate to the [Azure Monitor Logs Overview](../logs/data-platform-logs.md) page to learn more about Logs in Azure Monitor.
+To learn more about Logs in Azure Monitor, see [Azure Monitor Logs overview](../logs/data-platform-logs.md).
### Can I edit visuals in the workbook?
-Yes, when you click on the public template of the workbook, you can navigate to the top-left corner, click edit, and make your changes.
--
+Yes. When you select the public template of the workbook, select **Edit** and make your changes.
-After making your changes, click 'Done Editing' and then the 'Save' icon.
+After you make your changes, select **Done Editing**, and then select the **Save** icon.
-To view your saved workbook, navigate to the 'Workbooks' section under 'Monitoring', and then click on the 'Workbooks' tab instead of the 'Public templates' tab. You'll see a copy of your customized workbook there (Shown below). You can make any further changes you want on this particular copy.
+To view your saved workbook, under **Monitoring**, go to the **Workbooks** section and then select the **Workbooks** tab. A copy of your customized workbook appears there. You can make any further changes you want in this copy.
-For more on editing workbook templates, refer to the [Azure Workbook templates](../visualize/workbooks-templates.md) page.
--
-
+For more on editing workbook templates, see [Azure Workbooks templates](../visualize/workbooks-templates.md).
## Next steps-- Set up the [Click Analytics Auto Collection Plugin](javascript-feature-extensions.md) via npm.-- Check out the [GitHub Repository](https://github.com/microsoft/ApplicationInsights-JS/tree/master/extensions/applicationinsights-clickanalytics-js) and [npm Package](https://www.npmjs.com/package/@microsoft/applicationinsights-clickanalytics-js) for the Click Analytics Auto Collection Plugin.-- Use [Events Analysis in Usage Experience](usage-segmentation.md) to analyze top clicks and slice by available dimensions.-- Find click data under content field within customDimensions attribute in CustomEvents table in [Log Analytics](../logs/log-analytics-tutorial.md#write-a-query). See [Sample App](https://go.microsoft.com/fwlink/?linkid=2152871) for more guidance.-- Learn more about [Google's HEART framework](https://storage.googleapis.com/pub-tools-public-publication-data/pdf/36299.pdf).-
-
-
-
+- Set up the [Click Analytics Autocollection plug-in](javascript-feature-extensions.md) via npm.
+- Check out the [GitHub repository](https://github.com/microsoft/ApplicationInsights-JS/tree/master/extensions/applicationinsights-clickanalytics-js) and [npm Package](https://www.npmjs.com/package/@microsoft/applicationinsights-clickanalytics-js) for the Click Analytics Autocollection plug-in.
+- Use [Events Analysis in the Usage experience](usage-segmentation.md) to analyze top clicks and slice by available dimensions.
+- Find click data under the content field within the `customDimensions` attribute in the `CustomEvents` table in [Log Analytics](../logs/log-analytics-tutorial.md#write-a-query). See a [sample app](https://go.microsoft.com/fwlink/?linkid=2152871) for more guidance.
+- Learn more about the [Google HEART framework](https://storage.googleapis.com/pub-tools-public-publication-data/pdf/36299.pdf).
azure-monitor Work Item Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/work-item-integration.md
To edit your template, go to the **Work Items** tab under *Configure* and select
:::image type="content" source="./media/work-item-integration/edit-template.png" alt-text=" Screenshot of work item tab with the edit pencil icon selected.":::
-Select edit ![edit icon](./medi). The work item information is generated using the keyword query language. You can modify the queries to add more context essential to your team. When you are done editing, save the workbook by selecting the save icon ![save icon](./media/work-item-integration/save-icon.png) in the top toolbar.
+Select edit :::image type="content" source="./media/work-item-integration/edit-icon.png" lightbox="./media/work-item-integration/edit-icon.png" alt-text="edit icon"::: in the top toolbar.
:::image type="content" source="./media/work-item-integration/edit-workbook.png" alt-text=" Screenshot of the work item template workbook in edit mode." lightbox="./media/work-item-integration/edit-workbook.png":::
azure-monitor Container Insights Metric Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-metric-alerts.md
Your cluster must be configured to send metrics to [Azure Monitor managed servic
### Enable Prometheus alert rules
-The only method currently available for creating Prometheus alert rules is an Azure Resource Manager template (ARM template).
+The methods currently available for creating Prometheus alert rules are Azure Resource Manager templates (ARM templates) and Bicep templates.
+
+### [ARM template](#tab/arm-template)
1. Download the template that includes the set of alert rules you want to enable. For a list of the rules for each, see [Alert rule details](#alert-rule-details).
   - [Community alerts](https://aka.ms/azureprometheus-communityalerts)
   - [Recommended alerts](https://aka.ms/azureprometheus-recommendedalerts)
-1. Deploy the template by using any standard methods for installing ARM templates. For guidance, see [ARM template samples for Azure Monitor](../resource-manager-samples.md#deploy-the-sample-templates).
+2. Deploy the template by using any standard methods for installing ARM templates. For guidance, see [ARM template samples for Azure Monitor](../resource-manager-samples.md#deploy-the-sample-templates).
+
+### [Bicep template](#tab/bicep)
+
+1. To deploy community and recommended alerts, use this [template](https://aka.ms/azureprometheus-alerts-bicep) and follow the README.md file in the same folder for deployment instructions.
> [!NOTE]
> Although you can create the Prometheus alert in a resource group different from the target resource, use the same resource group as your target resource.
azure-monitor Metrics Supported https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/metrics-supported.md
- Title: Azure Monitor supported metrics by resource type
-description: List of metrics available for each resource type with Azure Monitor.
---- Previously updated : 04/02/2023----
-# Supported metrics with Azure Monitor
-
-> [!NOTE]
-> This list is largely auto-generated. Any modification made to this list via GitHub might be written over without warning. Contact the author of this article for details on how to make permanent updates.
-
-Date list was last updated: 04/02/2023.
-
-Azure Monitor provides several ways to interact with metrics, including charting them in the Azure portal, accessing them through the REST API, or querying them by using PowerShell or the Azure CLI (Command Line Interface).
-
-This article is a complete list of all platform (that is, automatically collected) metrics currently available with the consolidated metric pipeline in Azure Monitor. Metrics changed or added after the date at the top of this article might not yet appear in the list. To query for and access the list of metrics programmatically, use the [2018-01-01 api-version](/rest/api/monitor/metricdefinitions). Other metrics not in this list might be available in the portal or through legacy APIs.
-
-The metrics are organized by resource provider and resource type. For a list of services and the resource providers and types that belong to them, see [Resource providers for Azure services](../../azure-resource-manager/management/azure-services-resource-providers.md).
-
-## Exporting platform metrics to other locations
-
-You can export the platform metrics from the Azure monitor pipeline to other locations in one of two ways:
--- Use the [metrics REST API](/rest/api/monitor/metrics/list).-- Use [diagnostic settings](../essentials/diagnostic-settings.md) to route platform metrics to:
- - Azure Storage.
- - Azure Monitor Logs (and thus Log Analytics).
- - Event hubs, which is how you get them to non-Microsoft systems.
-
-Using diagnostic settings is the easiest way to route the metrics, but there are some limitations:
--- **Exportability**. All metrics are exportable through the REST API, but some can't be exported through diagnostic settings because of intricacies in the Azure Monitor back end. The column "Exportable via Diagnostic Settings" in the following tables lists which metrics can be exported in this way. --- **Multi-dimensional metrics**. Sending multi-dimensional metrics to other locations via diagnostic settings is not currently supported. Metrics with dimensions are exported as flattened single-dimensional metrics, aggregated across dimension values. -
- For example, the *Incoming Messages* metric on an event hub can be explored and charted on a per-queue level. But when the metric is exported via diagnostic settings, it will be represented as all incoming messages across all queues in the event hub.
-
-## Guest OS and host OS metrics
-
-Metrics for the guest operating system (guest OS) that runs in Azure Virtual Machines, Service Fabric, and Cloud Services are *not* listed here. Guest OS metrics must be collected through one or more agents that run on or as part of the guest operating system. Guest OS metrics include performance counters that track guest CPU percentage or memory usage, both of which are frequently used for autoscaling or alerting.
-
-Host OS metrics *are* available and listed in the tables. Host OS metrics relate to the Hyper-V session that's hosting your guest OS session.
-
-> [!TIP]
-> A best practice is to use and configure the Azure Monitor agent to send guest OS performance metrics into the same Azure Monitor metric database where platform metrics are stored. The agent routes guest OS metrics through the [custom metrics](../essentials/metrics-custom-overview.md) API. You can then chart, alert, and otherwise use guest OS metrics like platform metrics.
->
-> Alternatively or in addition, you can send the guest OS metrics to Azure Monitor Logs by using the same agent. There you can query on those metrics in combination with non-metric data by using Log Analytics. Standard [Log Analytics workspace costs](https://azure.microsoft.com/pricing/details/monitor/) would then apply.
-
-The Azure Monitor agent replaces the Azure Diagnostics extension and Log Analytics agent, which were previously used for guest OS routing. For important additional information, see [Overview of Azure Monitor agents](../agents/agents-overview.md).
-
-## Table formatting
-
-This latest update adds a new column and reorders the metrics to be alphabetical. The additional information means that the tables might have a horizontal scroll bar at the bottom, depending on the width of your browser window. If you seem to be missing information, use the scroll bar to see the entirety of the table.
-+
+ Title: Azure Monitor supported metrics by resource type
+description: List of metrics available for each resource type with Azure Monitor.
++++ Last updated : 04/13/2023++++
+# Supported metrics with Azure Monitor
+
+> [!NOTE]
+> This list is largely auto-generated. Any modification made to this list via GitHub might be written over without warning. Contact the author of this article for details on how to make permanent updates.
+
+Date list was last updated: 04/13/2023.
+
+Azure Monitor provides several ways to interact with metrics, including charting them in the Azure portal, accessing them through the REST API, or querying them by using PowerShell or the Azure CLI (Command Line Interface).
+
+This article is a complete list of all platform (that is, automatically collected) metrics currently available with the consolidated metric pipeline in Azure Monitor. Metrics changed or added after the date at the top of this article might not yet appear in the list. To query for and access the list of metrics programmatically, use the [2018-01-01 api-version](/rest/api/monitor/metricdefinitions). Other metrics not in this list might be available in the portal or through legacy APIs.
+
+The metrics are organized by resource provider and resource type. For a list of services and the resource providers and types that belong to them, see [Resource providers for Azure services](../../azure-resource-manager/management/azure-services-resource-providers.md).
+
+## Exporting platform metrics to other locations
+
+You can export the platform metrics from the Azure monitor pipeline to other locations in one of two ways:
+
+- Use the [metrics REST API](/rest/api/monitor/metrics/list).
+- Use [diagnostic settings](../essentials/diagnostic-settings.md) to route platform metrics to:
+ - Azure Storage.
+ - Azure Monitor Logs (and thus Log Analytics).
+ - Event hubs, which is how you get them to non-Microsoft systems.
+
+Using diagnostic settings is the easiest way to route the metrics, but there are some limitations:
+
+- **Exportability**. All metrics are exportable through the REST API, but some can't be exported through diagnostic settings because of intricacies in the Azure Monitor back end. The column "Exportable via Diagnostic Settings" in the following tables lists which metrics can be exported in this way.
+
+- **Multi-dimensional metrics**. Sending multi-dimensional metrics to other locations via diagnostic settings is not currently supported. Metrics with dimensions are exported as flattened single-dimensional metrics, aggregated across dimension values.
+
+ For example, the *Incoming Messages* metric on an event hub can be explored and charted on a per-queue level. But when the metric is exported via diagnostic settings, it will be represented as all incoming messages across all queues in the event hub.
+
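For instance, once platform metrics are routed to a Log Analytics workspace through a diagnostic setting, they land in the `AzureMetrics` table and can be queried with a sketch like the following; the resource provider shown is just an example.

```kusto
// Platform metrics routed to Log Analytics via diagnostic settings.
AzureMetrics
| where TimeGenerated > ago(1h)
| where ResourceProvider == "MICROSOFT.EVENTHUB"      // example provider
| summarize avgValue = avg(Average) by MetricName, bin(TimeGenerated, 5m)
| order by TimeGenerated asc
```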
+## Guest OS and host OS metrics
+
+Metrics for the guest operating system (guest OS) that runs in Azure Virtual Machines, Service Fabric, and Cloud Services are *not* listed here. Guest OS metrics must be collected through one or more agents that run on or as part of the guest operating system. Guest OS metrics include performance counters that track guest CPU percentage or memory usage, both of which are frequently used for autoscaling or alerting.
+
+Host OS metrics *are* available and listed in the tables. Host OS metrics relate to the Hyper-V session that's hosting your guest OS session.
+
+> [!TIP]
+> A best practice is to use and configure the Azure Monitor agent to send guest OS performance metrics into the same Azure Monitor metric database where platform metrics are stored. The agent routes guest OS metrics through the [custom metrics](../essentials/metrics-custom-overview.md) API. You can then chart, alert, and otherwise use guest OS metrics like platform metrics.
+>
+> Alternatively or in addition, you can send the guest OS metrics to Azure Monitor Logs by using the same agent. There you can query on those metrics in combination with non-metric data by using Log Analytics. Standard [Log Analytics workspace costs](https://azure.microsoft.com/pricing/details/monitor/) would then apply.
+
+The Azure Monitor agent replaces the Azure Diagnostics extension and Log Analytics agent, which were previously used for guest OS routing. For important additional information, see [Overview of Azure Monitor agents](../agents/agents-overview.md).
+
+## Table formatting
+
+This latest update adds a new column and reorders the metrics to be alphabetical. The additional information means that the tables might have a horizontal scroll bar at the bottom, depending on the width of your browser window. If you seem to be missing information, use the scroll bar to see the entirety of the table.
## Microsoft.AAD/DomainServices <!-- Data source : naam-->
This latest update adds a new column and reorders the metrics to be alphabetical
|\Security System-Wide Statistics\Kerberos Authentications |Yes |Kerberos Authentications |CountPerSecond |Average |This metric indicates the number of times that clients use a ticket to authenticate to this computer per second. It is backed by performance counter data from the domain controller, and can be filtered or splitted by role instance. |DataCenter, Tenant, Role, RoleInstance, ScaleUnit | |\Security System-Wide Statistics\NTLM Authentications |Yes |NTLM Authentications |CountPerSecond |Average |This metric indicates the number of NTLM authentications processed per second for the Active Directory on this domain contrller or for local accounts on this member server. It is backed by performance counter data from the domain controller, and can be filtered or splitted by role instance. |DataCenter, Tenant, Role, RoleInstance, ScaleUnit | - ## microsoft.aadiam/azureADMetrics <!-- Data source : naam-->
This latest update adds a new column and reorders the metrics to be alphabetical
|SamlFailureCount |Yes |SamlFailureCount |Count |Count |Saml token failure count for relying party scenario |No Dimensions | |SamlSuccessCount |Yes |SamlSuccessCount |Count |Count |Saml token scuccess count for relying party scenario |No Dimensions | - ## Microsoft.AnalysisServices/servers <!-- Data source : arm-->
This latest update adds a new column and reorders the metrics to be alphabetical
|VertiPaqPaged |Yes |Memory: VertiPaq Paged |Bytes |Average |Bytes of paged memory in use for in-memory data. |ServerResourceType | |virtual_bytes_metric |Yes |Virtual Bytes |Bytes |Average |Virtual bytes. |ServerResourceType | - ## Microsoft.ApiManagement/service <!-- Data source : naam-->
This latest update adds a new column and reorders the metrics to be alphabetical
|UnauthorizedRequests |Yes |Unauthorized Gateway Requests (Deprecated) |Count |Total |Number of unauthorized gateway requests - Use multi-dimension request metric with GatewayResponseCodeCategory dimension instead |Location, Hostname | |WebSocketMessages |Yes |WebSocket Messages (Preview) |Count |Total |Count of WebSocket messages based on selected source and destination |Location, Source, Destination | - ## Microsoft.App/containerapps <!-- Data source : naam-->
This latest update adds a new column and reorders the metrics to be alphabetical
|UsageNanoCores |Yes |CPU Usage |NanoCores |Average |CPU consumed by the container app, in nano cores. 1,000,000,000 nano cores = 1 core |revisionName, podName | |WorkingSetBytes |Yes |Memory Working Set Bytes |Bytes |Average |Container App working set memory used in bytes. |revisionName, podName | - ## Microsoft.App/managedEnvironments <!-- Data source : naam-->
This latest update adds a new column and reorders the metrics to be alphabetical
|EnvCoresQuotaLimit |Yes |Cores Quota Limit |Count |Average |The cores quota limit of managed environment |No Dimensions | |EnvCoresQuotaUtilization |Yes |Percentage Cores Used Out Of Limit |Percent |Average |The cores quota utilization of managed environment |No Dimensions | - ## Microsoft.AppConfiguration/configurationStores <!-- Data source : naam-->
This latest update adds a new column and reorders the metrics to be alphabetical
|HttpIncomingRequestDuration |Yes |HttpIncomingRequestDuration |Count |Average |Latency on an http request. |StatusCode, Authentication, Endpoint | |ThrottledHttpRequestCount |Yes |ThrottledHttpRequestCount |Count |Total |Throttled http requests. |Endpoint | - ## Microsoft.AppPlatform/Spring <!-- Data source : arm-->
This latest update adds a new column and reorders the metrics to be alphabetical
|working-set |Yes |working-set |Count |Average |Amount of working set used by the process (MB) |Deployment, AppName, Pod | |WorkingSetBytes |Yes |Memory Working Set Bytes |Bytes |Average |Spring App working set memory used in bytes. |containerAppName, podName | - ## Microsoft.Automation/automationAccounts <!-- Data source : naam-->
This latest update adds a new column and reorders the metrics to be alphabetical
|TotalUpdateDeploymentMachineRuns |Yes |Total Update Deployment Machine Runs |Count |Total |Total software update deployment machine runs in a software update deployment run |Status, TargetComputer, SoftwareUpdateConfigurationName, SoftwareUpdateConfigurationRunId | |TotalUpdateDeploymentRuns |Yes |Total Update Deployment Runs |Count |Total |Total software update deployment runs |Status, SoftwareUpdateConfigurationName | - ## microsoft.avs/privateClouds <!-- Data source : naam-->
This latest update adds a new column and reorders the metrics to be alphabetical
|UsageAverage |Yes |Average Memory Usage |Percent |Average |Memory usage as percentage of total configured or available memory |clustername | |UsedLatest |Yes |Datastore Disk Used |Bytes |Average |The total amount of disk used in the datastore |dsname | - ## microsoft.azuresphere/catalogs <!-- Data source : naam-->
This latest update adds a new column and reorders the metrics to be alphabetical
|DeviceAttestationCount |Yes |Device Attestation Requests |Count |Count |Count of all the requests sent by an Azure Sphere device for authentication and attestation. |DeviceId, CatalogId, StatusCodeClass | |DeviceErrorCount |Yes |Device Errors |Count |Count |Count of all the errors encountered by an Azure Sphere device. |DeviceId, CatalogId, ErrorCategory, ErrorClass, ErrorType | - ## Microsoft.Batch/batchaccounts <!-- Data source : naam-->
This latest update adds a new column and reorders the metrics to be alphabetical
|UnusableNodeCount |No |Unusable Node Count |Count |Total |Number of unusable nodes |No Dimensions | |WaitingForStartTaskNodeCount |No |Waiting For Start Task Node Count |Count |Total |Number of nodes waiting for the Start Task to complete |No Dimensions | - ## microsoft.bing/accounts <!-- Data source : naam-->
This latest update adds a new column and reorders the metrics to be alphabetical
|TotalCalls |Yes |Total Calls |Count |Total |Total number of calls |ApiName, ServingRegion, StatusCode | |TotalErrors |Yes |Total Errors |Count |Total |Number of calls with any error (HTTP status code 4xx or 5xx) |ApiName, ServingRegion, StatusCode | - ## microsoft.botservice/botservices <!-- Data source : naam-->
This latest update adds a new column and reorders the metrics to be alphabetical
|RequestLatency |Yes |Request Latency |Milliseconds |Total |Time taken by the server to process the request |Operation, Authentication, Protocol, DataCenter | |RequestsTraffic |Yes |Requests Traffic |Percent |Count |Number of Requests Made |Operation, Authentication, Protocol, StatusCode, StatusCodeClass, DataCenter | - ## Microsoft.BotService/botServices/channels <!-- Data source : arm-->
This latest update adds a new column and reorders the metrics to be alphabetical
|RequestLatency |Yes |Requests Latencies |Milliseconds |Average |How long it takes to get request response |Operation, Authentication, Protocol, ResourceId, Region | |RequestsTraffic |Yes |Requests Traffic |Count |Average |Number of requests within a given period of time |Operation, Authentication, Protocol, ResourceId, Region, StatusCode, StatusCodeClass, StatusText | - ## Microsoft.BotService/botServices/connections <!-- Data source : arm-->
This latest update adds a new column and reorders the metrics to be alphabetical
|RequestLatency |Yes |Requests Latencies |Milliseconds |Average |How long it takes to get request response |Operation, Authentication, Protocol, ResourceId, Region | |RequestsTraffic |Yes |Requests Traffic |Count |Average |Number of requests within a given period of time |Operation, Authentication, Protocol, ResourceId, Region, StatusCode, StatusCodeClass, StatusText | - ## Microsoft.BotService/checknameavailability <!-- Data source : arm-->
This latest update adds a new column and reorders the metrics to be alphabetical
|RequestLatency |Yes |Requests Latencies |Milliseconds |Average |How long it takes to get request response |Operation, Authentication, Protocol, ResourceId, Region | |RequestsTraffic |Yes |Requests Traffic |Count |Average |Number of requests within a given period of time |Operation, Authentication, Protocol, ResourceId, Region, StatusCode, StatusCodeClass, StatusText | - ## Microsoft.BotService/hostsettings <!-- Data source : arm-->
This latest update adds a new column and reorders the metrics to be alphabetical
|RequestLatency |Yes |Requests Latencies |Milliseconds |Average |How long it takes to get request response |Operation, Authentication, Protocol, ResourceId, Region | |RequestsTraffic |Yes |Requests Traffic |Count |Average |Number of requests within a given period of time |Operation, Authentication, Protocol, ResourceId, Region, StatusCode, StatusCodeClass, StatusText | - ## Microsoft.BotService/listauthserviceproviders <!-- Data source : arm-->
This latest update adds a new column and reorders the metrics to be alphabetical
|RequestLatency |Yes |Requests Latencies |Milliseconds |Average |How long it takes to get request response |Operation, Authentication, Protocol, ResourceId, Region | |RequestsTraffic |Yes |Requests Traffic |Count |Average |Number of requests within a given period of time |Operation, Authentication, Protocol, ResourceId, Region, StatusCode, StatusCodeClass, StatusText | - ## Microsoft.BotService/listqnamakerendpointkeys <!-- Data source : arm-->
This latest update adds a new column and reorders the metrics to be alphabetical
|RequestLatency |Yes |Requests Latencies |Milliseconds |Average |How long it takes to get request response |Operation, Authentication, Protocol, ResourceId, Region | |RequestsTraffic |Yes |Requests Traffic |Count |Average |Number of requests within a given period of time |Operation, Authentication, Protocol, ResourceId, Region, StatusCode, StatusCodeClass, StatusText | - ## Microsoft.Cache/redis <!-- Data source : naam-->
This latest update adds a new column and reorders the metrics to be alphabetical
|usedmemoryRss8 |Yes |Used Memory RSS (Shard 8) |Bytes |Maximum |The amount of cache memory used in MB, including fragmentation and metadata. For more details, see https://aka.ms/redis/metrics. |No Dimensions | |usedmemoryRss9 |Yes |Used Memory RSS (Shard 9) |Bytes |Maximum |The amount of cache memory used in MB, including fragmentation and metadata. For more details, see https://aka.ms/redis/metrics. |No Dimensions | - ## Microsoft.Cache/redisEnterprise <!-- Data source : arm-->
This latest update adds a new column and reorders the metrics to be alphabetical
|usedmemory |Yes |Used Memory |Bytes |Maximum |The amount of cache memory used for key/value pairs in the cache in MB. For more details, see https://aka.ms/redis/enterprise/metrics. |No Dimensions | |usedmemorypercentage |Yes |Used Memory Percentage |Percent |Maximum |The percentage of cache memory used for key/value pairs. For more details, see https://aka.ms/redis/enterprise/metrics. |InstanceId | - ## Microsoft.Cdn/cdnwebapplicationfirewallpolicies <!-- Data source : naam-->
This latest update adds a new column and reorders the metrics to be alphabetical
|||||||| |WebApplicationFirewallRequestCount |Yes |Web Application Firewall Request Count |Count |Total |The number of client requests processed by the Web Application Firewall |PolicyName, RuleName, Action | - ## Microsoft.Cdn/profiles <!-- Data source : naam-->
This latest update adds a new column and reorders the metrics to be alphabetical
|TotalLatency |Yes |Total Latency |MilliSeconds |Average |The time calculated from when the client request was received by the HTTP/S proxy until the client acknowledged the last response byte from the HTTP/S proxy |HttpStatus, HttpStatusGroup, ClientRegion, ClientCountry, Endpoint | |WebApplicationFirewallRequestCount |Yes |Web Application Firewall Request Count |Count |Total |The number of client requests processed by the Web Application Firewall |PolicyName, RuleName, Action | - ## Microsoft.ClassicCompute/domainNames/slots/roles <!-- Data source : arm-->
This latest update adds a new column and reorders the metrics to be alphabetical
|Network Out |Yes |Network Out |Bytes |Total |The number of bytes out on all network interfaces by the Virtual Machine(s) (Outgoing Traffic). |RoleInstanceId | |Percentage CPU |Yes |Percentage CPU |Percent |Average |The percentage of allocated compute units that are currently in use by the Virtual Machine(s). |RoleInstanceId | - ## Microsoft.ClassicCompute/virtualMachines <!-- Data source : arm-->
This latest update adds a new column and reorders the metrics to be alphabetical
|Network Out |Yes |Network Out |Bytes |Total |The number of bytes out on all network interfaces by the Virtual Machine(s) (Outgoing Traffic). |No Dimensions | |Percentage CPU |Yes |Percentage CPU |Percent |Average |The percentage of allocated compute units that are currently in use by the Virtual Machine(s). |No Dimensions | - ## Microsoft.ClassicStorage/storageAccounts <!-- Data source : arm-->
This latest update adds a new column and reorders the metrics to be alphabetical
|Transactions |Yes |Transactions |Count |Total |The number of requests made to a storage service or the specified API operation. This number includes successful and failed requests, as well as requests which produced errors. Use ResponseType dimension for the number of different type of response. |ResponseType, GeoType, ApiName, Authentication | |UsedCapacity |No |Used capacity |Bytes |Average |Account used capacity |No Dimensions | - ## Microsoft.ClassicStorage/storageAccounts/blobServices <!-- Data source : arm-->
This latest update adds a new column and reorders the metrics to be alphabetical
|SuccessServerLatency |Yes |Success Server Latency |Milliseconds |Average |The latency used by Azure Storage to process a successful request, in milliseconds. This value does not include the network latency specified in SuccessE2ELatency. |GeoType, ApiName, Authentication | |Transactions |Yes |Transactions |Count |Total |The number of requests made to a storage service or the specified API operation. This number includes successful and failed requests, as well as requests which produced errors. Use ResponseType dimension for the number of different type of response. |ResponseType, GeoType, ApiName, Authentication | - ## Microsoft.ClassicStorage/storageAccounts/fileServices <!-- Data source : arm-->
This latest update adds a new column and reorders the metrics to be alphabetical
|SuccessServerLatency |Yes |Success Server Latency |Milliseconds |Average |The latency used by Azure Storage to process a successful request, in milliseconds. This value does not include the network latency specified in SuccessE2ELatency. |GeoType, ApiName, Authentication, FileShare | |Transactions |Yes |Transactions |Count |Total |The number of requests made to a storage service or the specified API operation. This number includes successful and failed requests, as well as requests which produced errors. Use ResponseType dimension for the number of different type of response. |ResponseType, GeoType, ApiName, Authentication, FileShare | - ## Microsoft.ClassicStorage/storageAccounts/queueServices <!-- Data source : arm-->
This latest update adds a new column and reorders the metrics to be alphabetical
|SuccessServerLatency |Yes |Success Server Latency |Milliseconds |Average |The latency used by Azure Storage to process a successful request, in milliseconds. This value does not include the network latency specified in SuccessE2ELatency. |GeoType, ApiName, Authentication | |Transactions |Yes |Transactions |Count |Total |The number of requests made to a storage service or the specified API operation. This number includes successful and failed requests, as well as requests which produced errors. Use ResponseType dimension for the number of different type of response. |ResponseType, GeoType, ApiName, Authentication | - ## Microsoft.ClassicStorage/storageAccounts/tableServices <!-- Data source : arm-->
This latest update adds a new column and reorders the metrics to be alphabetical
|TableEntityCount |No |Table Entity Count |Count |Average |The number of table entities in the storage account's Table service. |No Dimensions | |Transactions |Yes |Transactions |Count |Total |The number of requests made to a storage service or the specified API operation. This number includes successful and failed requests, as well as requests which produced errors. Use ResponseType dimension for the number of different type of response. |ResponseType, GeoType, ApiName, Authentication | - ## Microsoft.Cloudtest/hostedpools <!-- Data source : naam-->
This latest update adds a new column and reorders the metrics to be alphabetical
|Starting |Yes |Starting |Count |Average |Resources that are starting |PoolId, SKU, Images, ProviderName | |Total |Yes |Total |Count |Average |Total Number of Resources |PoolId, SKU, Images, ProviderName | - ## Microsoft.Cloudtest/pools <!-- Data source : naam-->
This latest update adds a new column and reorders the metrics to be alphabetical
|Starting |Yes |Starting |Count |Average |Resources that are starting |PoolId, SKU, Images, ProviderName | |Total |Yes |Total |Count |Average |Total Number of Resources |PoolId, SKU, Images, ProviderName | - ## Microsoft.ClusterStor/nodes <!-- Data source : naam-->
This latest update adds a new column and reorders the metrics to be alphabetical
|TotalRead |No |TotalRead |BytesPerSecond |Average |The total lustre file system read per second |filesystem_name, category, system | |TotalWrite |No |TotalWrite |BytesPerSecond |Average |The total lustre file system write per second |filesystem_name, category, system | - ## Microsoft.CodeSigning/codesigningaccounts <!-- Data source : naam--> |Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions| |||||||| |SignCompleted |Yes |SignCompleted |Count |Count |Completed Sign Request |CertType, Region, TenantId |
-|SignFailed |Yes |SignFailed |Count |Count |Failed Sign Request |CertType, Region, TenantId |
- ## Microsoft.CognitiveServices/accounts <!-- Data source : naam-->
This latest update adds a new column and reorders the metrics to be alphabetical
|VoiceModelHostingHours |Yes |Voice Model Hosting Hours |Count |Total |Number of Hours. |ApiName, FeatureName, UsageChannel, Region | |VoiceModelTrainingMinutes |Yes |Voice Model Training Minutes |Count |Total |Number of Minutes. |ApiName, FeatureName, UsageChannel, Region | - ## Microsoft.Communication/CommunicationServices <!-- Data source : naam-->
This latest update adds a new column and reorders the metrics to be alphabetical
|DeliveryStatusUpdate |Yes |Email Service Delivery Status Updates |Count |Count |Email Communication Services message delivery results. |MessageStatus, Result | |UserEngagement |Yes |Email Service User Engagement |Count |Count |Email Communication Services user engagement metrics. |EngagementType | - ## Microsoft.Compute/cloudservices <!-- Data source : naam-->
This latest update adds a new column and reorders the metrics to be alphabetical
|Network Out Total |Yes |Network Out Total |Bytes |Total |The number of bytes out on all network interfaces by the Virtual Machine(s) (Outgoing Traffic) |RoleInstanceId, RoleId | |Percentage CPU |Yes |Percentage CPU |Percent |Average |The percentage of allocated compute units that are currently in use by the Virtual Machine(s) |RoleInstanceId, RoleId | - ## Microsoft.Compute/cloudServices/roles <!-- Data source : arm-->
This latest update adds a new column and reorders the metrics to be alphabetical
|Network Out Total |Yes |Network Out Total |Bytes |Total |The number of bytes out on all network interfaces by the Virtual Machine(s) (Outgoing Traffic) |RoleInstanceId, RoleId | |Percentage CPU |Yes |Percentage CPU |Percent |Average |The percentage of allocated compute units that are currently in use by the Virtual Machine(s) |RoleInstanceId, RoleId | - ## microsoft.compute/disks <!-- Data source : naam-->
This latest update adds a new column and reorders the metrics to be alphabetical
|Composite Disk Write Operations/sec |No |Disk Write Operations/sec(Preview) |CountPerSecond |Average |Number of Write IOs performed on a disk during monitoring period, please note, this metric is in preview and is subject to change before becoming generally available |No Dimensions | |DiskPaidBurstIOPS |No |Disk On-demand Burst Operations(Preview) |Count |Average |The accumulated operations of burst transactions used for disks with on-demand burst enabled. Emitted on an hour interval |No Dimensions | - ## Microsoft.Compute/virtualMachines <!-- Data source : naam-->
This latest update adds a new column and reorders the metrics to be alphabetical
|VM Uncached IOPS Consumed Percentage |Yes |VM Uncached IOPS Consumed Percentage |Percent |Average |Percentage of uncached disk IOPS consumed by the VM. Only available on VM series that support premium storage. |No Dimensions | |VmAvailabilityMetric |Yes |VM Availability Metric (Preview) |Count |Average |Measure of Availability of Virtual machines over time. |No Dimensions | - ## Microsoft.Compute/virtualmachineScaleSets <!-- Data source : naam-->
This latest update adds a new column and reorders the metrics to be alphabetical
|VM Uncached IOPS Consumed Percentage |Yes |VM Uncached IOPS Consumed Percentage |Percent |Average |Percentage of uncached disk IOPS consumed by the VM |VMName | |VmAvailabilityMetric |Yes |VM Availability Metric (Preview) |Count |Average |Measure of Availability of Virtual machines over time. |VMName | - ## Microsoft.Compute/virtualMachineScaleSets/virtualMachines <!-- Data source : arm-->
This latest update adds a new column and reorders the metrics to be alphabetical
|VM Uncached Bandwidth Consumed Percentage |Yes |VM Uncached Bandwidth Consumed Percentage |Percent |Average |Percentage of uncached disk bandwidth consumed by the VM |No Dimensions | |VM Uncached IOPS Consumed Percentage |Yes |VM Uncached IOPS Consumed Percentage |Percent |Average |Percentage of uncached disk IOPS consumed by the VM |No Dimensions | - ## Microsoft.ConnectedCache/CacheNodes <!-- Data source : naam-->
This latest update adds a new column and reorders the metrics to be alphabetical
|misses |Yes |Misses |Count |Count |Count of misses |cachenodeid | |missesbps |Yes |Miss Mbps |BitsPerSecond |Average |Miss Throughput |cachenodeid | - ## Microsoft.ConnectedCache/ispCustomers <!-- Data source : naam-->
This latest update adds a new column and reorders the metrics to be alphabetical
|missesbps |Yes |Miss Mbps |BitsPerSecond |Average |Miss Throughput |cachenodeid | |outboundbps |Yes |Outbound |BitsPerSecond |Average |Outbound Throughput |cachenodeid | - ## Microsoft.ConnectedVehicle/platformAccounts <!-- Data source : naam-->
This latest update adds a new column and reorders the metrics to be alphabetical
|StateStoreWriteRequestLatency |Yes |State store write execution time |Milliseconds |Average |State store write request execution time average in milliseconds. |ExtensionName, IsSuccessful, FailureCategory | |StateStoreWriteRequests |Yes |State store write requests |Count |Total |Number of write requests to state store |ExtensionName, IsSuccessful, FailureCategory | - ## Microsoft.ContainerInstance/containerGroups <!-- Data source : arm-->
This latest update adds a new column and reorders the metrics to be alphabetical
|NetworkBytesReceivedPerSecond |Yes |Network Bytes Received Per Second |Bytes |Average |The network bytes received per second. |No Dimensions | |NetworkBytesTransmittedPerSecond |Yes |Network Bytes Transmitted Per Second |Bytes |Average |The network bytes transmitted per second. |No Dimensions | - ## Microsoft.ContainerRegistry/registries <!-- Data source : naam-->
This latest update adds a new column and reorders the metrics to be alphabetical
|TotalPullCount |Yes |Total Pull Count |Count |Total |Number of image pulls in total |No Dimensions | |TotalPushCount |Yes |Total Push Count |Count |Total |Number of image pushes in total |No Dimensions | - ## Microsoft.ContainerService/managedClusters <!-- Data source : naam-->
This latest update adds a new column and reorders the metrics to be alphabetical
|node_network_in_bytes |Yes |Network In Bytes |Bytes |Average |Network received bytes |node, nodepool | |node_network_out_bytes |Yes |Network Out Bytes |Bytes |Average |Network transmitted bytes |node, nodepool | - ## Microsoft.CustomProviders/resourceproviders <!-- Data source : arm-->
This latest update adds a new column and reorders the metrics to be alphabetical
|FailedRequests |Yes |Failed Requests |Count |Total |Gets the available logs for Custom Resource Providers |HttpMethod, CallPath, StatusCode | |SuccessfullRequests |Yes |Successful Requests |Count |Total |Successful requests made by the custom provider |HttpMethod, CallPath, StatusCode | - ## Microsoft.Dashboard/grafana <!-- Data source : naam-->
This latest update adds a new column and reorders the metrics to be alphabetical
|||||||| |HttpRequestCount |No |HttpRequestCount |Count |Count |Number of HTTP requests to Azure Managed Grafana server |No Dimensions | - ## Microsoft.DataBoxEdge/dataBoxEdgeDevices <!-- Data source : arm-->
This latest update adds a new column and reorders the metrics to be alphabetical
|NICWriteThroughput |Yes |Write Throughput (Network) |BytesPerSecond |Average |The write throughput of the network interface on the device in the reporting period for all volumes in the gateway. |InstanceName | |TotalCapacity |Yes |Total Capacity |Bytes |Average |The total capacity of the device in bytes during the reporting period. |No Dimensions | - ## Microsoft.DataCollaboration/workspaces <!-- Data source : arm-->
This latest update adds a new column and reorders the metrics to be alphabetical
|ProposalCount |Yes |Created Proposals |Count |Maximum |Number of created proposals |ProposalName | |ScriptCount |Yes |Created Scripts |Count |Maximum |Number of created scripts |ScriptName | - ## Microsoft.DataFactory/datafactories <!-- Data source : arm-->
This latest update adds a new column and reorders the metrics to be alphabetical
|FailedRuns |Yes |Failed Runs |Count |Total |Failed Runs |pipelineName, activityName | |SuccessfulRuns |Yes |Successful Runs |Count |Total |Successful Runs |pipelineName, activityName | - ## Microsoft.DataFactory/factories <!-- Data source : arm-->
This latest update adds a new column and reorders the metrics to be alphabetical
|TriggerFailedRuns |Yes |Failed trigger runs metrics |Count |Total |Failed trigger runs metrics |Name, FailureType | |TriggerSucceededRuns |Yes |Succeeded trigger runs metrics |Count |Total |Succeeded trigger runs metrics |Name, FailureType | - ## Microsoft.DataLakeAnalytics/accounts <!-- Data source : naam-->
This latest update adds a new column and reorders the metrics to be alphabetical
|JobEndedSuccess |Yes |Successful Jobs |Count |Total |Count of successful jobs. |No Dimensions | |JobStage |Yes |Jobs in Stage |Count |Total |Number of jobs in each stage. |No Dimensions | - ## Microsoft.DataLakeStore/accounts <!-- Data source : arm-->
This latest update adds a new column and reorders the metrics to be alphabetical
|TotalStorage |Yes |Total Storage |Bytes |Maximum |Total amount of data stored in the account. |No Dimensions | |WriteRequests |Yes |Write Requests |Count |Total |Count of data write requests to the account. |No Dimensions | - ## Microsoft.DataProtection/BackupVaults <!-- Data source : naam-->
This latest update adds a new column and reorders the metrics to be alphabetical
|BackupHealthEvent |Yes |Backup Health Events (preview) |Count |Count |The count of health events pertaining to backup job health |dataSourceURL, backupInstanceUrl, dataSourceType, healthStatus, backupInstanceName | |RestoreHealthEvent |Yes |Restore Health Events (preview) |Count |Count |The count of health events pertaining to restore job health |dataSourceURL, backupInstanceUrl, dataSourceType, healthStatus, backupInstanceName | - ## Microsoft.DataShare/accounts <!-- Data source : arm-->
This latest update adds a new column and reorders the metrics to be alphabetical
|SucceededShareSubscriptionSynchronizations |Yes |Received Share Succeeded Snapshots |Count |Count |Number of received share succeeded snapshots in the account |No Dimensions | |SucceededShareSynchronizations |Yes |Sent Share Succeeded Snapshots |Count |Count |Number of sent share succeeded snapshots in the account |No Dimensions | - ## Microsoft.DBforMariaDB/servers <!-- Data source : arm-->
This latest update adds a new column and reorders the metrics to be alphabetical
|storage_percent |Yes |Storage percent |Percent |Average |Storage percent |No Dimensions | |storage_used |Yes |Storage used |Bytes |Average |Storage used |No Dimensions | - ## Microsoft.DBforMySQL/flexibleServers <!-- Data source : naam-->
This latest update adds a new column and reorders the metrics to be alphabetical
|storage_used |Yes |Storage Used |Bytes |Maximum |Storage Used |No Dimensions | |total_connections |Yes |Total Connections |Count |Total |Total Connections |No Dimensions | - ## Microsoft.DBforMySQL/servers <!-- Data source : arm-->
This latest update adds a new column and reorders the metrics to be alphabetical
|storage_percent |Yes |Storage percent |Percent |Average |Storage percent |No Dimensions | |storage_used |Yes |Storage used |Bytes |Average |Storage used |No Dimensions | - ## Microsoft.DBforPostgreSQL/flexibleServers <!-- Data source : naam-->
This latest update adds a new column and reorders the metrics to be alphabetical
|xact_rollback |Yes |Transactions Rolled Back (Preview) |Count |Total |Number of transactions in this database that have been rolled back |DatabaseName | |xact_total |Yes |Total Transactions (Preview) |Count |Total |Number of total transactions executed in this database |DatabaseName | - ## Microsoft.DBForPostgreSQL/serverGroupsv2 <!-- Data source : naam-->
This latest update adds a new column and reorders the metrics to be alphabetical
|vm_uncached_bandwidth_percent |Yes |VM Uncached Bandwidth Consumed Percentage |Percent |Average |Percentage of uncached disk bandwidth consumed by the VM |ServerName | |vm_uncached_iops_percent |Yes |VM Uncached IOPS Consumed Percentage |Percent |Average |Percentage of uncached disk IOPS consumed by the VM |ServerName | - ## Microsoft.DBforPostgreSQL/servers <!-- Data source : arm-->
This latest update adds a new column and reorders the metrics to be alphabetical
|storage_percent |Yes |Storage percent |Percent |Average |Storage percent |No Dimensions | |storage_used |Yes |Storage used |Bytes |Average |Storage used |No Dimensions | - ## Microsoft.DBforPostgreSQL/serversv2 <!-- Data source : arm-->
This latest update adds a new column and reorders the metrics to be alphabetical
|storage_percent |Yes |Storage percent |Percent |Average |Storage percent |No Dimensions | |storage_used |Yes |Storage used |Bytes |Average |Storage used |No Dimensions | - ## Microsoft.Devices/IotHubs <!-- Data source : naam-->
This latest update adds a new column and reorders the metrics to be alphabetical
|twinQueries.resultSize |Yes |Twin queries result size |Bytes |Average |The average, min, and max of the result size of all successful twin queries. |No Dimensions | |twinQueries.success |Yes |Successful twin queries |Count |Total |The count of all successful twin queries. |No Dimensions | - ## Microsoft.Devices/provisioningServices <!-- Data source : arm-->
This latest update adds a new column and reorders the metrics to be alphabetical
|DeviceAssignments |Yes |Devices assigned |Count |Total |Number of devices assigned to an IoT hub |ProvisioningServiceName, IotHubName | |RegistrationAttempts |Yes |Registration attempts |Count |Total |Number of device registrations attempted |ProvisioningServiceName, IotHubName, Status | - ## Microsoft.DigitalTwins/digitalTwinsInstances <!-- Data source : arm-->
This latest update adds a new column and reorders the metrics to be alphabetical
|RoutingLatency |Yes |Routing Latency |Milliseconds |Average |Time elapsed between an event getting routed from Azure Digital Twins to when it is posted to the endpoint Azure service such as Event Hub, Service Bus or Event Grid. |EndpointType, Result | |TwinCount |Yes |Twin Count |Count |Total |Total number of twins in the Azure Digital Twins instance. Use this metric to determine if you are approaching the service limit for max number of twins allowed per instance. |No Dimensions | - ## Microsoft.DocumentDB/cassandraClusters <!-- Data source : naam-->
This latest update adds a new column and reorders the metrics to be alphabetical
|ethtool_tx_packets |No |network transmitted packets |Count |Total |network transmitted packets |ClusterResourceName, DataCenterResourceName, Address, Kind | |percent_mem |Yes |memory utilization |Percent |Average |Memory utilization rate |ClusterResourceName, DataCenterResourceName, Address | - ## Microsoft.DocumentDB/DatabaseAccounts <!-- Data source : naam-->
This latest update adds a new column and reorders the metrics to be alphabetical
|UpdateAccountReplicationSettings |Yes |Account Replication Settings Updated |Count |Count |Account Replication Settings Updated |No Dimensions | |UpdateDiagnosticsSettings |No |Account Diagnostic Settings Updated |Count |Count |Account Diagnostic Settings Updated |DiagnosticSettingsName, ResourceGroupName | - ## Microsoft.DocumentDB/mongoClusters <!-- Data source : naam-->
This latest update adds a new column and reorders the metrics to be alphabetical
|StoragePercent |No |Storage percent |Percent |Average |Percent of available storage used on node |ServerName | |StorageUsed |No |Storage used |Bytes |Average |Quantity of available storage used on node |ServerName | - ## microsoft.edgezones/edgezones <!-- Data source : naam-->
This latest update adds a new column and reorders the metrics to be alphabetical
|TotalVcoreCapacity |Yes |Total VCore Capacity |Count |Average |The total capacity of the General-Purpose Compute vcore in Edge Zone Enterprise site. |No Dimensions | |VcoresUsage |Yes |Vcore Usage Percentage |Percent |Average |The utilization of the General-Purpose Compute vcores in Edge Zone Enterprise site |No Dimensions | - ## Microsoft.EventGrid/domains <!-- Data source : naam-->
This latest update adds a new column and reorders the metrics to be alphabetical
|PublishSuccessCount |Yes |Published Events |Count |Total |Total events published to this topic |Topic | |PublishSuccessLatencyInMs |Yes |Publish Success Latency |MilliSeconds |Total |Publish success latency in milliseconds |No Dimensions | - ## Microsoft.EventGrid/eventSubscriptions <!-- Data source : arm-->
This latest update adds a new column and reorders the metrics to be alphabetical
|DroppedEventCount |Yes |Dropped Events |Count |Total |Total dropped events matching to this event subscription |DropReason | |MatchedEventCount |Yes |Matched Events |Count |Total |Total events matched to this event subscription |No Dimensions | - ## Microsoft.EventGrid/extensionTopics <!-- Data source : arm-->
This latest update adds a new column and reorders the metrics to be alphabetical
|PublishSuccessLatencyInMs |Yes |Publish Success Latency |Milliseconds |Total |Publish success latency in milliseconds |No Dimensions | |UnmatchedEventCount |Yes |Unmatched Events |Count |Total |Total events not matching any of the event subscriptions for this topic |No Dimensions | - ## Microsoft.EventGrid/partnerNamespaces <!-- Data source : naam-->
This latest update adds a new column and reorders the metrics to be alphabetical
|PublishSuccessLatencyInMs |Yes |Publish Success Latency |MilliSeconds |Total |Publish success latency in milliseconds |No Dimensions | |UnmatchedEventCount |Yes |Unmatched Events |Count |Total |Total events not matching any of the partner topics |No Dimensions | - ## Microsoft.EventGrid/partnerTopics <!-- Data source : naam-->
This latest update adds a new column and reorders the metrics to be alphabetical
|PublishSuccessCount |Yes |Published Events |Count |Total |Total events published to this partner topic |No Dimensions | |UnmatchedEventCount |Yes |Unmatched Events |Count |Total |Total events not matching any of the event subscriptions for this partner topic |No Dimensions | - ## Microsoft.EventGrid/systemTopics <!-- Data source : arm-->
This latest update adds a new column and reorders the metrics to be alphabetical
|PublishSuccessLatencyInMs |Yes |Publish Success Latency |Milliseconds |Total |Publish success latency in milliseconds |No Dimensions | |UnmatchedEventCount |Yes |Unmatched Events |Count |Total |Total events not matching any of the event subscriptions for this topic |No Dimensions | - ## Microsoft.EventGrid/topics <!-- Data source : naam-->
This latest update adds a new column and reorders the metrics to be alphabetical
|PublishSuccessLatencyInMs |Yes |Publish Success Latency |MilliSeconds |Total |Publish success latency in milliseconds |No Dimensions | |UnmatchedEventCount |Yes |Unmatched Events |Count |Total |Total events not matching any of the event subscriptions for this topic |No Dimensions | - ## Microsoft.EventHub/clusters <!-- Data source : naam-->
This latest update adds a new column and reorders the metrics to be alphabetical
|ThrottledRequests |No |Throttled Requests. |Count |Total |Throttled Requests for Microsoft.EventHub. |OperationResult | |UserErrors |No |User Errors. |Count |Total |User Errors for Microsoft.EventHub. |OperationResult | - ## Microsoft.EventHub/Namespaces <!-- Data source : naam-->
This latest update adds a new column and reorders the metrics to be alphabetical
|ThrottledRequests |No |Throttled Requests. |Count |Total |Throttled Requests for Microsoft.EventHub. |EntityName, OperationResult | |UserErrors |No |User Errors. |Count |Total |User Errors for Microsoft.EventHub. |EntityName, OperationResult | - ## Microsoft.HDInsight/clusters <!-- Data source : arm-->
This latest update adds a new column and reorders the metrics to be alphabetical
|PendingCPU |Yes |Pending CPU |Count |Maximum |Pending CPU Requests in YARN |No Dimensions | |PendingMemory |Yes |Pending Memory |Count |Maximum |Pending Memory Requests in YARN |No Dimensions | - ## Microsoft.HealthcareApis/services <!-- Data source : arm-->
This latest update adds a new column and reorders the metrics to be alphabetical
|TotalLatency |Yes |Total Latency |Milliseconds |Average |The response latency of the service. |Protocol | |TotalRequests |Yes |Total Requests |Count |Sum |The total number of requests received by the service. |Protocol | - ## Microsoft.HealthcareApis/workspaces/analyticsconnectors <!-- Data source : arm-->
This latest update adds a new column and reorders the metrics to be alphabetical
|AnalyticsConnectorSuccessfulResourceCount |Yes |Analytics Connector Successful Resource Count |Count |Sum |The amount of data successfully processed by the analytics connector |No Dimensions | |AnalyticsConnectorTotalError |Yes |Analytics Connector Total Error Count |Count |Sum |The total number of errors logged by the analytics connector |ErrorType, Operation | - ## Microsoft.HealthcareApis/workspaces/fhirservices <!-- Data source : arm-->
This latest update adds a new column and reorders the metrics to be alphabetical
|TotalLatency |Yes |Total Latency |Milliseconds |Average |The response latency of the service. |Protocol | |TotalRequests |Yes |Total Requests |Count |Sum |The total number of requests received by the service. |Protocol | - ## Microsoft.HealthcareApis/workspaces/iotconnectors <!-- Data source : arm-->
This latest update adds a new column and reorders the metrics to be alphabetical
|NormalizedEvent |Yes |Number of Normalized Messages |Count |Sum |The total number of mapped normalized values outputted from the normalization stage of the MedTech service |Operation, ResourceName | |TotalErrors |Yes |Total Error Count |Count |Sum |The total number of errors logged by the MedTech service |Name, Operation, ErrorType, ErrorSeverity, ResourceName | - ## Microsoft.HybridContainerService/provisionedClusters <!-- Data source : naam-->
This latest update adds a new column and reorders the metrics to be alphabetical
|||||||| |capacity_cpu_cores |Yes |Total number of cpu cores in a provisioned cluster |Count |Average |Total number of cpu cores in a provisioned cluster |No Dimensions | - ## microsoft.hybridnetwork/networkfunctions <!-- Data source : naam-->
This latest update adds a new column and reorders the metrics to be alphabetical
||||||||
|HyperVVirtualProcessorUtilization |Yes |Average CPU Utilization |Percent |Average |Total average percentage of virtual CPU utilization at a one-minute interval. The total number of virtual CPUs is based on the user-configured value in the SKU definition. A further filter can be applied based on the RoleName defined in the SKU. |InstanceName |

## microsoft.hybridnetwork/virtualnetworkfunctions
<!-- Data source : naam-->
This latest update adds a new column and reorders the metrics to be alphabetical
||||||||
|HyperVVirtualProcessorUtilization |Yes |Average CPU Utilization |Percent |Average |Total average percentage of virtual CPU utilization at a one-minute interval. The total number of virtual CPUs is based on the user-configured value in the SKU definition. A further filter can be applied based on the RoleName defined in the SKU. |InstanceName |

## microsoft.insights/autoscalesettings
<!-- Data source : naam-->
This latest update adds a new column and reorders the metrics to be alphabetical
|ObservedMetricValue |Yes |Observed Metric Value |Count |Average |The value computed by autoscale when executed |MetricTriggerSource | |ScaleActionsInitiated |Yes |Scale Actions Initiated |Count |Total |The direction of the scale operation. |ScaleDirection | - ## microsoft.insights/components <!-- Data source : naam-->
This latest update adds a new column and reorders the metrics to be alphabetical
|requests/rate |No |Server request rate |CountPerSecond |Average |Rate of server requests per second |request/performanceBucket, request/resultCode, operation/synthetic, cloud/roleInstance, request/success, cloud/roleName | |traces/count |Yes |Traces |Count |Count |Trace document count |trace/severityLevel, operation/synthetic, cloud/roleName, cloud/roleInstance | -
-## Microsoft.Insights/datacollectionrules
-<!-- Data source : naam-->
-
-|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
-||||||||
-|ApiCallReceived_Count |Yes |Request Received |Count |Count |Number of requests received via Log Ingestion API or from the agent |InputStreamId, ResponseCode |
-|RowsDropped_Count |Yes |Rows Dropped |Count |Total |Number of rows dropped while running transformation. |InputStreamId |
-|RowsReceived_Count |Yes |Rows Received |Count |Total |Total number of rows received for transformation. |InputStreamId |
-|TransformationErrors_Count |Yes |Transformation Errors |Count |Count |The number of times when execution of KQL transformation resulted in an error, e.g. KQL syntax error or going over a service limit. |InputStreamId, ErrorType |
-|TransformationRuntime_DurationMs |Yes |Transformation Runtime Duration |MilliSeconds |Average |Total time taken to transform given set of records, measured in milliseconds. |InputStreamId |
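The data collection rule metrics above are standard platform metrics, so they can be pulled with the same tooling as any other entry in these tables. A minimal sketch, assuming the `azure-monitor-query` Python SDK and a placeholder data collection rule resource ID (neither appears in the source tables):

```python
# Minimal sketch (assumed SDK: azure-monitor-query; placeholder resource ID):
# query the RowsReceived_Count metric of a data collection rule, using the
# Count unit / Total aggregation listed in the table above.
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import MetricsQueryClient, MetricAggregationType

# Placeholder: replace with a real data collection rule resource ID.
DCR_ID = (
    "/subscriptions/<subscription-id>/resourceGroups/<resource-group>"
    "/providers/Microsoft.Insights/dataCollectionRules/<rule-name>"
)

client = MetricsQueryClient(DefaultAzureCredential())
result = client.query_resource(
    DCR_ID,
    metric_names=["RowsReceived_Count"],
    timespan=timedelta(hours=1),
    granularity=timedelta(minutes=5),
    aggregations=[MetricAggregationType.TOTAL],
)

for metric in result.metrics:
    for series in metric.timeseries:
        for point in series.data:
            print(point.timestamp, point.total)
```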
## Microsoft.IoTCentral/IoTApps
<!-- Data source : arm-->
This latest update adds a new column and reorders the metrics to be alphabetical
|deviceDataUsage |Yes |Total Device Data Usage |Bytes |Total |Bytes transferred to and from any devices connected to IoT Central application |No Dimensions | |provisionedDeviceCount |No |Total Provisioned Devices |Count |Average |Number of devices provisioned in IoT Central application |No Dimensions | - ## microsoft.keyvault/managedhsms <!-- Data source : naam-->
This latest update adds a new column and reorders the metrics to be alphabetical
|ServiceApiHit |Yes |Total Service Api Hits |Count |Count |Number of total service api hits |ActivityType, ActivityName | |ServiceApiLatency |No |Overall Service Api Latency |Milliseconds |Average |Overall latency of service api requests |ActivityType, ActivityName, StatusCode, StatusCodeClass | - ## Microsoft.KeyVault/vaults <!-- Data source : naam-->
This latest update adds a new column and reorders the metrics to be alphabetical
|ServiceApiLatency |Yes |Overall Service Api Latency |MilliSeconds |Average |Overall latency of service api requests |ActivityType, ActivityName, StatusCode, StatusCodeClass | |ServiceApiResult |Yes |Total Service Api Results |Count |Count |Number of total service api results |ActivityType, ActivityName, StatusCode, StatusCodeClass | - ## microsoft.kubernetes/connectedClusters <!-- Data source : naam-->
This latest update adds a new column and reorders the metrics to be alphabetical
|||||||| |capacity_cpu_cores |Yes |Total number of cpu cores in a connected cluster |Count |Total |Total number of cpu cores in a connected cluster |No Dimensions | - ## microsoft.kubernetesconfiguration/extensions <!-- Data source : naam-->
This latest update adds a new column and reorders the metrics to be alphabetical
|XnHandoverFailure |Yes |Xn Handover Failures |Count |Total |Handover failure rate (per minute) |3gppGen, PccpId, SiteId | |XnHandoverSuccess |Yes |Xn Handover Successes |Count |Total |Handover success rate (per minute) |3gppGen, PccpId, SiteId | - ## Microsoft.Kusto/clusters <!-- Data source : naam-->
This latest update adds a new column and reorders the metrics to be alphabetical
|TotalNumberOfThrottledQueries |Yes |Total number of throttled queries |Count |Maximum |Total number of throttled queries |No Dimensions | |WeakConsistencyLatency |Yes |Weak consistency latency |Seconds |Average |The max latency between the previous metadata sync and the next one (in DB/node scope) |Database, RoleInstance | - ## Microsoft.Logic/IntegrationServiceEnvironments <!-- Data source : naam-->
This latest update adds a new column and reorders the metrics to be alphabetical
|TriggersSucceeded |Yes |Triggers Succeeded |Count |Total |Number of workflow triggers succeeded. |No Dimensions | |TriggerSuccessLatency |Yes |Trigger Success Latency |Seconds |Average |Latency of succeeded workflow triggers. |No Dimensions | - ## Microsoft.Logic/Workflows <!-- Data source : naam-->
This latest update adds a new column and reorders the metrics to be alphabetical
|TriggerSuccessLatency |Yes |Trigger Success Latency |Seconds |Average |Latency of succeeded workflow triggers. |No Dimensions | |TriggerThrottledEvents |Yes |Trigger Throttled Events |Count |Total |Number of workflow trigger throttled events. |No Dimensions | - ## Microsoft.MachineLearningServices/workspaces <!-- Data source : arm-->
This latest update adds a new column and reorders the metrics to be alphabetical
|Unusable Nodes |Yes |Unusable Nodes |Count |Average |Number of unusable nodes. Unusable nodes are not functional due to some unresolvable issue. Azure will recycle these nodes. |Scenario, ClusterName | |Warnings |Yes |Warnings |Count |Total |Number of run warnings in this workspace. Count is updated whenever a run encounters a warning. |Scenario | - ## Microsoft.MachineLearningServices/workspaces/onlineEndpoints <!-- Data source : naam-->
This latest update adds a new column and reorders the metrics to be alphabetical
|RequestLatency_P99 |Yes |Request Latency P99 |Milliseconds |Average |The average P99 request latency aggregated by all request latency values collected over the selected time period |deployment | |RequestsPerMinute |No |Requests Per Minute |Count |Average |The number of requests sent to online endpoint within a minute |deployment, statusCode, statusCodeClass, modelStatusCode | - ## Microsoft.MachineLearningServices/workspaces/onlineEndpoints/deployments <!-- Data source : naam-->
This latest update adds a new column and reorders the metrics to be alphabetical
|GpuMemoryUtilizationPercentage |Yes |GPU Memory Utilization Percentage |Percent |Average |Percentage of GPU memory utilization on an instance. Utilization is reported at one minute intervals. |instanceId | |GpuUtilizationPercentage |Yes |GPU Utilization Percentage |Percent |Average |Percentage of GPU utilization on an instance. Utilization is reported at one minute intervals. |instanceId | - ## Microsoft.ManagedNetworkFabric/networkDevices <!-- Data source : naam-->
This latest update adds a new column and reorders the metrics to be alphabetical
|PowerSupplyOutputPower |Yes |Power Supply Output Power |Unspecified |Average |Output power supplied by the power supply (watts) |FabricId, RegionName, ComponentName | |PowerSupplyOutputVoltage |Yes |Power Supply Output Voltage |Unspecified |Average |Output voltage supplied by the power supply (volts). |FabricId, RegionName, ComponentName | - ## Microsoft.Maps/accounts <!-- Data source : arm-->
This latest update adds a new column and reorders the metrics to be alphabetical
|CreatorUsage |No |Creator Usage |Bytes |Average |Azure Maps Creator usage statistics |ServiceName | |Usage |No |Usage |Count |Count |Count of API calls |ApiCategory, ApiName, ResultType, ResponseCode | - ## Microsoft.Media/mediaservices <!-- Data source : naam-->
This latest update adds a new column and reorders the metrics to be alphabetical
|StreamingPolicyQuotaUsedPercentage |Yes |Streaming Policy quota used percentage |Percent |Average |Streaming Policy used percentage in current media service account |No Dimensions | |TransformQuota |Yes |Transform quota |Count |Average |The Transform quota for the current media service account. |No Dimensions | - ## Microsoft.Media/mediaservices/liveEvents <!-- Data source : naam-->
This latest update adds a new column and reorders the metrics to be alphabetical
|IngestLastTimestamp |Yes |Live Event ingest last timestamp |Milliseconds |Maximum |Last timestamp ingested for a live event. |TrackName | |LiveOutputLastTimestamp |Yes |Last output timestamp |Milliseconds |Maximum |Timestamp of the last fragment uploaded to storage for a live event output. |TrackName | - ## Microsoft.Media/mediaservices/streamingEndpoints <!-- Data source : naam-->
This latest update adds a new column and reorders the metrics to be alphabetical
|Requests |Yes |Requests |Count |Total |Requests to a Streaming Endpoint. |OutputFormat, HttpStatusCode, ErrorCode | |SuccessE2ELatency |Yes |Success end to end Latency |MilliSeconds |Average |The average latency for successful requests in milliseconds. |OutputFormat | - ## Microsoft.Media/videoanalyzers <!-- Data source : naam-->
This latest update adds a new column and reorders the metrics to be alphabetical
|IngressBytes |Yes |Ingress Bytes |Bytes |Total |The number of bytes ingressed by the pipeline node. |PipelineKind, PipelineTopology, Pipeline, Node | |Pipelines |Yes |Pipelines |Count |Total |The number of pipelines of each kind and state |PipelineKind, PipelineTopology, PipelineState | - ## Microsoft.MixedReality/remoteRenderingAccounts <!-- Data source : arm-->
This latest update adds a new column and reorders the metrics to be alphabetical
|ActiveRenderingSessions |Yes |Active Rendering Sessions |Count |Average |Total number of active rendering sessions |SessionType, SDKVersion | |AssetsConverted |Yes |Assets Converted |Count |Total |Total number of assets converted |SDKVersion | - ## Microsoft.MixedReality/spatialAnchorsAccounts <!-- Data source : arm-->
This latest update adds a new column and reorders the metrics to be alphabetical
|PosesFound |Yes |Poses Found |Count |Total |Number of Poses returned |DeviceFamily, SDKVersion | |TotalDailyAnchors |Yes |Total Daily Anchors |Count |Average |Total number of Anchors - Daily |DeviceFamily, SDKVersion | - ## Microsoft.Monitor/accounts <!-- Data source : naam-->
This latest update adds a new column and reorders the metrics to be alphabetical
|EventsPerMinuteIngestedPercentUtilization |No |Events Per Minute Ingested % Utilization |Percent |Average |The percentage of the current metric ingestion rate limit being utilized |StampColor | |SimpleSamplesStored |No |Simple Data Samples Stored |Count |Maximum |The total number of samples stored for simple sampling types (like sum, count). For Prometheus this is equivalent to the number of samples scraped and ingested. |StampColor | - ## Microsoft.NetApp/netAppAccounts/capacityPools <!-- Data source : arm-->
This latest update adds a new column and reorders the metrics to be alphabetical
|VolumePoolTotalLogicalSize |Yes |Pool Consumed Size |Bytes |Average |Sum of the logical size of all the volumes belonging to the pool |No Dimensions | |VolumePoolTotalSnapshotSize |Yes |Total Snapshot size for the pool |Bytes |Average |Sum of snapshot size of all volumes in this pool |No Dimensions | - ## Microsoft.NetApp/netAppAccounts/capacityPools/volumes <!-- Data source : arm-->
This latest update adds a new column and reorders the metrics to be alphabetical
|XregionReplicationRelationshipTransferring |Yes |Is volume replication transferring |Count |Average |Whether the status of the Volume Replication is 'transferring'. |No Dimensions | |XregionReplicationTotalTransferBytes |Yes |Volume replication total transfer |Bytes |Average |Cumulative bytes transferred for the relationship. |No Dimensions | - ## Microsoft.Network/applicationgateways <!-- Data source : naam-->
This latest update adds a new column and reorders the metrics to be alphabetical
|BackendFirstByteResponseTime |No |Backend First Byte Response Time |MilliSeconds |Average |Time interval between start of establishing a connection to backend server and receiving the first byte of the response header, approximating processing time of backend server |Listener, BackendServer, BackendPool, BackendHttpSetting |
|BackendLastByteResponseTime |No |Backend Last Byte Response Time |MilliSeconds |Average |Time interval between start of establishing a connection to backend server and receiving the last byte of the response body |Listener, BackendServer, BackendPool, BackendHttpSetting |
|BackendResponseStatus |Yes |Backend Response Status |Count |Total |The number of HTTP response codes generated by the backend members. This does not include any response codes generated by the Application Gateway. |BackendServer, BackendPool, BackendHttpSetting, HttpStatusGroup |
-|BackendTlsNegotiationError |Yes |Backend TLS Connection Errors |Count |Total |TLS Connection Errors for Application Gateway Backend |BackendHttpSetting, BackendPool, ErrorType |
|BlockedCount |Yes |Web Application Firewall Blocked Requests Rule Distribution |Count |Total |Web Application Firewall blocked requests rule distribution |RuleGroup, RuleId |
|BytesReceived |Yes |Bytes Received |Bytes |Total |The total number of bytes received by the Application Gateway from the clients |Listener |
|BytesSent |Yes |Bytes Sent |Bytes |Total |The total number of bytes sent by the Application Gateway to the clients |Listener |
|CapacityUnits |No |Current Capacity Units |Count |Average |Capacity Units consumed |No Dimensions |
|ClientRtt |No |Client RTT |MilliSeconds |Average |Average round trip time between clients and Application Gateway. This metric indicates how long it takes to establish connections and return acknowledgements |Listener |
|ComputeUnits |No |Current Compute Units |Count |Average |Compute Units consumed |No Dimensions |
-|ConnectionLifetime |No |Connection Lifetime |MilliSeconds |Average |Average time duration from the start of a new connection to its termination |Listener |
|CpuUtilization |No |CPU Utilization |Percent |Average |Current CPU utilization of the Application Gateway |No Dimensions |
|CurrentConnections |Yes |Current Connections |Count |Total |Count of current connections established with Application Gateway |No Dimensions |
|EstimatedBilledCapacityUnits |No |Estimated Billed Capacity Units |Count |Average |Estimated capacity units that will be charged |No Dimensions |
|FailedRequests |Yes |Failed Requests |Count |Total |Count of failed requests that Application Gateway has served |BackendSettingsPool |
|FixedBillableCapacityUnits |No |Fixed Billable Capacity Units |Count |Average |Minimum capacity units that will be charged |No Dimensions |
-|GatewayUtilization |No |Gateway Utilization |Percent |Average |Denotes the current utilization status of the Application Gateway resource. The metric is an aggregate report of your gateway's running instances. As a recommendation, one should consider scaling out when the value exceeds 70%. However, the threshold could differ for different workloads. Hence, choose a limit that suits your requirements. |No Dimensions |
|HealthyHostCount |Yes |Healthy Host Count |Count |Average |Number of healthy backend hosts |BackendSettingsPool |
|MatchedCount |Yes |Web Application Firewall Total Rule Distribution |Count |Total |Web Application Firewall Total Rule Distribution for the incoming traffic |RuleGroup, RuleId |
|NewConnectionsPerSecond |No |New connections per second |CountPerSecond |Average |New connections per second established with Application Gateway |No Dimensions |
-|RejectedConnections |Yes |Rejected Connections |Count |Total |Count of rejected connections for Application Gateway Frontend |No Dimensions |
|ResponseStatus |Yes |Response Status |Count |Total |Http response status returned by Application Gateway |HttpStatusGroup |
|Throughput |No |Throughput |BytesPerSecond |Average |Number of bytes per second the Application Gateway has served |No Dimensions |
|TlsProtocol |Yes |Client TLS Protocol |Count |Total |The number of TLS and non-TLS requests initiated by the client that established connection with the Application Gateway. To view TLS protocol distribution, filter by the dimension TLS Protocol. |Listener, TlsProtocol |
|TotalRequests |Yes |Total Requests |Count |Total |Count of successful requests that Application Gateway has served |BackendSettingsPool |
|UnhealthyHostCount |Yes |Unhealthy Host Count |Count |Average |Number of unhealthy backend hosts |BackendSettingsPool |

## Microsoft.Network/azureFirewalls
<!-- Data source : naam-->
This latest update adds a new column and reorders the metrics to be alphabetical
|SNATPortUtilization |Yes |SNAT port utilization |Percent |Average |Percentage of outbound SNAT ports currently in use |Protocol | |Throughput |No |Throughput |BitsPerSecond |Average |Throughput processed by this firewall |No Dimensions | - ## microsoft.network/bastionHosts <!-- Data source : naam-->
This latest update adds a new column and reorders the metrics to be alphabetical
|usage_user |No |CPU Usage |Count |Average |CPU Usage stats. |cpu, host | |used |Yes |Memory Usage |Count |Average |Memory Usage stats. |host | - ## Microsoft.Network/connections <!-- Data source : naam-->
This latest update adds a new column and reorders the metrics to be alphabetical
|BitsInPerSecond |Yes |BitsInPerSecond |BitsPerSecond |Average |Bits ingressing Azure per second |No Dimensions | |BitsOutPerSecond |Yes |BitsOutPerSecond |BitsPerSecond |Average |Bits egressing Azure per second |No Dimensions | - ## Microsoft.Network/dnsForwardingRulesets <!-- Data source : naam-->
This latest update adds a new column and reorders the metrics to be alphabetical
|ForwardingRuleCount |No |Forwarding Rule Count |Count |Maximum |This metric indicates the number of forwarding rules present in each DNS forwarding ruleset. |No Dimensions | |VirtualNetworkLinkCount |No |Virtual Network Link Count |Count |Maximum |This metric indicates the number of associated virtual network links to a DNS forwarding ruleset. |No Dimensions | - ## Microsoft.Network/dnsResolvers <!-- Data source : naam-->
This latest update adds a new column and reorders the metrics to be alphabetical
|OutboundEndpointCount |No |Outbound Endpoint Count |Count |Maximum |This metric indicates the number of outbound endpoints created for a DNS Resolver. |No Dimensions | |QPS |No |Queries Per Second |Count |Average |This metric indicates the queries per second for a DNS Resolver. (Can be aggregated per EndpointId) |EndpointId | - ## Microsoft.Network/dnszones <!-- Data source : arm-->
This latest update adds a new column and reorders the metrics to be alphabetical
|RecordSetCapacityUtilization |No |Record Set Capacity Utilization |Percent |Maximum |Percent of Record Set capacity utilized by a DNS zone |No Dimensions | |RecordSetCount |No |Record Set Count |Count |Maximum |Number of Record Sets in a DNS zone |No Dimensions | - ## Microsoft.Network/expressRouteCircuits <!-- Data source : naam-->
This latest update adds a new column and reorders the metrics to be alphabetical
|QosDropBitsInPerSecond |Yes |DroppedInBitsPerSecond |BitsPerSecond |Average |Ingress bits of data dropped per second |No Dimensions | |QosDropBitsOutPerSecond |Yes |DroppedOutBitsPerSecond |BitsPerSecond |Average |Egress bits of data dropped per second |No Dimensions | - ## Microsoft.Network/expressRouteCircuits/peerings <!-- Data source : arm-->
This latest update adds a new column and reorders the metrics to be alphabetical
|BitsInPerSecond |Yes |BitsInPerSecond |BitsPerSecond |Average |Bits ingressing Azure per second |No Dimensions | |BitsOutPerSecond |Yes |BitsOutPerSecond |BitsPerSecond |Average |Bits egressing Azure per second |No Dimensions | - ## microsoft.network/expressroutegateways <!-- Data source : naam-->
This latest update adds a new column and reorders the metrics to be alphabetical
||||||||
|ErGatewayConnectionBitsInPerSecond |No |Bits In Per Second |BitsPerSecond |Average |Bits per second ingressing Azure via ExpressRoute Gateway which can be further split for specific connections |ConnectionName |
|ErGatewayConnectionBitsOutPerSecond |No |Bits Out Per Second |BitsPerSecond |Average |Bits per second egressing Azure via ExpressRoute Gateway which can be further split for specific connections |ConnectionName |
-|ExpressRouteGatewayActiveFlows |Yes |Active Flows |Count |Maximum |Number of Active Flows on ExpressRoute Gateway |roleInstance |
+|ExpressRouteGatewayActiveFlows |No |Active Flows |Count |Average |Number of Active Flows on ExpressRoute Gateway |roleInstance |
|ExpressRouteGatewayBitsPerSecond |No |Bits Received Per second |BitsPerSecond |Average |Total Bits received on ExpressRoute Gateway per second |roleInstance |
|ExpressRouteGatewayCountOfRoutesAdvertisedToPeer |Yes |Count Of Routes Advertised to Peer |Count |Maximum |Count Of Routes Advertised To Peer by ExpressRoute Gateway |roleInstance |
|ExpressRouteGatewayCountOfRoutesLearnedFromPeer |Yes |Count Of Routes Learned from Peer |Count |Maximum |Count Of Routes Learned From Peer by ExpressRoute Gateway |roleInstance |
This latest update adds a new column and reorders the metrics to be alphabetical
|ExpressRouteGatewayNumberOfVmInVnet |No |Number of VMs in the Virtual Network |Count |Maximum |Number of VMs in the Virtual Network |No Dimensions | |ExpressRouteGatewayPacketsPerSecond |No |Packets received per second |CountPerSecond |Average |Total Packets received on ExpressRoute Gateway per second |roleInstance | - ## Microsoft.Network/expressRoutePorts <!-- Data source : naam-->
This latest update adds a new column and reorders the metrics to be alphabetical
|RxLightLevel |Yes |RxLightLevel |Count |Average |Rx Light level in dBm |Link, Lane | |TxLightLevel |Yes |TxLightLevel |Count |Average |Tx light level in dBm |Link, Lane | - ## Microsoft.Network/frontdoors <!-- Data source : naam-->
This latest update adds a new column and reorders the metrics to be alphabetical
|TotalLatency |Yes |Total Latency |MilliSeconds |Average |The time calculated from when the client request was received by the HTTP/S proxy until the client acknowledged the last response byte from the HTTP/S proxy |HttpStatus, HttpStatusGroup, ClientRegion, ClientCountry | |WebApplicationFirewallRequestCount |Yes |Web Application Firewall Request Count |Count |Total |The number of client requests processed by the Web Application Firewall |PolicyName, RuleName, Action | - ## Microsoft.Network/loadBalancers <!-- Data source : naam-->
This latest update adds a new column and reorders the metrics to be alphabetical
|UsedSnatPorts |No |Used SNAT Ports |Count |Average |Total number of SNAT ports used within time period |FrontendIPAddress, BackendIPAddress, ProtocolType, IsAwaitingRemoval | |VipAvailability |Yes |Data Path Availability |Count |Average |Average Load Balancer data path availability per time duration |FrontendIPAddress, FrontendPort | - ## Microsoft.Network/natGateways <!-- Data source : arm-->
This latest update adds a new column and reorders the metrics to be alphabetical
|SNATConnectionCount |No |SNAT Connection Count |Count |Total |Total concurrent active connections |Protocol, ConnectionState | |TotalConnectionCount |No |Total SNAT Connection Count |Count |Total |Total number of active SNAT connections |Protocol | - ## Microsoft.Network/networkInterfaces <!-- Data source : arm-->
This latest update adds a new column and reorders the metrics to be alphabetical
|PacketsReceivedRate |Yes |Packets Received |Count |Total |Number of packets the Network Interface received |No Dimensions | |PacketsSentRate |Yes |Packets Sent |Count |Total |Number of packets the Network Interface sent |No Dimensions | - ## Microsoft.Network/networkWatchers/connectionMonitors <!-- Data source : arm-->
This latest update adds a new column and reorders the metrics to be alphabetical
|RoundTripTimeMs |Yes |Round-Trip Time (ms) |MilliSeconds |Average |Round-trip time in milliseconds for the connectivity monitoring checks |SourceAddress, SourceName, SourceResourceId, SourceType, Protocol, DestinationAddress, DestinationName, DestinationResourceId, DestinationType, DestinationPort, TestGroupName, TestConfigurationName, SourceIP, DestinationIP, SourceSubnet, DestinationSubnet | |TestResult |Yes |Test Result |Count |Average |Connection monitor test result |SourceAddress, SourceName, SourceResourceId, SourceType, Protocol, DestinationAddress, DestinationName, DestinationResourceId, DestinationType, DestinationPort, TestGroupName, TestConfigurationName, TestResultCriterion, SourceIP, DestinationIP, SourceSubnet, DestinationSubnet | - ## microsoft.network/p2svpngateways <!-- Data source : naam-->
This latest update adds a new column and reorders the metrics to be alphabetical
|P2SConnectionCount |Yes |P2S Connection Count |Count |Total |Point-to-site connection count of a gateway |Protocol, Instance | |UserVpnRouteCount |No |User Vpn Route Count |Count |Total |Count of P2S User Vpn routes learned by gateway |RouteType, Instance | - ## Microsoft.Network/privateDnsZones <!-- Data source : arm-->
This latest update adds a new column and reorders the metrics to be alphabetical
|VirtualNetworkWithRegistrationCapacityUtilization |No |Virtual Network Registration Link Capacity Utilization |Percent |Maximum |Percent of Virtual Network Link with auto-registration capacity utilized by a Private DNS zone |No Dimensions | |VirtualNetworkWithRegistrationLinkCount |No |Virtual Network Registration Link Count |Count |Maximum |Number of Virtual Networks linked to a Private DNS zone with auto-registration enabled |No Dimensions | - ## Microsoft.Network/privateEndpoints <!-- Data source : arm-->
This latest update adds a new column and reorders the metrics to be alphabetical
|PEBytesIn |Yes |Bytes In |Count |Total |Total number of Bytes In |No Dimensions |
|PEBytesOut |Yes |Bytes Out |Count |Total |Total number of Bytes Out |No Dimensions |

## Microsoft.Network/privateLinkServices
<!-- Data source : arm-->
This latest update adds a new column and reorders the metrics to be alphabetical
|PLSBytesOut |Yes |Bytes Out |Count |Total |Total number of Bytes Out |PrivateLinkServiceId | |PLSNatPortsUsage |Yes |Nat Ports Usage |Percent |Average |Nat Ports Usage |PrivateLinkServiceId, PrivateLinkServiceIPAddress | - ## Microsoft.Network/publicIPAddresses <!-- Data source : naam-->
This latest update adds a new column and reorders the metrics to be alphabetical
|UDPPacketsInDDoS |Yes |Inbound UDP packets DDoS |CountPerSecond |Maximum |Inbound UDP packets DDoS |No Dimensions | |VipAvailability |Yes |Data Path Availability |Count |Average |Average IP Address availability per time duration |Port | - ## Microsoft.Network/trafficManagerProfiles <!-- Data source : arm-->
This latest update adds a new column and reorders the metrics to be alphabetical
|ProbeAgentCurrentEndpointStateByProfileResourceId |Yes |Endpoint Status by Endpoint |Count |Maximum |1 if an endpoint's probe status is "Enabled", 0 otherwise. |EndpointName | |QpsByEndpoint |Yes |Queries by Endpoint Returned |Count |Total |Number of times a Traffic Manager endpoint was returned in the given time frame |EndpointName | - ## Microsoft.Network/virtualHubs <!-- Data source : naam-->
This latest update adds a new column and reorders the metrics to be alphabetical
|CountOfRoutesLearnedFromPeer |No |Count Of Routes Learned From Peer |Count |Maximum |Total number of routes learned from peer |routeserviceinstance, bgppeerip, bgppeertype | |VirtualHubDataProcessed |No |Data Processed by the Virtual Hub Router |Bytes |Total |Data Processed by the Virtual Hub Router |No Dimensions | - ## microsoft.network/virtualnetworkgateways <!-- Data source : naam-->
This latest update adds a new column and reorders the metrics to be alphabetical
|BgpPeerStatus |No |BGP Peer Status |Count |Average |Status of BGP peer |BgpPeerAddress, Instance |
|BgpRoutesAdvertised |Yes |BGP Routes Advertised |Count |Total |Count of Bgp Routes Advertised through tunnel |BgpPeerAddress, Instance |
|BgpRoutesLearned |Yes |BGP Routes Learned |Count |Total |Count of Bgp Routes Learned through tunnel |BgpPeerAddress, Instance |
+|ExpressRouteGatewayActiveFlows |No |Active Flows |Count |Average |Number of Active Flows on ExpressRoute Gateway |roleInstance |
|ExpressRouteGatewayBitsPerSecond |No |Bits Received Per second |BitsPerSecond |Average |Total Bits received on ExpressRoute Gateway per second |roleInstance |
|ExpressRouteGatewayCountOfRoutesAdvertisedToPeer |Yes |Count Of Routes Advertised to Peer |Count |Maximum |Count Of Routes Advertised To Peer by ExpressRoute Gateway |roleInstance |
|ExpressRouteGatewayCountOfRoutesLearnedFromPeer |Yes |Count Of Routes Learned from Peer |Count |Maximum |Count Of Routes Learned From Peer by ExpressRoute Gateway |roleInstance |
|ExpressRouteGatewayCpuUtilization |Yes |CPU utilization |Percent |Average |CPU Utilization of the ExpressRoute Gateway |roleInstance |
|ExpressRouteGatewayFrequencyOfRoutesChanged |No |Frequency of Routes change |Count |Total |Frequency of Routes change in ExpressRoute Gateway |roleInstance |
+|ExpressRouteGatewayMaxFlowsCreationRate |No |Max Flows Created Per Second |CountPerSecond |Maximum |Maximum Number of Flows Created Per Second on ExpressRoute Gateway |roleInstance, direction |
|ExpressRouteGatewayNumberOfVmInVnet |No |Number of VMs in the Virtual Network |Count |Maximum |Number of VMs in the Virtual Network |roleInstance |
|ExpressRouteGatewayPacketsPerSecond |No |Packets received per second |CountPerSecond |Average |Total Packets received on ExpressRoute Gateway per second |roleInstance |
|MmsaCount |Yes |Tunnel MMSA Count |Count |Total |MMSA Count |ConnectionName, RemoteIP, Instance |
This latest update adds a new column and reorders the metrics to be alphabetical
|UserVpnRouteCount |No |User Vpn Route Count |Count |Total |Count of P2S User Vpn routes learned by gateway |RouteType, Instance | |VnetAddressPrefixCount |Yes |VNet Address Prefix Count |Count |Total |Count of Vnet address prefixes behind gateway |Instance | - ## Microsoft.Network/virtualNetworks <!-- Data source : arm-->
This latest update adds a new column and reorders the metrics to be alphabetical
|UDPPacketsForwardedDDoS |Yes |Inbound UDP packets forwarded DDoS |CountPerSecond |Maximum |Inbound UDP packets forwarded DDoS |ProtectedIPAddress | |UDPPacketsInDDoS |Yes |Inbound UDP packets DDoS |CountPerSecond |Maximum |Inbound UDP packets DDoS |ProtectedIPAddress | - ## Microsoft.Network/virtualRouters <!-- Data source : arm-->
This latest update adds a new column and reorders the metrics to be alphabetical
|||||||| |PeeringAvailability |Yes |Bgp Availability |Percent |Average |BGP Availability between VirtualRouter and remote peers |Peer | - ## microsoft.network/vpngateways <!-- Data source : naam-->
This latest update adds a new column and reorders the metrics to be alphabetical
|TunnelTotalFlowCount |Yes |Tunnel Total Flow Count |Count |Total |Total flow count on a tunnel |ConnectionName, RemoteIP, Instance |
|VnetAddressPrefixCount |Yes |VNet Address Prefix Count |Count |Total |Count of Vnet address prefixes behind gateway |Instance |
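Metrics in these tables that list dimensions, such as TunnelTotalFlowCount above (ConnectionName, RemoteIP, Instance), can be split into one time series per dimension value at query time. A minimal sketch, assuming the `azure-monitor-query` Python SDK and a placeholder VPN gateway resource ID:

```python
# Minimal sketch (assumed SDK: azure-monitor-query; placeholder resource ID):
# split the vpngateways TunnelTotalFlowCount metric by its ConnectionName
# dimension, yielding one time series per connection.
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import MetricsQueryClient, MetricAggregationType

# Placeholder: replace with a real VPN gateway resource ID.
GATEWAY_ID = (
    "/subscriptions/<subscription-id>/resourceGroups/<resource-group>"
    "/providers/Microsoft.Network/vpnGateways/<gateway-name>"
)

client = MetricsQueryClient(DefaultAzureCredential())
result = client.query_resource(
    GATEWAY_ID,
    metric_names=["TunnelTotalFlowCount"],
    timespan=timedelta(hours=6),
    aggregations=[MetricAggregationType.TOTAL],
    filter="ConnectionName eq '*'",  # '*' splits the result per dimension value
)

for metric in result.metrics:
    for series in metric.timeseries:
        total = sum(point.total or 0 for point in series.data)
        print(series.metadata_values, total)
```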
+## Microsoft.NetworkAnalytics/DataConnectors
+<!-- Data source : naam-->
+
+|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
+||||||||
+|DataIngested |No |Data Ingested |Bytes |Total |The volume of data ingested by the pipeline (bytes). |No Dimensions |
+|MalformedData |Yes |Malformed Data |Count |Total |The number of files unable to be processed by the pipeline. |No Dimensions |
+|ProcessedFileCount |Yes |Processed File Count |Count |Total |The number of files processed by the data connector. |No Dimensions |
+|Running |Yes |Running |Unspecified |Count |Values greater than 0 indicate that the pipeline is ready to process data. |No Dimensions |
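One way to confirm which of these newly listed metrics (DataIngested, MalformedData, ProcessedFileCount, Running) a given resource actually emits is to enumerate its metric definitions. A minimal sketch, assuming the `azure-monitor-query` Python SDK and a placeholder data connector resource ID:

```python
# Minimal sketch (assumed SDK: azure-monitor-query; placeholder resource ID):
# list the metric definitions exposed by a Microsoft.NetworkAnalytics data
# connector and print the name, unit, and default aggregation for each.
from azure.identity import DefaultAzureCredential
from azure.monitor.query import MetricsQueryClient

# Placeholder: replace with a real data connector resource ID.
CONNECTOR_ID = (
    "/subscriptions/<subscription-id>/resourceGroups/<resource-group>"
    "/providers/Microsoft.NetworkAnalytics/dataConnectors/<connector-name>"
)

client = MetricsQueryClient(DefaultAzureCredential())
for definition in client.list_metric_definitions(CONNECTOR_ID):
    print(definition.name, definition.unit, definition.primary_aggregation_type)
```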
## Microsoft.NetworkFunction/azureTrafficCollectors <!-- Data source : naam-->
This latest update adds a new column and reorders the metrics to be alphabetical
|usage_active |Yes |CPU Usage |Percent |Average |CPU Usage Percentage. |Hostname | |used_percent |Yes |Memory Usage |Percent |Average |Memory Usage Percentage. |Hostname | - ## Microsoft.NotificationHubs/Namespaces/NotificationHubs <!-- Data source : arm-->
This latest update adds a new column and reorders the metrics to be alphabetical
|registration.update |Yes |Registration Update Operations |Count |Total |The count of all successful registration updates. |No Dimensions | |scheduled.pending |Yes |Pending Scheduled Notifications |Count |Total |Pending Scheduled Notifications |No Dimensions | - ## Microsoft.OperationalInsights/workspaces <!-- Data source : naam-->
This latest update adds a new column and reorders the metrics to be alphabetical
|Query Success Rate |No |Query Success Rate |Percent |Average |User query success rate for this workspace. |IsUserQuery | |Update |Yes |Update |Count |Average |Update. Supported for: Windows. Part of [metric alerts for logs feature](https://aka.ms/am-log-to-metric). |Computer, Product, Classification, UpdateState, Optional, Approved | - ## Microsoft.Orbital/contactProfiles <!-- Data source : naam-->
This latest update adds a new column and reorders the metrics to be alphabetical
|ContactFailure |Yes |Contact Failure Count |Count |Count |Denotes the number of failed Contacts for a specific Contact Profile |No Dimensions | |ContactSuccess |Yes |Contact Success Count |Count |Count |Denotes the number of successful Contacts for a specific Contact Profile |No Dimensions | - ## Microsoft.Orbital/l2Connections <!-- Data source : naam-->
This latest update adds a new column and reorders the metrics to be alphabetical
|OutUcastPktCount |Yes |Out Unicast Packet Count |Count |Average |Egress Unicast Packet Count for the L2 connection |No Dimensions | |OutUCastPktsPerVLAN |Yes |Out Unicast Packet Count Per Vlan |Count |Average |Egress Subinterface Unicast Packet Count for the L2 connection |VLANID | - ## Microsoft.Orbital/spacecrafts <!-- Data source : naam-->
This latest update adds a new column and reorders the metrics to be alphabetical
|ContactFailure |Yes |Contact Failure Count |Count |Count |Denotes the number of failed Contacts for a specific Spacecraft |No Dimensions |
|ContactSuccess |Yes |Contact Success Count |Count |Count |Denotes the number of successful Contacts for a specific Spacecraft |No Dimensions |

## Microsoft.Peering/peerings
<!-- Data source : arm-->

|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
||||||||
+|AverageCustomerPrefixLatency |Yes |Average Customer Prefix Latency |Milliseconds |Average |Average of median Customer prefix latency |RegisteredAsnName |
|EgressTrafficRate |Yes |Egress Traffic Rate |BitsPerSecond |Average |Egress traffic rate in bits per second |ConnectionId, SessionIp, TrafficClass |
|FlapCounts |Yes |Connection Flap Events Count |Count |Sum |Flap Events Count in all the connection |ConnectionId, SessionIp |
|IngressTrafficRate |Yes |Ingress Traffic Rate |BitsPerSecond |Average |Ingress traffic rate in bits per second |ConnectionId, SessionIp, TrafficClass |
|PacketDropRate |Yes |Packets Drop Rate |BitsPerSecond |Average |Packets Drop rate in bits per second |ConnectionId, SessionIp, TrafficClass |
+|RegisteredPrefixLatency |Yes |Prefix Latency |Milliseconds |Average |Median prefix latency |RegisteredPrefixName |
|SessionAvailability |Yes |Session Availability |Count |Average |Availability of the peering session |ConnectionId, SessionIp |

## Microsoft.Peering/peeringServices
<!-- Data source : arm-->
This latest update adds a new column and reorders the metrics to be alphabetical
|||||||| |RoundTripTime |Yes |Round Trip Time |Milliseconds |Average |Average round trip time |ConnectionMonitorTestName | - ## Microsoft.PlayFab/titles <!-- Data source : naam-->
This latest update adds a new column and reorders the metrics to be alphabetical
|||||||| |PlayerLoggedInCount |Yes |PlayerLoggedInCount |Count |Count |Number of logins by any player in a given title |TitleId | - ## Microsoft.PowerBIDedicated/capacities <!-- Data source : arm-->
This latest update adds a new column and reorders the metrics to be alphabetical
|cpu_metric |Yes |CPU (Gen2) |Percent |Average |CPU Utilization. Supported only for Power BI Embedded Generation 2 resources. |No Dimensions | |overload_metric |Yes |Overload (Gen2) |Count |Average |Resource Overload, 1 if resource is overloaded, otherwise 0. Supported only for Power BI Embedded Generation 2 resources. |No Dimensions | - ## microsoft.purview/accounts <!-- Data source : naam-->
This latest update adds a new column and reorders the metrics to be alphabetical
|ScanFailed |Yes |Scan Failed |Count |Total |Indicates the number of scans failed. |No Dimensions | |ScanTimeTaken |Yes |Scan time taken |Seconds |Total |Indicates the total scan time in seconds. |No Dimensions | - ## Microsoft.RecoveryServices/Vaults <!-- Data source : naam-->
This latest update adds a new column and reorders the metrics to be alphabetical
|BackupHealthEvent |Yes |Backup Health Events (preview) |Count |Count |The count of health events pertaining to backup job health |dataSourceURL, backupInstanceUrl, dataSourceType, healthStatus, backupInstanceName | |RestoreHealthEvent |Yes |Restore Health Events (preview) |Count |Count |The count of health events pertaining to restore job health |dataSourceURL, backupInstanceUrl, dataSourceType, healthStatus, backupInstanceName | - ## Microsoft.Relay/namespaces <!-- Data source : naam-->
This latest update adds a new column and reorders the metrics to be alphabetical
|SenderConnections-TotalRequests |No |SenderConnections-TotalRequests |Count |Total |Total SenderConnections requests for Microsoft.Relay. |EntityName | |SenderDisconnects |No |SenderDisconnects |Count |Total |Total SenderDisconnects for Microsoft.Relay. |EntityName | - ## microsoft.resources/subscriptions <!-- Data source : naam-->
This latest update adds a new column and reorders the metrics to be alphabetical
|Latency |No |Latency |Seconds |Average |Latency data for all requests to Azure Resource Manager |IsCustomerOriginated, Method, Namespace, RequestRegion, ResourceType, StatusCode, StatusCodeClass, Microsoft.SubscriptionId | |Traffic |No |Traffic |Count |Count |Traffic data for all requests to Azure Resource Manager |IsCustomerOriginated, Method, Namespace, RequestRegion, ResourceType, StatusCode, StatusCodeClass, Microsoft.SubscriptionId | - ## Microsoft.Search/searchServices <!-- Data source : naam-->
This latest update adds a new column and reorders the metrics to be alphabetical
|SkillExecutionCount |Yes |Skill execution invocation count |Count |Total |Number of skill executions |DataSourceName, Failed, IndexerName, SkillName, SkillsetName, SkillType | |ThrottledSearchQueriesPercentage |Yes |Throttled search queries percentage |Percent |Average |Percentage of search queries that were throttled for the search service |No Dimensions | - ## microsoft.securitydetonation/chambers <!-- Data source : naam-->
This latest update adds a new column and reorders the metrics to be alphabetical
|SubmissionsOutstanding |No |Outstanding Submissions |Count |Average |The average number of outstanding submissions that are queued for processing. |Region | |SubmissionsSucceeded |No |Successful Submissions / Hr |Count |Maximum |The number of successful submissions / Hr. |Region | - ## Microsoft.SecurityDetonation/SecurityDetonationChambers <!-- Data source : arm-->
This latest update adds a new column and reorders the metrics to be alphabetical
|||||||| |% Processor Time |Yes |% CPU |Percent |Average |Percent CPU utilization |No Dimensions | - ## Microsoft.ServiceBus/Namespaces <!-- Data source : naam-->
This latest update adds a new column and reorders the metrics to be alphabetical
|UserErrors |No |User Errors. |Count |Total |User Errors for Microsoft.ServiceBus. |EntityName, OperationResult | |WSXNS |No |Memory Usage (Deprecated) |Percent |Maximum |Service bus premium namespace memory usage metric. This metric is deprecated. Please use the Memory Usage (NamespaceMemoryUsage) metric instead. |Replica | - ## Microsoft.SignalRService/SignalR <!-- Data source : naam-->
This latest update adds a new column and reorders the metrics to be alphabetical
|SystemErrors |Yes |System Errors |Percent |Maximum |The percentage of system errors |No Dimensions | |UserErrors |Yes |User Errors |Percent |Maximum |The percentage of user errors |No Dimensions | - ## Microsoft.SignalRService/WebPubSub <!-- Data source : naam-->
This latest update adds a new column and reorders the metrics to be alphabetical
|ServerLoad |No |Server Load |Percent |Maximum |SignalR server load. |No Dimensions | |TotalConnectionCount |Yes |Connection Count |Count |Maximum |The number of user connections established to the service. It is aggregated by adding all the online connections. |No Dimensions | - ## microsoft.singularity/accounts <!-- Data source : naam-->
This latest update adds a new column and reorders the metrics to be alphabetical
|||||||| |GpuUtilizationPercentage |Yes |GpuUtilizationPercentage |Percent |Average |GPU utilization percentage |accountname, ClusterName, Environment, instance, jobContainerId, jobInstanceId, jobname, Region | - ## Microsoft.Sql/managedInstances <!-- Data source : naam-->
This latest update adds a new column and reorders the metrics to be alphabetical
|storage_space_used_mb |Yes |Storage space used |Count |Average |Storage space used |No Dimensions | |virtual_core_count |Yes |Virtual core count |Count |Average |Virtual core count |No Dimensions | - ## Microsoft.Sql/servers/databases <!-- Data source : naam-->
This latest update adds a new column and reorders the metrics to be alphabetical
|workers_percent |Yes |Workers percentage |Percent |Average |Workers percentage. Not applicable to data warehouses. |No Dimensions | |xtp_storage_percent |Yes |In-Memory OLTP storage percent |Percent |Average |In-Memory OLTP storage percent. Not applicable to data warehouses. |No Dimensions | - ## Microsoft.Sql/servers/elasticpools <!-- Data source : naam-->
This latest update adds a new column and reorders the metrics to be alphabetical
|workers_percent |Yes |Workers percentage |Percent |Average |Workers percentage |No Dimensions | |xtp_storage_percent |Yes |In-Memory OLTP storage percent |Percent |Average |In-Memory OLTP storage percent. Not applicable to hyperscale |No Dimensions | - ## Microsoft.Storage/storageAccounts <!-- Data source : naam-->
This latest update adds a new column and reorders the metrics to be alphabetical
|Transactions |Yes |Transactions |Count |Total |The number of requests made to a storage service or the specified API operation. This number includes successful and failed requests, as well as requests which produced errors. Use ResponseType dimension for the number of different type of response. |ResponseType, GeoType, ApiName, Authentication, TransactionType | |UsedCapacity |No |Used capacity |Bytes |Average |The amount of storage used by the storage account. For standard storage accounts, it's the sum of capacity used by blob, table, file, and queue. For premium storage accounts and Blob storage accounts, it is the same as BlobCapacity or FileCapacity. |No Dimensions | - ## Microsoft.Storage/storageAccounts/blobServices <!-- Data source : naam-->
This latest update adds a new column and reorders the metrics to be alphabetical
|SuccessServerLatency |Yes |Success Server Latency |MilliSeconds |Average |The average time used to process a successful request by Azure Storage. This value does not include the network latency specified in SuccessE2ELatency. |GeoType, ApiName, Authentication | |Transactions |Yes |Transactions |Count |Total |The number of requests made to a storage service or the specified API operation. This number includes successful and failed requests, as well as requests which produced errors. Use ResponseType dimension for the number of different type of response. |ResponseType, GeoType, ApiName, Authentication, TransactionType | - ## Microsoft.Storage/storageAccounts/fileServices <!-- Data source : naam-->
This latest update adds a new column and reorders the metrics to be alphabetical
|SuccessServerLatency |Yes |Success Server Latency |MilliSeconds |Average |The average time used to process a successful request by Azure Storage. This value does not include the network latency specified in SuccessE2ELatency. |GeoType, ApiName, Authentication, FileShare | |Transactions |Yes |Transactions |Count |Total |The number of requests made to a storage service or the specified API operation. This number includes successful and failed requests, as well as requests which produced errors. Use ResponseType dimension for the number of different type of response. |ResponseType, GeoType, ApiName, Authentication, FileShare, TransactionType | - ## Microsoft.Storage/storageAccounts/objectReplicationPolicies <!-- Data source : naam-->
This latest update adds a new column and reorders the metrics to be alphabetical
|PendingBytesForReplication |No |Pending Bytes for Replication (PREVIEW) |Bytes |Average |The size in bytes of the blob object pending for replication, please note, this metric is in preview and is subject to change before becoming generally available |TimeBucket | |PendingOperationsForReplication |No |Pending Operations for Replication (PREVIEW) |Count |Average |The count of pending operations for replication, please note, this metric is in preview and is subject to change before becoming generally available |TimeBucket | - ## Microsoft.Storage/storageAccounts/queueServices <!-- Data source : naam-->
This latest update adds a new column and reorders the metrics to be alphabetical
|SuccessServerLatency |Yes |Success Server Latency |MilliSeconds |Average |The average time used to process a successful request by Azure Storage. This value does not include the network latency specified in SuccessE2ELatency. |GeoType, ApiName, Authentication | |Transactions |Yes |Transactions |Count |Total |The number of requests made to a storage service or the specified API operation. This number includes successful and failed requests, as well as requests which produced errors. Use ResponseType dimension for the number of different type of response. |ResponseType, GeoType, ApiName, Authentication, TransactionType | - ## Microsoft.Storage/storageAccounts/storageTasks <!-- Data source : naam-->
This latest update adds a new column and reorders the metrics to be alphabetical
|ObjectsOperationFailedCount |Yes |Objects failed count |Count |Total |The number of objects failed in storage task |AccountName, TaskAssignmentId | |ObjectsTargetedCount |Yes |Objects targeted count |Count |Total |The number of objects targeted in storage task |AccountName, TaskAssignmentId | - ## Microsoft.Storage/storageAccounts/tableServices <!-- Data source : naam-->
This latest update adds a new column and reorders the metrics to be alphabetical
|TableEntityCount |Yes |Table Entity Count |Count |Average |The number of table entities in the storage account. |No Dimensions | |Transactions |Yes |Transactions |Count |Total |The number of requests made to a storage service or the specified API operation. This number includes successful and failed requests, as well as requests which produced errors. Use ResponseType dimension for the number of different type of response. |ResponseType, GeoType, ApiName, Authentication, TransactionType | - ## Microsoft.Storage/storageTasks <!-- Data source : naam-->
This latest update adds a new column and reorders the metrics to be alphabetical
|ObjectsOperationFailedCount |Yes |Objects failed count |Count |Total |The number of objects failed in storage task |AccountName, TaskAssignmentId | |ObjectsTargetedCount |Yes |Objects targeted count |Count |Total |The number of objects targeted in storage task |AccountName, TaskAssignmentId | - ## Microsoft.StorageCache/amlFilesystems <!-- Data source : naam-->
This latest update adds a new column and reorders the metrics to be alphabetical
|OSTFilesTotal |No |OST Files Total |Count |Average |Total number of files supported on the OST. |ostnum | |OSTFilesUsed |No |OST Files Used |Count |Average |Number of total supported files minus the number of free files on the OST. |ostnum | - ## Microsoft.StorageCache/caches <!-- Data source : naam-->
This latest update adds a new column and reorders the metrics to be alphabetical
|TotalUsedWriteSpace |Yes |Used Write Space |Bytes |Average |Total write space used by changed files for the HPC Cache. |No Dimensions | |Uptime |Yes |Uptime |Count |Average |Boolean results of connectivity test between the Cache and monitoring system. |No Dimensions | - ## Microsoft.StorageMover/storageMovers <!-- Data source : naam-->
This latest update adds a new column and reorders the metrics to be alphabetical
|JobRunTransferThroughputBytes |Yes |Job Run Transfer Throughput Bytes |BytesPerSecond |Average |Job Run transfer throughput in bytes/sec |JobRunName | |JobRunTransferThroughputItems |Yes |Job Run Transfer Throughput Items |CountPerSecond |Average |Job Run transfer throughput in items/sec |JobRunName | - ## Microsoft.StorageSync/storageSyncServices <!-- Data source : arm-->
This latest update adds a new column and reorders the metrics to be alphabetical
|StorageSyncTieredDataSizeBytes |Yes |Cloud tiering size of data tiered |Bytes |Average |Size of data tiered to Azure file share |SyncGroupName, ServerName, ServerEndpointName | |StorageSyncTieringCacheSizeBytes |Yes |Server cache size |Bytes |Average |Size of data cached on the server |SyncGroupName, ServerName, ServerEndpointName | - ## Microsoft.StreamAnalytics/streamingjobs <!-- Data source : arm-->
This latest update adds a new column and reorders the metrics to be alphabetical
|ProcessCPUUsagePercentage |Yes |CPU % Utilization |Percent |Maximum |CPU % Utilization |LogicalName, PartitionId, ProcessorInstance, NodeName | |ResourceUtilization |Yes |SU (Memory) % Utilization |Percent |Maximum |SU (Memory) % Utilization |LogicalName, PartitionId, ProcessorInstance, NodeName | - ## Microsoft.Synapse/workspaces <!-- Data source : naam-->
This latest update adds a new column and reorders the metrics to be alphabetical
|SQLStreamingOutOfOrderEvents |No |Out of order events (preview) |Count |Total |This is a preview metric available in East US, West Europe. Number of Event Hub Events (serialized messages) received by the Event Hub Input Adapter, received out of order that were either dropped or given an adjusted timestamp, based on the Event Ordering Policy. |SQLPoolName, SQLDatabaseName, JobName, LogicalName, PartitionId, ProcessorInstance | |SQLStreamingOutputEvents |No |Output events (preview) |Count |Total |This is a preview metric available in East US, West Europe. Number of output events. |SQLPoolName, SQLDatabaseName, JobName, LogicalName, PartitionId, ProcessorInstance | |SQLStreamingOutputWatermarkDelaySeconds |No |Watermark delay (preview) |Count |Maximum |This is a preview metric available in East US, West Europe. Output watermark delay in seconds. |SQLPoolName, SQLDatabaseName, JobName, LogicalName, PartitionId, ProcessorInstance |
-|SQLStreamingResourceUtilization |No |Resource % utilization (preview) |Percent |Maximum |This is a preview metric available in East US, West Europe.
+|SQLStreamingResourceUtilization |No |Resource % utilization (preview) |Percent |Maximum |This is a preview metric available in East US, West Europe.
Resource utilization expressed as a percentage. High utilization indicates that the job is using close to the maximum allocated resources. |SQLPoolName, SQLDatabaseName, JobName, LogicalName, PartitionId, ProcessorInstance | |SQLStreamingRuntimeErrors |No |Runtime errors (preview) |Count |Total |This is a preview metric available in East US, West Europe. Total number of errors related to query processing (excluding errors found while ingesting events or outputting results). |SQLPoolName, SQLDatabaseName, JobName, LogicalName, PartitionId, ProcessorInstance | - ## Microsoft.Synapse/workspaces/bigDataPools <!-- Data source : naam-->
This latest update adds a new column and reorders the metrics to be alphabetical
|BigDataPoolApplicationsActive |No |Active Apache Spark applications |Count |Maximum |Total Active Apache Spark Pool Applications |JobState | |BigDataPoolApplicationsEnded |No |Ended Apache Spark applications |Count |Total |Count of Apache Spark pool applications ended |JobType, JobResult | - ## Microsoft.Synapse/workspaces/scopePools <!-- Data source : naam-->
This latest update adds a new column and reorders the metrics to be alphabetical
|ScopePoolJobQueuedDurationMetric |Yes |Queued duration of SCOPE job |Milliseconds |Average |Queued duration (Milliseconds) used by each SCOPE job |JobType | |ScopePoolJobRunningDurationMetric |Yes |Running duration of SCOPE job |Milliseconds |Average |Running duration (Milliseconds) used by each SCOPE job |JobType, JobResult | - ## Microsoft.Synapse/workspaces/sqlPools <!-- Data source : naam-->
This latest update adds a new column and reorders the metrics to be alphabetical
|WLGEffectiveMinResourcePercent |No |Effective min resource percent |Percent |Maximum |The effective min resource percentage setting allowed considering the service level and the workload group settings. The effective min_percentage_resource can be adjusted higher on lower service levels |IsUserDefined, WorkloadGroup | |WLGQueuedQueries |No |Workload group queued queries |Count |Total |Cumulative count of requests queued after the max concurrency limit was reached |IsUserDefined, WorkloadGroup | - ## Microsoft.TimeSeriesInsights/environments <!-- Data source : arm-->
This latest update adds a new column and reorders the metrics to be alphabetical
|WarmStorageMaxProperties |Yes |Warm Storage Max Properties |Count |Maximum |Maximum number of properties used allowed by the environment for S1/S2 SKU and maximum number of properties allowed by Warm Store for PAYG SKU |No Dimensions | |WarmStorageUsedProperties |Yes |Warm Storage Used Properties |Count |Maximum |Number of properties used by the environment for S1/S2 SKU and number of properties used by Warm Store for PAYG SKU |No Dimensions | - ## Microsoft.TimeSeriesInsights/environments/eventsources <!-- Data source : arm-->
This latest update adds a new column and reorders the metrics to be alphabetical
|WarmStorageMaxProperties |Yes |Warm Storage Max Properties |Count |Maximum |Maximum number of properties used allowed by the environment for S1/S2 SKU and maximum number of properties allowed by Warm Store for PAYG SKU |No Dimensions | |WarmStorageUsedProperties |Yes |Warm Storage Used Properties |Count |Maximum |Number of properties used by the environment for S1/S2 SKU and number of properties used by Warm Store for PAYG SKU |No Dimensions | - ## Microsoft.Web/containerapps <!-- Data source : naam-->
This latest update adds a new column and reorders the metrics to be alphabetical
|UsageNanoCores |Yes |CPU Usage Nanocores |NanoCores |Average |CPU consumed by the container app, in nano cores. 1,000,000,000 nano cores = 1 core |revisionName, podName | |WorkingSetBytes |Yes |Memory Working Set Bytes |Bytes |Average |Container App working set memory used in bytes. |revisionName, podName | - ## Microsoft.Web/hostingEnvironments <!-- Data source : naam-->
This latest update adds a new column and reorders the metrics to be alphabetical
|SmallAppServicePlanInstances |Yes |Small App Service Plan Workers |Count |Average |Number of small App Service Plan worker instances |No Dimensions | |TotalFrontEnds |Yes |Total Front Ends |Count |Average |Number of front end instances |No Dimensions | - ## Microsoft.Web/hostingenvironments/multirolepools <!-- Data source : naam-->
This latest update adds a new column and reorders the metrics to be alphabetical
|SmallAppServicePlanInstances |Yes |Small App Service Plan Workers |Count |Average |Small App Service Plan Workers |No Dimensions | |TotalFrontEnds |Yes |Total Front Ends |Count |Average |Total Front Ends |No Dimensions | - ## Microsoft.Web/hostingenvironments/workerpools <!-- Data source : naam-->
This latest update adds a new column and reorders the metrics to be alphabetical
|WorkersTotal |Yes |Total Workers |Count |Average |Total Workers |No Dimensions | |WorkersUsed |Yes |Used Workers |Count |Average |Used Workers |No Dimensions | - ## Microsoft.Web/serverfarms <!-- Data source : naam-->
This latest update adds a new column and reorders the metrics to be alphabetical
|TcpSynSent |Yes |TCP Syn Sent |Count |Average |The average number of sockets in SYN_SENT state across all the instances of the plan. |Instance | |TcpTimeWait |Yes |TCP Time Wait |Count |Average |The average number of sockets in TIME_WAIT state across all the instances of the plan. |Instance | - ## Microsoft.Web/sites <!-- Data source : naam-->
This latest update adds a new column and reorders the metrics to be alphabetical
|WorkflowRunsStarted |Yes |Workflow Runs Started Count |Count |Total |Workflow Runs Started Count. For LogicApps only. |workflowName | |WorkflowTriggersCompleted |Yes |Workflow Triggers Completed Count |Count |Total |Workflow Triggers Completed Count. For LogicApps only. |workflowName, status | - ## Microsoft.Web/sites/slots <!-- Data source : naam-->
This latest update adds a new column and reorders the metrics to be alphabetical
|TotalAppDomains |Yes |Total App Domains |Count |Average |The current number of AppDomains loaded in this application. |Instance | |TotalAppDomainsUnloaded |Yes |Total App Domains Unloaded |Count |Average |The total number of AppDomains unloaded since the start of the application. |Instance | - ## NGINX.NGINXPLUS/nginxDeployments <!-- Data source : naam-->
This latest update adds a new column and reorders the metrics to be alphabetical
|||||||| |nginx |Yes |nginx |Count |Total |The NGINX metric. |No Dimensions | - ## Wandisco.Fusion/migrators <!-- Data source : naam-->
This latest update adds a new column and reorders the metrics to be alphabetical
|TotalMigratedDataInBytes |Yes |Total Migrated Data in Bytes |Bytes |Total |This provides a view of the successfully migrated Bytes for a given migrator |No Dimensions | |TotalTransactions |Yes |Total Transactions |Count |Total |This provides a running total of the Data Transactions for which the user could be billed. |No Dimensions | - ## Wandisco.Fusion/migrators/liveDataMigrations <!-- Data source : naam-->
This latest update adds a new column and reorders the metrics to be alphabetical
|NumberOfFailedPaths |Yes |Number of Failed Paths |Count |Total |A count of which paths have failed to migrate. |No Dimensions | |TotalBytesTransferred |Yes |Total Bytes Transferred |Bytes |Total |This metric covers how many bytes have been transferred (does not reflect how many have successfully migrated, only how much has been transferred). |No Dimensions | - ## Wandisco.Fusion/migrators/metadataMigrations <!-- Data source : naam-->
This latest update adds a new column and reorders the metrics to be alphabetical
|LiveHiveInitiallyDiscoveredItems |Yes |Initially Discovered Hive Items |Count |Total |This provides the view of the total items discovered as a result of the initial scan of the On-Premises file system. Any items that are discovered after the initial scan, are NOT included in this metric. |No Dimensions | |LiveHiveInitiallyMigratedItems |Yes |Initially Migrated Hive Items |Count |Total |This provides the view of the total items migrated as a result of the initial scan of the On-Premises file system. Any items that are added after the initial scan, are NOT included in this metric. |No Dimensions | |LiveHiveMigratedItems |Yes |Migrated Hive Items |Count |Total |Provides a running total of how many items have been migrated. |No Dimensions |++
+## Next steps
+
+- [Read about metrics in Azure Monitor](../data-platform.md)
+- [Create alerts on metrics](../alerts/alerts-overview.md)
+- [Export metrics to storage, Event Hub, or Log Analytics](../essentials/platform-logs-overview.md)
-## Next steps
--- [Read about metrics in Azure Monitor](../data-platform.md)-- [Create alerts on metrics](../alerts/alerts-overview.md)-- [Export metrics to storage, Event Hub, or Log Analytics](../essentials/platform-logs-overview.md)--
-<!--Gen Date: Sun Apr 02 2023 09:56:30 GMT+0300 (Israel Daylight Time)-->
+<!--Gen Date: Thu Apr 13 2023 22:24:40 GMT+0300 (Israel Daylight Time)-->
azure-monitor Resource Logs Categories https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/resource-logs-categories.md
- Title: Supported categories for Azure Monitor resource logs
-description: Understand the supported services and event schemas for Azure Monitor resource logs.
-- Previously updated : 04/02/2023-----
-# Supported categories for Azure Monitor resource logs
-
-> [!NOTE]
-> This list is largely auto-generated. Any modification made to this list via GitHub might be written over without warning. Contact the author of this article for details on how to make permanent updates.
-
-[Azure Monitor resource logs](../essentials/platform-logs-overview.md) are logs emitted by Azure services that describe the operation of those services or resources. All resource logs available through Azure Monitor share a common top-level schema. Each service has the flexibility to emit unique properties for its own events.
-
-Resource logs were previously known as diagnostic logs. The name was changed in October 2019 as the types of logs gathered by Azure Monitor shifted to include more than just the Azure resource.
-
-A combination of the resource type (available in the `resourceId` property) and the category uniquely identifies a schema. There's a common schema for all resource logs with service-specific fields then added for different log categories. For more information, see [Common and service-specific schema for Azure resource logs](./resource-logs-schema.md).
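As a minimal illustration of that point (not part of the generated article), the sketch below shows a hand-written resource-log record using the common top-level fields, with service-specific data nested under `properties`, and a small helper that derives the (resource type, category) pair that identifies the schema. All field values and the helper function are illustrative assumptions.

```python
# Illustrative sketch: a hand-written resource-log record with the common
# top-level fields; values are invented, service-specific data sits in "properties".
sample_record = {
    "time": "2023-04-13T10:15:02.123Z",
    "resourceId": "/SUBSCRIPTIONS/0000/RESOURCEGROUPS/RG/PROVIDERS/MICROSOFT.KEYVAULT/VAULTS/MYVAULT",
    "category": "AuditEvent",
    "operationName": "SecretGet",
    "resultType": "Success",
    "level": "Informational",
    "properties": {"clientInfo": "example-client", "httpStatusCode": 200},
}

def schema_key(record):
    """Return the (resource type, category) pair that identifies the log schema."""
    parts = record["resourceId"].split("/")
    idx = next(i for i, p in enumerate(parts) if p.lower() == "providers")
    resource_type = "/".join(parts[idx + 1 : idx + 3])
    return resource_type, record["category"]

print(schema_key(sample_record))
# ('MICROSOFT.KEYVAULT/VAULTS', 'AuditEvent')
```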
-
-## Costs
-
-[Azure Monitor Log Analytics](https://azure.microsoft.com/pricing/details/monitor/), [Azure Storage](https://azure.microsoft.com/product-categories/storage/), [Azure Event Hubs](https://azure.microsoft.com/pricing/details/event-hubs/), and partners who integrate directly with Azure Monitor (for example, [Datadog](../../partner-solutions/datadog/overview.md)) have costs associated with ingesting data and storing data. Check the pricing pages linked in the previous sentence to understand the costs for those services. Resource logs are just one type of data that you can send to those locations.
-
-In addition, there might be costs to export some categories of resource logs to those locations. Logs with possible export costs are listed in the table in the next section. For more information on export pricing, see the **Platform Logs** section on the [Azure Monitor pricing page](https://azure.microsoft.com/pricing/details/monitor/).
-
-## Supported log categories per resource type
-
-Following is a list of the types of logs available for each resource type.
-
-Some categories might be supported only for specific types of resources. See the resource-specific documentation if you feel you're missing a resource. For example, Microsoft.Sql/servers/databases categories aren't available for all types of databases. For more information, see [information on SQL Database diagnostic logging](/azure/azure-sql/database/metrics-diagnostic-telemetry-logging-streaming-export-configure).
-
-If you think something is missing, you can open a GitHub comment at the bottom of this article.
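As a hedged illustration of how a category from these tables is used in practice, the sketch below enables the Key Vault `AuditEvent` category (listed under Microsoft.KeyVault/vaults later in this article) and routes it to a Log Analytics workspace with the `azure-mgmt-monitor` Python package. The resource and workspace IDs are placeholders, and the exact parameter shape can differ between package versions, so verify against the SDK you install.

```python
# Sketch only: enable one resource-log category via a diagnostic setting.
# Assumes azure-identity and azure-mgmt-monitor are installed; all IDs are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.monitor import MonitorManagementClient

subscription_id = "<subscription-id>"
resource_id = (
    "/subscriptions/<subscription-id>/resourceGroups/<rg>"
    "/providers/Microsoft.KeyVault/vaults/<vault-name>"
)
workspace_id = (
    "/subscriptions/<subscription-id>/resourceGroups/<rg>"
    "/providers/Microsoft.OperationalInsights/workspaces/<workspace-name>"
)

client = MonitorManagementClient(DefaultAzureCredential(), subscription_id)

# "AuditEvent" is one of the categories listed for Microsoft.KeyVault/vaults below.
client.diagnostic_settings.create_or_update(
    resource_uri=resource_id,
    name="send-auditevent-to-workspace",
    parameters={
        "workspace_id": workspace_id,
        "logs": [{"category": "AuditEvent", "enabled": True}],
    },
)
```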
--
-## Microsoft.AAD/DomainServices
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|AccountLogon |AccountLogon |No |
-|AccountManagement |AccountManagement |No |
-|DetailTracking |DetailTracking |No |
-|DirectoryServiceAccess |DirectoryServiceAccess |No |
-|LogonLogoff |LogonLogoff |No |
-|ObjectAccess |ObjectAccess |No |
-|PolicyChange |PolicyChange |No |
-|PrivilegeUse |PrivilegeUse |No |
-|SystemSecurity |SystemSecurity |No |
--
-## microsoft.aadiam/tenants
-<!-- Data source : arm-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|Signin |Signin |Yes |
--
-## Microsoft.AgFoodPlatform/farmBeats
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|ApplicationAuditLogs |Application Audit Logs |Yes |
-|FarmManagementLogs |Farm Management Logs |Yes |
-|FarmOperationLogs |Farm Operation Logs |Yes |
-|InsightLogs |Insight Logs |Yes |
-|JobProcessedLogs |Job Processed Logs |Yes |
-|ModelInferenceLogs |Model Inference Logs |Yes |
-|ProviderAuthLogs |Provider Auth Logs |Yes |
-|SatelliteLogs |Satellite Logs |Yes |
-|SensorManagementLogs |Sensor Management Logs |Yes |
-|WeatherLogs |Weather Logs |Yes |
--
-## Microsoft.AnalysisServices/servers
-<!-- Data source : arm-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|Engine |Engine |No |
-|Service |Service |No |
--
-## Microsoft.ApiManagement/service
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|GatewayLogs |Logs related to ApiManagement Gateway |No |
-|WebSocketConnectionLogs |Logs related to Websocket Connections |Yes |
--
-## Microsoft.App/managedEnvironments
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|AppEnvSpringAppConsoleLogs |Spring App console logs |Yes |
-|ContainerAppConsoleLogs |Container App console logs |Yes |
-|ContainerAppSystemLogs |Container App system logs |Yes |
--
-## Microsoft.AppConfiguration/configurationStores
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|Audit |Audit |Yes |
-|HttpRequest |HTTP Requests |Yes |
--
-## Microsoft.AppPlatform/Spring
-<!-- Data source : arm-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|ApplicationConsole |Application Console |No |
-|BuildLogs |Build Logs |Yes |
-|ContainerEventLogs |Container Event Logs |Yes |
-|IngressLogs |Ingress Logs |Yes |
-|SystemLogs |System Logs |No |
--
-## Microsoft.Attestation/attestationProviders
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|AuditEvent |AuditEvent message log category. |No |
-|NotProcessed |Requests which could not be processed. |Yes |
-|Operational |Operational message log category. |Yes |
--
-## Microsoft.Automation/automationAccounts
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|AuditEvent |AuditEvent |Yes |
-|DscNodeStatus |DscNodeStatus |No |
-|JobLogs |JobLogs |No |
-|JobStreams |JobStreams |No |
--
-## Microsoft.AutonomousDevelopmentPlatform/accounts
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|Audit |Audit |Yes |
-|Operational |Operational |Yes |
-|Request |Request |Yes |
--
-## Microsoft.AutonomousDevelopmentPlatform/workspaces
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|Audit |Audit |Yes |
-|Operational |Operational |Yes |
-|Request |Request |Yes |
--
-## microsoft.avs/privateClouds
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|vmwaresyslog |VMware Syslog |Yes |
--
-## microsoft.azuresphere/catalogs
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|AuditLogs |Audit Logs |Yes |
-|DeviceEvents |Device Events |Yes |
--
-## Microsoft.Batch/batchaccounts
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|AuditLog |Audit Logs |Yes |
-|ServiceLog |Service Logs |No |
-|ServiceLogs |Service Logs |Yes |
--
-## microsoft.botservice/botservices
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|BotRequest |Requests from the channels to the bot |Yes |
--
-## Microsoft.Cache/redis
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|ConnectedClientList |Connected client list |Yes |
--
-## Microsoft.Cache/redisEnterprise/databases
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|ConnectionEvents |Connection events (New Connection/Authentication/Disconnection) |Yes |
--
-## Microsoft.Cdn/cdnwebapplicationfirewallpolicies
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|WebApplicationFirewallLogs |Web Application Firewall Logs |No |
--
-## Microsoft.Cdn/profiles
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|AzureCdnAccessLog |Azure Cdn Access Log |No |
-|FrontDoorAccessLog |FrontDoor Access Log |Yes |
-|FrontDoorHealthProbeLog |FrontDoor Health Probe Log |Yes |
-|FrontDoorWebApplicationFirewallLog |FrontDoor WebApplicationFirewall Log |Yes |
--
-## Microsoft.Cdn/profiles/endpoints
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|CoreAnalytics |Gets the metrics of the endpoint, e.g., bandwidth, egress, etc. |No |
--
-## Microsoft.Chaos/experiments
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|ExperimentOrchestration |Experiment Orchestration Events |Yes |
--
-## Microsoft.ClassicNetwork/networksecuritygroups
-<!-- Data source : arm-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|Network Security Group Rule Flow Event |Network Security Group Rule Flow Event |No |
--
-## Microsoft.CodeSigning/codesigningaccounts
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|SignTransactions |Sign Transactions |Yes |
--
-## Microsoft.CognitiveServices/accounts
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|Audit |Audit Logs |No |
-|RequestResponse |Request and Response Logs |No |
-|Trace |Trace Logs |No |
--
-## Microsoft.Communication/CommunicationServices
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|AuthOperational |Operational Authentication Logs |Yes |
-|CallAutomationOperational |Operational Call Automation Logs |Yes |
-|CallDiagnostics |Call Diagnostics Logs |Yes |
-|CallRecordingSummary |Call Recording Summary Logs |Yes |
-|CallSummary |Call Summary Logs |Yes |
-|ChatOperational |Operational Chat Logs |No |
-|EmailSendMailOperational |Email Service Send Mail Logs |Yes |
-|EmailStatusUpdateOperational |Email Service Delivery Status Update Logs |Yes |
-|EmailUserEngagementOperational |Email Service User Engagement Logs |Yes |
-|JobRouterOperational |Operational Job Router Logs |Yes |
-|NetworkTraversalDiagnostics |Network Traversal Relay Diagnostic Logs |Yes |
-|NetworkTraversalOperational |Operational Network Traversal Logs |Yes |
-|RoomsOperational |Operational Rooms Logs |Yes |
-|SMSOperational |Operational SMS Logs |No |
-|Usage |Usage Records |No |
--
-## Microsoft.Compute/virtualMachines
-<!-- Data source : arm-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|SoftwareUpdateProfile |SoftwareUpdateProfile |Yes |
-|SoftwareUpdates |SoftwareUpdates |Yes |
--
-## Microsoft.ConfidentialLedger/ManagedCCF
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|applicationlogs |CCF Application Logs |Yes |
--
-## Microsoft.ConfidentialLedger/ManagedCCFs
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|applicationlogs |CCF Application Logs |Yes |
--
-## Microsoft.ConnectedCache/CacheNodes
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|Events |Events |Yes |
--
-## Microsoft.ConnectedCache/ispCustomers
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|Events |Events |Yes |
--
-## Microsoft.ConnectedVehicle/platformAccounts
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|Audit |MCVP Audit Logs |Yes |
-|Logs |MCVP Logs |Yes |
--
-## Microsoft.ContainerRegistry/registries
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|ContainerRegistryLoginEvents |Login Events |No |
-|ContainerRegistryRepositoryEvents |RepositoryEvent logs |No |
--
-## Microsoft.ContainerService/fleets
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|cloud-controller-manager |Kubernetes Cloud Controller Manager |Yes |
-|cluster-autoscaler |Kubernetes Cluster Autoscaler |Yes |
-|csi-azuredisk-controller |csi-azuredisk-controller |Yes |
-|csi-azurefile-controller |csi-azurefile-controller |Yes |
-|csi-snapshot-controller |csi-snapshot-controller |Yes |
-|guard |guard |Yes |
-|kube-apiserver |Kubernetes API Server |Yes |
-|kube-audit |Kubernetes Audit |Yes |
-|kube-audit-admin |Kubernetes Audit Admin Logs |Yes |
-|kube-controller-manager |Kubernetes Controller Manager |Yes |
-|kube-scheduler |Kubernetes Scheduler |Yes |
--
-## Microsoft.ContainerService/managedClusters
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|cloud-controller-manager |Kubernetes Cloud Controller Manager |Yes |
-|cluster-autoscaler |Kubernetes Cluster Autoscaler |No |
-|csi-azuredisk-controller |csi-azuredisk-controller |Yes |
-|csi-azurefile-controller |csi-azurefile-controller |Yes |
-|csi-snapshot-controller |csi-snapshot-controller |Yes |
-|guard |guard |No |
-|kube-apiserver |Kubernetes API Server |No |
-|kube-audit |Kubernetes Audit |No |
-|kube-audit-admin |Kubernetes Audit Admin Logs |No |
-|kube-controller-manager |Kubernetes Controller Manager |No |
-|kube-scheduler |Kubernetes Scheduler |No |
--
-## Microsoft.CustomProviders/resourceproviders
-<!-- Data source : arm-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|AuditLogs |Audit logs for MiniRP calls |No |
--
-## Microsoft.D365CustomerInsights/instances
-<!-- Data source : arm-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|Audit |Audit events |No |
-|Operational |Operational events |No |
--
-## Microsoft.Dashboard/grafana
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|GrafanaLoginEvents |Grafana Login Events |Yes |
--
-## Microsoft.Databricks/workspaces
-<!-- Data source : arm-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|accounts |Databricks Accounts |No |
-|capsule8Dataplane |Databricks Capsule8 Container Security Scanning Reports |Yes |
-|clamAVScan |Databricks Clam AV Scan |Yes |
-|clusterLibraries |Databricks Cluster Libraries |Yes |
-|clusters |Databricks Clusters |No |
-|databrickssql |Databricks DatabricksSQL |Yes |
-|dbfs |Databricks File System |No |
-|deltaPipelines |Databricks Delta Pipelines |Yes |
-|featureStore |Databricks Feature Store |Yes |
-|genie |Databricks Genie |Yes |
-|gitCredentials |Databricks Git Credentials |Yes |
-|globalInitScripts |Databricks Global Init Scripts |Yes |
-|iamRole |Databricks IAM Role |Yes |
-|instancePools |Instance Pools |No |
-|jobs |Databricks Jobs |No |
-|mlflowAcledArtifact |Databricks MLFlow Acled Artifact |Yes |
-|mlflowExperiment |Databricks MLFlow Experiment |Yes |
-|modelRegistry |Databricks Model Registry |Yes |
-|notebook |Databricks Notebook |No |
-|partnerHub |Databricks Partner Hub |Yes |
-|RemoteHistoryService |Databricks Remote History Service |Yes |
-|repos |Databricks Repos |Yes |
-|secrets |Databricks Secrets |No |
-|serverlessRealTimeInference |Databricks Serverless Real-Time Inference |Yes |
-|sqlanalytics |Databricks SQL Analytics |Yes |
-|sqlPermissions |Databricks SQLPermissions |No |
-|ssh |Databricks SSH |No |
-|unityCatalog |Databricks Unity Catalog |Yes |
-|webTerminal |Databricks Web Terminal |Yes |
-|workspace |Databricks Workspace |No |
--
-## Microsoft.DataCollaboration/workspaces
-<!-- Data source : arm-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|CollaborationAudit |Collaboration Audit |Yes |
-|Computations |Computations |Yes |
-|DataAssets |Data Assets |No |
-|Pipelines |Pipelines |No |
-|Proposals |Proposals |No |
-|Scripts |Scripts |No |
--
-## Microsoft.DataFactory/factories
-<!-- Data source : arm-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|ActivityRuns |Pipeline activity runs log |No |
-|AirflowDagProcessingLogs |Airflow dag processing logs |Yes |
-|AirflowSchedulerLogs |Airflow scheduler logs |Yes |
-|AirflowTaskLogs |Airflow task execution logs |Yes |
-|AirflowWebLogs |Airflow web logs |Yes |
-|AirflowWorkerLogs |Airflow worker logs |Yes |
-|PipelineRuns |Pipeline runs log |No |
-|SandboxActivityRuns |Sandbox Activity runs log |Yes |
-|SandboxPipelineRuns |Sandbox Pipeline runs log |Yes |
-|SSISIntegrationRuntimeLogs |SSIS integration runtime logs |No |
-|SSISPackageEventMessageContext |SSIS package event message context |No |
-|SSISPackageEventMessages |SSIS package event messages |No |
-|SSISPackageExecutableStatistics |SSIS package executable statistics |No |
-|SSISPackageExecutionComponentPhases |SSIS package execution component phases |No |
-|SSISPackageExecutionDataStatistics |SSIS package execution data statistics |No |
-|TriggerRuns |Trigger runs log |No |
--
-## Microsoft.DataLakeAnalytics/accounts
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|Audit |Audit Logs |No |
-|ConfigurationChange |Configuration Change Event Logs |Yes |
-|JobEvent |Job Event Logs |Yes |
-|JobInfo |Job Info Logs |Yes |
-|Requests |Request Logs |No |
--
-## Microsoft.DataLakeStore/accounts
-<!-- Data source : arm-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|Audit |Audit Logs |No |
-|Requests |Request Logs |No |
--
-## Microsoft.DataProtection/BackupVaults
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|AddonAzureBackupJobs |Addon Azure Backup Job Data |Yes |
-|AddonAzureBackupPolicy |Addon Azure Backup Policy Data |Yes |
-|AddonAzureBackupProtectedInstance |Addon Azure Backup Protected Instance Data |Yes |
-|CoreAzureBackup |Core Azure Backup Data |Yes |
--
-## Microsoft.DataShare/accounts
-<!-- Data source : arm-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|ReceivedShareSnapshots |Received Share Snapshots |No |
-|SentShareSnapshots |Sent Share Snapshots |No |
-|Shares |Shares |No |
-|ShareSubscriptions |Share Subscriptions |No |
--
-## Microsoft.DBforMariaDB/servers
-<!-- Data source : arm-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|MySqlAuditLogs |MariaDB Audit Logs |No |
-|MySqlSlowLogs |MariaDB Server Logs |No |
--
-## Microsoft.DBforMySQL/flexibleServers
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|MySqlAuditLogs |MySQL Audit Logs |No |
-|MySqlSlowLogs |MySQL Slow Logs |No |
--
-## Microsoft.DBforMySQL/servers
-<!-- Data source : arm-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|MySqlAuditLogs |MySQL Audit Logs |No |
-|MySqlSlowLogs |MySQL Server Logs |No |
--
-## Microsoft.DBforPostgreSQL/flexibleServers
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|PostgreSQLFlexDatabaseXacts |PostgreSQL remaining transactions |Yes |
-|PostgreSQLFlexQueryStoreRuntime |PostgreSQL Query Store Runtime |Yes |
-|PostgreSQLFlexQueryStoreWaitStats |PostgreSQL Query Store Wait Statistics |Yes |
-|PostgreSQLFlexSessions |PostgreSQL Sessions data |Yes |
-|PostgreSQLFlexTableStats |PostgreSQL Autovacuum and schema statistics |Yes |
-|PostgreSQLLogs |PostgreSQL Server Logs |No |
--
-## Microsoft.DBForPostgreSQL/serverGroupsv2
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|PostgreSQLLogs |PostgreSQL Server Logs |Yes |
--
-## Microsoft.DBforPostgreSQL/servers
-<!-- Data source : arm-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|PostgreSQLLogs |PostgreSQL Server Logs |No |
-|QueryStoreRuntimeStatistics |PostgreSQL Query Store Runtime Statistics |No |
-|QueryStoreWaitStatistics |PostgreSQL Query Store Wait Statistics |No |
--
-## Microsoft.DBforPostgreSQL/serversv2
-<!-- Data source : arm-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|PostgreSQLLogs |PostgreSQL Server Logs |No |
--
-## Microsoft.DesktopVirtualization/applicationgroups
-<!-- Data source : arm-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|Checkpoint |Checkpoint |No |
-|Error |Error |No |
-|Management |Management |No |
--
-## Microsoft.DesktopVirtualization/hostpools
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|AgentHealthStatus |AgentHealthStatus |No |
-|AutoscaleEvaluationPooled |Do not use - internal testing |Yes |
-|Checkpoint |Checkpoint |No |
-|Connection |Connection |No |
-|ConnectionGraphicsData |Connection Graphics Data Logs Preview |Yes |
-|Error |Error |No |
-|HostRegistration |HostRegistration |No |
-|Management |Management |No |
-|NetworkData |Network Data Logs |Yes |
-|SessionHostManagement |Session Host Management Activity Logs |Yes |
--
-## Microsoft.DesktopVirtualization/scalingplans
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|Autoscale |Autoscale logs |Yes |
--
-## Microsoft.DesktopVirtualization/workspaces
-<!-- Data source : arm-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|Checkpoint |Checkpoint |No |
-|Error |Error |No |
-|Feed |Feed |No |
-|Management |Management |No |
--
-## Microsoft.DevCenter/devcenters
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|DataplaneAuditEvent |Dataplane audit logs |Yes |
--
-## Microsoft.Devices/IotHubs
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|C2DCommands |C2D Commands |No |
-|C2DTwinOperations |C2D Twin Operations |No |
-|Configurations |Configurations |No |
-|Connections |Connections |No |
-|D2CTwinOperations |D2CTwinOperations |No |
-|DeviceIdentityOperations |Device Identity Operations |No |
-|DeviceStreams |Device Streams (Preview) |No |
-|DeviceTelemetry |Device Telemetry |No |
-|DirectMethods |Direct Methods |No |
-|DistributedTracing |Distributed Tracing (Preview) |No |
-|FileUploadOperations |File Upload Operations |No |
-|JobsOperations |Jobs Operations |No |
-|Routes |Routes |No |
-|TwinQueries |Twin Queries |No |
--
-## Microsoft.Devices/provisioningServices
-<!-- Data source : arm-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|DeviceOperations |Device Operations |No |
-|ServiceOperations |Service Operations |No |
--
-## Microsoft.DigitalTwins/digitalTwinsInstances
-<!-- Data source : arm-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|DataHistoryOperation |DataHistoryOperation |Yes |
-|DigitalTwinsOperation |DigitalTwinsOperation |No |
-|EventRoutesOperation |EventRoutesOperation |No |
-|ModelsOperation |ModelsOperation |No |
-|QueryOperation |QueryOperation |No |
-|ResourceProviderOperation |ResourceProviderOperation |Yes |
--
-## Microsoft.DocumentDB/cassandraClusters
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|CassandraAudit |CassandraAudit |Yes |
-|CassandraLogs |CassandraLogs |Yes |
--
-## Microsoft.DocumentDB/DatabaseAccounts
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|CassandraRequests |CassandraRequests |No |
-|ControlPlaneRequests |ControlPlaneRequests |No |
-|DataPlaneRequests |DataPlaneRequests |No |
-|GremlinRequests |GremlinRequests |No |
-|MongoRequests |MongoRequests |No |
-|PartitionKeyRUConsumption |PartitionKeyRUConsumption |No |
-|PartitionKeyStatistics |PartitionKeyStatistics |No |
-|QueryRuntimeStatistics |QueryRuntimeStatistics |No |
-|TableApiRequests |TableApiRequests |Yes |
--
-## Microsoft.EventGrid/domains
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|DataPlaneRequests |Data plane operations logs |Yes |
-|DeliveryFailures |Delivery Failure Logs |No |
-|PublishFailures |Publish Failure Logs |No |
--
-## Microsoft.EventGrid/partnerNamespaces
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|DataPlaneRequests |Data plane operations logs |Yes |
-|PublishFailures |Publish Failure Logs |No |
--
-## Microsoft.EventGrid/partnerTopics
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|DeliveryFailures |Delivery Failure Logs |No |
--
-## Microsoft.EventGrid/systemTopics
-<!-- Data source : arm-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|DeliveryFailures |Delivery Failure Logs |No |
--
-## Microsoft.EventGrid/topics
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|DataPlaneRequests |Data plane operations logs |Yes |
-|DeliveryFailures |Delivery Failure Logs |No |
-|PublishFailures |Publish Failure Logs |No |
--
-## Microsoft.EventHub/Namespaces
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|ApplicationMetricsLogs |Application Metrics Logs |Yes |
-|ArchiveLogs |Archive Logs |No |
-|AutoScaleLogs |Auto Scale Logs |No |
-|CustomerManagedKeyUserLogs |Customer Managed Key Logs |No |
-|EventHubVNetConnectionEvent |VNet/IP Filtering Connection Logs |No |
-|KafkaCoordinatorLogs |Kafka Coordinator Logs |No |
-|KafkaUserErrorLogs |Kafka User Error Logs |No |
-|OperationalLogs |Operational Logs |No |
-|RuntimeAuditLogs |Runtime Audit Logs |Yes |
--
-## Microsoft.HealthcareApis/services
-<!-- Data source : arm-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|AuditLogs |Audit logs |No |
-|DiagnosticLogs |Diagnostic logs |Yes |
--
-## Microsoft.HealthcareApis/workspaces/analyticsconnectors
-<!-- Data source : arm-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|DiagnosticLogs |Diagnostic logs for Analytics Connector |Yes |
--
-## Microsoft.HealthcareApis/workspaces/dicomservices
-<!-- Data source : arm-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|AuditLogs |Audit logs |Yes |
-|DiagnosticLogs |Diagnostic logs |Yes |
--
-## Microsoft.HealthcareApis/workspaces/fhirservices
-<!-- Data source : arm-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|AuditLogs |FHIR Audit logs |Yes |
--
-## Microsoft.HealthcareApis/workspaces/iotconnectors
-<!-- Data source : arm-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|DiagnosticLogs |Diagnostic logs |Yes |
--
-## microsoft.insights/autoscalesettings
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|AutoscaleEvaluations |Autoscale Evaluations |No |
-|AutoscaleScaleActions |Autoscale Scale Actions |No |
--
-## microsoft.insights/components
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|AppAvailabilityResults |Availability results |No |
-|AppBrowserTimings |Browser timings |No |
-|AppDependencies |Dependencies |No |
-|AppEvents |Events |No |
-|AppExceptions |Exceptions |No |
-|AppMetrics |Metrics |No |
-|AppPageViews |Page views |No |
-|AppPerformanceCounters |Performance counters |No |
-|AppRequests |Requests |No |
-|AppSystemEvents |System events |No |
-|AppTraces |Traces |No |
--
-## microsoft.keyvault/managedhsms
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|AuditEvent |Audit Event |No |
--
-## Microsoft.KeyVault/vaults
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|AuditEvent |Audit Logs |No |
-|AzurePolicyEvaluationDetails |Azure Policy Evaluation Details |Yes |
--
-## Microsoft.Kusto/clusters
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|Command |Command |No |
-|FailedIngestion |Failed ingestion |No |
-|IngestionBatching |Ingestion batching |No |
-|Journal |Journal |Yes |
-|Query |Query |No |
-|SucceededIngestion |Succeeded ingestion |No |
-|TableDetails |Table details |No |
-|TableUsageStatistics |Table usage statistics |No |
--
-## microsoft.loadtestservice/loadtests
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|OperationLogs |Azure Load Testing Operations |Yes |
--
-## Microsoft.Logic/IntegrationAccounts
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|IntegrationAccountTrackingEvents |Integration Account track events |No |
--
-## Microsoft.Logic/Workflows
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|WorkflowRuntime |Workflow runtime diagnostic events |No |
--
-## Microsoft.MachineLearningServices/registries
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|RegistryAssetReadEvent |Registry Asset Read Event |Yes |
-|RegistryAssetWriteEvent |Registry Asset Write Event |Yes |
--
-## Microsoft.MachineLearningServices/workspaces
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|AmlComputeClusterEvent |AmlComputeClusterEvent |No |
-|AmlComputeClusterNodeEvent |AmlComputeClusterNodeEvent |Yes |
-|AmlComputeCpuGpuUtilization |AmlComputeCpuGpuUtilization |No |
-|AmlComputeJobEvent |AmlComputeJobEvent |No |
-|AmlRunStatusChangedEvent |AmlRunStatusChangedEvent |No |
-|ComputeInstanceEvent |ComputeInstanceEvent |Yes |
-|DataLabelChangeEvent |DataLabelChangeEvent |Yes |
-|DataLabelReadEvent |DataLabelReadEvent |Yes |
-|DataSetChangeEvent |DataSetChangeEvent |Yes |
-|DataSetReadEvent |DataSetReadEvent |Yes |
-|DataStoreChangeEvent |DataStoreChangeEvent |Yes |
-|DataStoreReadEvent |DataStoreReadEvent |Yes |
-|DeploymentEventACI |DeploymentEventACI |Yes |
-|DeploymentEventAKS |DeploymentEventAKS |Yes |
-|DeploymentReadEvent |DeploymentReadEvent |Yes |
-|EnvironmentChangeEvent |EnvironmentChangeEvent |Yes |
-|EnvironmentReadEvent |EnvironmentReadEvent |Yes |
-|InferencingOperationACI |InferencingOperationACI |Yes |
-|InferencingOperationAKS |InferencingOperationAKS |Yes |
-|ModelsActionEvent |ModelsActionEvent |Yes |
-|ModelsChangeEvent |ModelsChangeEvent |Yes |
-|ModelsReadEvent |ModelsReadEvent |Yes |
-|PipelineChangeEvent |PipelineChangeEvent |Yes |
-|PipelineReadEvent |PipelineReadEvent |Yes |
-|RunEvent |RunEvent |Yes |
-|RunReadEvent |RunReadEvent |Yes |
--
-## Microsoft.MachineLearningServices/workspaces/onlineEndpoints
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|AmlOnlineEndpointConsoleLog |AmlOnlineEndpointConsoleLog |Yes |
-|AmlOnlineEndpointEventLog |AmlOnlineEndpointEventLog (preview) |Yes |
-|AmlOnlineEndpointTrafficLog |AmlOnlineEndpointTrafficLog (preview) |Yes |
--
-## Microsoft.ManagedNetworkFabric/networkDevices
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|AppAvailabilityResults |Availability results |Yes |
-|AppBrowserTimings |Browser timings |Yes |
--
-## Microsoft.Media/mediaservices
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|KeyDeliveryRequests |Key Delivery Requests |No |
-|MediaAccount |Media Account Health Status |Yes |
--
-## Microsoft.Media/mediaservices/liveEvents
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|LiveEventState |Live Event Operations |Yes |
--
-## Microsoft.Media/mediaservices/streamingEndpoints
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|StreamingEndpointRequests |Streaming Endpoint Requests |Yes |
--
-## Microsoft.Media/videoanalyzers
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|Audit |Audit Logs |Yes |
-|Diagnostics |Diagnostics Logs |Yes |
-|Operational |Operational Logs |Yes |
--
-## Microsoft.NetApp/netAppAccounts/capacityPools
-<!-- Data source : arm-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|Autoscale |Capacity Pool Autoscaled |Yes |
--
-## Microsoft.NetApp/netAppAccounts/capacityPools/volumes
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|ANFFileAccess |ANF File Access |Yes |
--
-## Microsoft.Network/applicationgateways
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|ApplicationGatewayAccessLog |Application Gateway Access Log |No |
-|ApplicationGatewayFirewallLog |Application Gateway Firewall Log |No |
-|ApplicationGatewayPerformanceLog |Application Gateway Performance Log |No |
--
-## Microsoft.Network/azureFirewalls
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|AZFWApplicationRule |Azure Firewall Application Rule |Yes |
-|AZFWApplicationRuleAggregation |Azure Firewall Application Rule Aggregation (Policy Analytics) |Yes |
-|AZFWDnsQuery |Azure Firewall DNS query |Yes |
-|AZFWFatFlow |Azure Firewall Fat Flow Log |Yes |
-|AZFWFlowTrace |Azure Firewall Flow Trace Log |Yes |
-|AZFWFqdnResolveFailure |Azure Firewall FQDN Resolution Failure |Yes |
-|AZFWIdpsSignature |Azure Firewall IDPS Signature |Yes |
-|AZFWNatRule |Azure Firewall Nat Rule |Yes |
-|AZFWNatRuleAggregation |Azure Firewall Nat Rule Aggregation (Policy Analytics) |Yes |
-|AZFWNetworkRule |Azure Firewall Network Rule |Yes |
-|AZFWNetworkRuleAggregation |Azure Firewall Network Rule Aggregation (Policy Analytics) |Yes |
-|AZFWThreatIntel |Azure Firewall Threat Intelligence |Yes |
-|AzureFirewallApplicationRule |Azure Firewall Application Rule (Legacy Azure Diagnostics) |No |
-|AzureFirewallDnsProxy |Azure Firewall DNS Proxy (Legacy Azure Diagnostics) |No |
-|AzureFirewallNetworkRule |Azure Firewall Network Rule (Legacy Azure Diagnostics) |No |
--
-## microsoft.network/bastionHosts
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|BastionAuditLogs |Bastion Audit Logs |No |
--
-## Microsoft.Network/expressRouteCircuits
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|PeeringRouteLog |Peering Route Table Logs |No |
--
-## Microsoft.Network/frontdoors
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|FrontdoorAccessLog |Frontdoor Access Log |No |
-|FrontdoorWebApplicationFirewallLog |Frontdoor Web Application Firewall Log |No |
--
-## Microsoft.Network/loadBalancers
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|LoadBalancerAlertEvent |Load Balancer Alert Events |No |
-|LoadBalancerProbeHealthStatus |Load Balancer Probe Health Status |No |
--
-## Microsoft.Network/networkManagers
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|NetworkGroupMembershipChange |Network Group Membership Change |Yes |
--
-## Microsoft.Network/networksecuritygroups
-<!-- Data source : arm-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|NetworkSecurityGroupEvent |Network Security Group Event |No |
-|NetworkSecurityGroupFlowEvent |Network Security Group Rule Flow Event |No |
-|NetworkSecurityGroupRuleCounter |Network Security Group Rule Counter |No |
--
-## Microsoft.Network/networkSecurityPerimeters
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|NspCrossPerimeterInboundAllowed |Cross perimeter inbound access allowed by perimeter link. |Yes |
-|NspCrossPerimeterOutboundAllowed |Cross perimeter outbound access allowed by perimeter link. |Yes |
-|NspIntraPerimeterInboundAllowed |Inbound access allowed within same perimeter. |Yes |
-|NspIntraPerimeterOutboundAllowed |Outbound attempted to same perimeter. NOTE: To be deprecated in future. |Yes |
-|NspOutboundAttempt |Outbound attempted to same or different perimeter. |Yes |
-|NspPrivateInboundAllowed |Private endpoint traffic allowed. |Yes |
-|NspPublicInboundPerimeterRulesAllowed |Public inbound access allowed by NSP access rules. |Yes |
-|NspPublicInboundPerimeterRulesDenied |Public inbound access denied by NSP access rules. |Yes |
-|NspPublicInboundResourceRulesAllowed |Public inbound access allowed by PaaS resource rules. |Yes |
-|NspPublicInboundResourceRulesDenied |Public inbound access denied by PaaS resource rules. |Yes |
-|NspPublicOutboundPerimeterRulesAllowed |Public outbound access allowed by NSP access rules. |Yes |
-|NspPublicOutboundPerimeterRulesDenied |Public outbound access denied by NSP access rules. |Yes |
-|NspPublicOutboundResourceRulesAllowed |Public outbound access allowed by PaaS resource rules. |Yes |
-|NspPublicOutboundResourceRulesDenied |Public outbound access denied by PaaS resource rules |Yes |
--
-## Microsoft.Network/networkSecurityPerimeters/profiles
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|NSPInboundAccessAllowed |NSP Inbound Access Allowed. |Yes |
-|NSPInboundAccessDenied |NSP Inbound Access Denied. |Yes |
-|NSPOutboundAccessAllowed |NSP Outbound Access Allowed. |Yes |
-|NSPOutboundAccessDenied |NSP Outbound Access Denied. |Yes |
--
-## microsoft.network/p2svpngateways
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|GatewayDiagnosticLog |Gateway Diagnostic Logs |No |
-|IKEDiagnosticLog |IKE Diagnostic Logs |No |
-|P2SDiagnosticLog |P2S Diagnostic Logs |No |
--
-## Microsoft.Network/publicIPAddresses
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|DDoSMitigationFlowLogs |Flow logs of DDoS mitigation decisions |No |
-|DDoSMitigationReports |Reports of DDoS mitigations |No |
-|DDoSProtectionNotifications |DDoS protection notifications |No |
--
-## Microsoft.Network/trafficManagerProfiles
-<!-- Data source : arm-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|ProbeHealthStatusEvents |Traffic Manager Probe Health Results Event |No |
--
-## microsoft.network/virtualnetworkgateways
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|GatewayDiagnosticLog |Gateway Diagnostic Logs |No |
-|IKEDiagnosticLog |IKE Diagnostic Logs |No |
-|P2SDiagnosticLog |P2S Diagnostic Logs |No |
-|RouteDiagnosticLog |Route Diagnostic Logs |No |
-|TunnelDiagnosticLog |Tunnel Diagnostic Logs |No |
--
-## Microsoft.Network/virtualNetworks
-<!-- Data source : arm-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|VMProtectionAlerts |VM protection alerts |No |
--
-## microsoft.network/vpngateways
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|GatewayDiagnosticLog |Gateway Diagnostic Logs |No |
-|IKEDiagnosticLog |IKE Diagnostic Logs |No |
-|RouteDiagnosticLog |Route Diagnostic Logs |No |
-|TunnelDiagnosticLog |Tunnel Diagnostic Logs |No |
--
-## Microsoft.NetworkFunction/azureTrafficCollectors
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|ExpressRouteCircuitIpfix |Express Route Circuit IPFIX Flow Records |Yes |
--
-## Microsoft.NotificationHubs/namespaces
-<!-- Data source : arm-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|OperationalLogs |Operational Logs |No |
--
-## MICROSOFT.OPENENERGYPLATFORM/ENERGYSERVICES
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|AirFlowTaskLogs |Air Flow Task Logs |Yes |
-|ElasticOperatorLogs |Elastic Operator Logs |Yes |
-|ElasticsearchLogs |Elasticsearch Logs |Yes |
--
-## Microsoft.OpenLogisticsPlatform/Workspaces
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|SupplyChainEntityOperations |Supply Chain Entity Operations |Yes |
-|SupplyChainEventLogs |Supply Chain Event logs |Yes |
--
-## Microsoft.OperationalInsights/workspaces
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|Audit |Audit |No |
--
-## Microsoft.PlayFab/titles
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|AuditLogs |AuditLogs |Yes |
--
-## Microsoft.PowerBI/tenants
-<!-- Data source : arm-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|Engine |Engine |No |
--
-## Microsoft.PowerBI/tenants/workspaces
-<!-- Data source : arm-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|Engine |Engine |No |
--
-## Microsoft.PowerBIDedicated/capacities
-<!-- Data source : arm-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|Engine |Engine |No |
--
-## microsoft.purview/accounts
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|DataSensitivityLogEvent |DataSensitivity |Yes |
-|ScanStatusLogEvent |ScanStatus |No |
-|Security |PurviewAccountAuditEvents |Yes |
--
-## Microsoft.RecoveryServices/Vaults
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|AddonAzureBackupAlerts |Addon Azure Backup Alert Data |No |
-|AddonAzureBackupJobs |Addon Azure Backup Job Data |No |
-|AddonAzureBackupPolicy |Addon Azure Backup Policy Data |No |
-|AddonAzureBackupProtectedInstance |Addon Azure Backup Protected Instance Data |No |
-|AddonAzureBackupStorage |Addon Azure Backup Storage Data |No |
-|ASRReplicatedItems |Azure Site Recovery Replicated Items Details |Yes |
-|AzureBackupReport |Azure Backup Reporting Data |No |
-|AzureSiteRecoveryEvents |Azure Site Recovery Events |No |
-|AzureSiteRecoveryJobs |Azure Site Recovery Jobs |No |
-|AzureSiteRecoveryProtectedDiskDataChurn |Azure Site Recovery Protected Disk Data Churn |No |
-|AzureSiteRecoveryRecoveryPoints |Azure Site Recovery Recovery Points |No |
-|AzureSiteRecoveryReplicatedItems |Azure Site Recovery Replicated Items |No |
-|AzureSiteRecoveryReplicationDataUploadRate |Azure Site Recovery Replication Data Upload Rate |No |
-|AzureSiteRecoveryReplicationStats |Azure Site Recovery Replication Stats |No |
-|CoreAzureBackup |Core Azure Backup Data |No |
--
-## Microsoft.Relay/namespaces
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|HybridConnectionsEvent |HybridConnections Events |No |
-|HybridConnectionsLogs |HybridConnectionsLogs |Yes |
--
-## Microsoft.Search/searchServices
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|OperationLogs |Operation Logs |No |
--
-## Microsoft.Security/antiMalwareSettings
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|ScanResults |AntimalwareScanResults |Yes |
--
-## Microsoft.Security/defenderForStorageSettings
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|ScanResults |AntimalwareScanResults |Yes |
--
-## microsoft.securityinsights/settings
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|Analytics |Analytics |Yes |
-|Automation |Automation |Yes |
-|DataConnectors |Data Collection - Connectors |Yes |
--
-## Microsoft.ServiceBus/Namespaces
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|ApplicationMetricsLogs |Application Metrics Logs(Unused) |Yes |
-|OperationalLogs |Operational Logs |No |
-|RuntimeAuditLogs |Runtime Audit Logs |Yes |
-|VNetAndIPFilteringLogs |VNet/IP Filtering Connection Logs |No |
--
-## Microsoft.SignalRService/SignalR
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|AllLogs |Azure SignalR Service Logs. |No |
--
-## Microsoft.SignalRService/WebPubSub
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|ConnectivityLogs |Connectivity logs for Azure Web PubSub Service. |Yes |
-|HttpRequestLogs |Http Request logs for Azure Web PubSub Service. |Yes |
-|MessagingLogs |Messaging logs for Azure Web PubSub Service. |Yes |
--
-## microsoft.singularity/accounts
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|Activity |Activity Logs |Yes |
-|Execution |Execution Logs |Yes |
--
-## Microsoft.Sql/managedInstances
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|DevOpsOperationsAudit |Devops operations Audit Logs |No |
-|ResourceUsageStats |Resource Usage Statistics |No |
-|SQLSecurityAuditEvents |SQL Security Audit Event |No |
--
-## Microsoft.Sql/managedInstances/databases
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|Errors |Errors |No |
-|QueryStoreRuntimeStatistics |Query Store Runtime Statistics |No |
-|QueryStoreWaitStatistics |Query Store Wait Statistics |No |
-|SQLInsights |SQL Insights |No |
--
-## Microsoft.Sql/servers/databases
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|AutomaticTuning |Automatic tuning |No |
-|Blocks |Blocks |No |
-|DatabaseWaitStatistics |Database Wait Statistics |No |
-|Deadlocks |Deadlocks |No |
-|DevOpsOperationsAudit |Devops operations Audit Logs |No |
-|DmsWorkers |Dms Workers |No |
-|Errors |Errors |No |
-|ExecRequests |Exec Requests |No |
-|QueryStoreRuntimeStatistics |Query Store Runtime Statistics |No |
-|QueryStoreWaitStatistics |Query Store Wait Statistics |No |
-|RequestSteps |Request Steps |No |
-|SQLInsights |SQL Insights |No |
-|SqlRequests |Sql Requests |No |
-|SQLSecurityAuditEvents |SQL Security Audit Event |No |
-|Timeouts |Timeouts |No |
-|Waits |Waits |No |
--
-## Microsoft.Storage/storageAccounts/blobServices
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|StorageDelete |StorageDelete |Yes |
-|StorageRead |StorageRead |Yes |
-|StorageWrite |StorageWrite |Yes |
--
-## Microsoft.Storage/storageAccounts/fileServices
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|StorageDelete |StorageDelete |Yes |
-|StorageRead |StorageRead |Yes |
-|StorageWrite |StorageWrite |Yes |
--
-## Microsoft.Storage/storageAccounts/queueServices
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|StorageDelete |StorageDelete |Yes |
-|StorageRead |StorageRead |Yes |
-|StorageWrite |StorageWrite |Yes |
--
-## Microsoft.Storage/storageAccounts/tableServices
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|StorageDelete |StorageDelete |Yes |
-|StorageRead |StorageRead |Yes |
-|StorageWrite |StorageWrite |Yes |
--
-## Microsoft.StorageCache/caches
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|AscCacheOperationEvent |HPC Cache operation event |Yes |
-|AscUpgradeEvent |HPC Cache upgrade event |Yes |
-|AscWarningEvent |HPC Cache warning |Yes |
--
-## Microsoft.StorageMover/storageMovers
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|CopyLogsFailed |Copy logs - Failed |Yes |
-|JobRunLogs |Job run logs |Yes |
--
-## Microsoft.StreamAnalytics/streamingjobs
-<!-- Data source : arm-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|Authoring |Authoring |No |
-|Execution |Execution |No |
--
-## Microsoft.Synapse/workspaces
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|BuiltinSqlReqsEnded |Built-in Sql Pool Requests Ended |No |
-|GatewayApiRequests |Synapse Gateway Api Requests |No |
-|IntegrationActivityRuns |Integration Activity Runs |Yes |
-|IntegrationPipelineRuns |Integration Pipeline Runs |Yes |
-|IntegrationTriggerRuns |Integration Trigger Runs |Yes |
-|SQLSecurityAuditEvents |SQL Security Audit Event |No |
-|SynapseLinkEvent |Synapse Link Event |Yes |
-|SynapseRbacOperations |Synapse RBAC Operations |No |
--
-## Microsoft.Synapse/workspaces/bigDataPools
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|BigDataPoolAppEvents |Big Data Pool Applications Execution Metrics |Yes |
-|BigDataPoolAppsEnded |Big Data Pool Applications Ended |No |
-|BigDataPoolBlockManagerEvents |Big Data Pool Block Manager Events |Yes |
-|BigDataPoolDriverLogs |Big Data Pool Driver Logs |Yes |
-|BigDataPoolEnvironmentEvents |Big Data Pool Environment Events |Yes |
-|BigDataPoolExecutorEvents |Big Data Pool Executor Events |Yes |
-|BigDataPoolExecutorLogs |Big Data Pool Executor Logs |Yes |
-|BigDataPoolJobEvents |Big Data Pool Job Events |Yes |
-|BigDataPoolSqlExecutionEvents |Big Data Pool Sql Execution Events |Yes |
-|BigDataPoolStageEvents |Big Data Pool Stage Events |Yes |
-|BigDataPoolTaskEvents |Big Data Pool Task Events |Yes |
--
-## Microsoft.Synapse/workspaces/kustoPools
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|Command |Synapse Data Explorer Command |Yes |
-|FailedIngestion |Synapse Data Explorer Failed Ingestion |Yes |
-|IngestionBatching |Synapse Data Explorer Ingestion Batching |Yes |
-|Query |Synapse Data Explorer Query |Yes |
-|SucceededIngestion |Synapse Data Explorer Succeeded Ingestion |Yes |
-|TableDetails |Synapse Data Explorer Table Details |Yes |
-|TableUsageStatistics |Synapse Data Explorer Table Usage Statistics |Yes |
--
-## Microsoft.Synapse/workspaces/scopePools
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|ScopePoolScopeJobsEnded |Scope Pool Scope Jobs Ended |Yes |
-|ScopePoolScopeJobsStateChange |Scope Pool Scope Jobs State Change |Yes |
--
-## Microsoft.Synapse/workspaces/sqlPools
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|DmsWorkers |Dms Workers |No |
-|ExecRequests |Exec Requests |No |
-|RequestSteps |Request Steps |No |
-|SqlRequests |Sql Requests |No |
-|SQLSecurityAuditEvents |Sql Security Audit Event |No |
-|Waits |Waits |No |
--
-## Microsoft.TimeSeriesInsights/environments
-<!-- Data source : arm-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|Ingress |Ingress |No |
-|Management |Management |No |
--
-## Microsoft.TimeSeriesInsights/environments/eventsources
-<!-- Data source : arm-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|Ingress |Ingress |No |
-|Management |Management |No |
--
-## microsoft.videoindexer/accounts
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|Audit |Audit |Yes |
-|IndexingLogs |Indexing Logs |Yes |
--
-## Microsoft.Web/hostingEnvironments
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|AppServiceEnvironmentPlatformLogs |App Service Environment Platform Logs |No |
--
-## Microsoft.Web/sites
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|AppServiceAntivirusScanAuditLogs |Report Antivirus Audit Logs |No |
-|AppServiceAppLogs |App Service Application Logs |No |
-|AppServiceAuditLogs |Access Audit Logs |No |
-|AppServiceConsoleLogs |App Service Console Logs |No |
-|AppServiceFileAuditLogs |Site Content Change Audit Logs |No |
-|AppServiceHTTPLogs |HTTP logs |No |
-|AppServiceIPSecAuditLogs |IPSecurity Audit logs |No |
-|AppServicePlatformLogs |App Service Platform logs |No |
-|FunctionAppLogs |Function Application Logs |No |
-|WorkflowRuntime |Workflow Runtime Logs |Yes |
--
-## Microsoft.Web/sites/slots
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|AppServiceAntivirusScanAuditLogs |Report Antivirus Audit Logs |No |
-|AppServiceAppLogs |App Service Application Logs |No |
-|AppServiceAuditLogs |Access Audit Logs |No |
-|AppServiceConsoleLogs |App Service Console Logs |No |
-|AppServiceFileAuditLogs |Site Content Change Audit Logs |No |
-|AppServiceHTTPLogs |HTTP logs |No |
-|AppServiceIPSecAuditLogs |IPSecurity Audit logs |No |
-|AppServicePlatformLogs |App Service Platform logs |No |
-|FunctionAppLogs |Function Application Logs |No |
--
-## microsoft.workloads/sapvirtualinstances
-<!-- Data source : naam-->
-
-|Category|Category Display Name|Costs To Export|
-||||
-|ChangeDetection |Change Detection |Yes |
--
-## Next Steps
-
-* [Learn more about resource logs](../essentials/platform-logs-overview.md)
-* [Stream resource logs to **Event Hubs**](./resource-logs.md#send-to-azure-event-hubs)
-* [Change resource log diagnostic settings using the Azure Monitor REST API](/rest/api/monitor/diagnosticsettings)
-* [Analyze logs from Azure storage with Log Analytics](./resource-logs.md#send-to-log-analytics-workspace)
--
-<!--Gen Date: Sun Apr 02 2023 09:56:30 GMT+0300 (Israel Daylight Time)-->
+
+ Title: Supported categories for Azure Monitor resource logs
+description: Understand the supported services and event schemas for Azure Monitor resource logs.
+ Last updated : 04/13/2023
+# Supported categories for Azure Monitor resource logs
+
+> [!NOTE]
+> This list is largely auto-generated. Any modification made to this list via GitHub might be written over without warning. Contact the author of this article for details on how to make permanent updates.
+
+[Azure Monitor resource logs](../essentials/platform-logs-overview.md) are logs emitted by Azure services that describe the operation of those services or resources. All resource logs available through Azure Monitor share a common top-level schema. Each service has the flexibility to emit unique properties for its own events.
+
+Resource logs were previously known as diagnostic logs. The name was changed in October 2019 because the types of logs gathered by Azure Monitor expanded to include more than just logs from the Azure resource itself.
+
+A combination of the resource type (available in the `resourceId` property) and the category uniquely identifies a schema. There's a common schema for all resource logs with service-specific fields then added for different log categories. For more information, see [Common and service-specific schema for Azure resource logs](./resource-logs-schema.md).
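
To make the relationship concrete, here is a minimal sketch, assuming the `azure-identity` and `azure-monitor-query` Python packages, that pulls records for a single resource provider and category from a Log Analytics workspace. The workspace ID is a placeholder, and the `MICROSOFT.KEYVAULT` / `AuditEvent` pair is only an illustrative example; any resource type and category from the tables below could be substituted.

```python
# Sketch: query one resource-log category from a Log Analytics workspace.
# Assumes the azure-identity and azure-monitor-query packages are installed.
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

client = LogsQueryClient(DefaultAzureCredential())

# The resource provider (taken from resourceId) plus the category value
# together select one log schema. The values below are examples only.
query = """
AzureDiagnostics
| where ResourceProvider == "MICROSOFT.KEYVAULT" and Category == "AuditEvent"
| project TimeGenerated, ResourceId, OperationName
| take 10
"""

response = client.query_workspace(
    workspace_id="<log-analytics-workspace-id>",  # placeholder
    query=query,
    timespan=timedelta(days=1),
)

for table in response.tables:
    for row in table.rows:
        print(row)
```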
+
+## Costs
+
+[Azure Monitor Log Analytics](https://azure.microsoft.com/pricing/details/monitor/), [Azure Storage](https://azure.microsoft.com/product-categories/storage/), [Azure Event Hubs](https://azure.microsoft.com/pricing/details/event-hubs/), and partners who integrate directly with Azure Monitor (for example, [Datadog](../../partner-solutions/datadog/overview.md)) have costs associated with ingesting data and storing data. Check the pricing pages linked in the previous sentence to understand the costs for those services. Resource logs are just one type of data that you can send to those locations.
+
+In addition, there might be costs to export some categories of resource logs to those locations. Logs with possible export costs are listed in the table in the next section. For more information on export pricing, see the **Platform Logs** section on the [Azure Monitor pricing page](https://azure.microsoft.com/pricing/details/monitor/).
+
+## Supported log categories per resource type
+
+Following is a list of the types of logs available for each resource type.
+
+Some categories might be supported only for specific types of resources. If a resource or category you expect isn't listed, check the resource-specific documentation. For example, Microsoft.Sql/servers/databases categories aren't available for all types of databases. For more information, see [information on SQL Database diagnostic logging](/azure/azure-sql/database/metrics-diagnostic-telemetry-logging-streaming-export-configure).
+
+If you think something is missing, you can open a GitHub comment at the bottom of this article.
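
To show how a category from these tables is typically consumed, here is a minimal sketch, assuming the `azure-identity` and `azure-mgmt-monitor` Python packages, that creates a diagnostic setting routing a single category to a Log Analytics workspace. The subscription, resource, workspace, and category values are placeholders, and model names can vary between SDK versions.

```python
# Sketch: enable one log category on a resource via a diagnostic setting.
# Assumes the azure-identity and azure-mgmt-monitor packages are installed.
from azure.identity import DefaultAzureCredential
from azure.mgmt.monitor import MonitorManagementClient
from azure.mgmt.monitor.models import DiagnosticSettingsResource, LogSettings

subscription_id = "<subscription-id>"  # placeholder
resource_id = (  # placeholder Key Vault resource ID
    "/subscriptions/<subscription-id>/resourceGroups/<resource-group>"
    "/providers/Microsoft.KeyVault/vaults/<vault-name>"
)
workspace_id = (  # placeholder Log Analytics workspace resource ID
    "/subscriptions/<subscription-id>/resourceGroups/<resource-group>"
    "/providers/Microsoft.OperationalInsights/workspaces/<workspace-name>"
)

client = MonitorManagementClient(DefaultAzureCredential(), subscription_id)

# "AuditEvent" is one of the categories listed for Microsoft.KeyVault/vaults.
client.diagnostic_settings.create_or_update(
    resource_uri=resource_id,
    name="send-audit-to-workspace",
    parameters=DiagnosticSettingsResource(
        workspace_id=workspace_id,
        logs=[LogSettings(category="AuditEvent", enabled=True)],
    ),
)
```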
+
+## Microsoft.AAD/DomainServices
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|AccountLogon |AccountLogon |No |
+|AccountManagement |AccountManagement |No |
+|DetailTracking |DetailTracking |No |
+|DirectoryServiceAccess |DirectoryServiceAccess |No |
+|LogonLogoff |LogonLogoff |No |
+|ObjectAccess |ObjectAccess |No |
+|PolicyChange |PolicyChange |No |
+|PrivilegeUse |PrivilegeUse |No |
+|SystemSecurity |SystemSecurity |No |
+
+## microsoft.aadiam/tenants
+<!-- Data source : arm-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|Signin |Signin |Yes |
+
+## Microsoft.AgFoodPlatform/farmBeats
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|ApplicationAuditLogs |Application Audit Logs |Yes |
+|FarmManagementLogs |Farm Management Logs |Yes |
+|FarmOperationLogs |Farm Operation Logs |Yes |
+|InsightLogs |Insight Logs |Yes |
+|JobProcessedLogs |Job Processed Logs |Yes |
+|ModelInferenceLogs |Model Inference Logs |Yes |
+|ProviderAuthLogs |Provider Auth Logs |Yes |
+|SatelliteLogs |Satellite Logs |Yes |
+|SensorManagementLogs |Sensor Management Logs |Yes |
+|WeatherLogs |Weather Logs |Yes |
+
+## Microsoft.AnalysisServices/servers
+<!-- Data source : arm-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|Engine |Engine |No |
+|Service |Service |No |
+
+## Microsoft.ApiManagement/service
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|GatewayLogs |Logs related to ApiManagement Gateway |No |
+|WebSocketConnectionLogs |Logs related to Websocket Connections |Yes |
+
+## Microsoft.App/managedEnvironments
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|AppEnvSpringAppConsoleLogs |Spring App console logs |Yes |
+|ContainerAppConsoleLogs |Container App console logs |Yes |
+|ContainerAppSystemLogs |Container App system logs |Yes |
+
+## Microsoft.AppConfiguration/configurationStores
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|Audit |Audit |Yes |
+|HttpRequest |HTTP Requests |Yes |
+
+## Microsoft.AppPlatform/Spring
+<!-- Data source : arm-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|ApplicationConsole |Application Console |No |
+|BuildLogs |Build Logs |Yes |
+|ContainerEventLogs |Container Event Logs |Yes |
+|IngressLogs |Ingress Logs |Yes |
+|SystemLogs |System Logs |No |
+
+## Microsoft.Attestation/attestationProviders
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|AuditEvent |AuditEvent message log category. |No |
+|NotProcessed |Requests which could not be processed. |Yes |
+|Operational |Operational message log category. |Yes |
+
+## Microsoft.Automation/automationAccounts
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|AuditEvent |AuditEvent |Yes |
+|DscNodeStatus |DscNodeStatus |No |
+|JobLogs |JobLogs |No |
+|JobStreams |JobStreams |No |
+
+## Microsoft.AutonomousDevelopmentPlatform/accounts
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|Audit |Audit |Yes |
+|Operational |Operational |Yes |
+|Request |Request |Yes |
+
+## Microsoft.AutonomousDevelopmentPlatform/workspaces
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|Audit |Audit |Yes |
+|Operational |Operational |Yes |
+|Request |Request |Yes |
+
+## microsoft.avs/privateClouds
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|vmwaresyslog |VMware Syslog |Yes |
+
+## microsoft.azuresphere/catalogs
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|AuditLogs |Audit Logs |Yes |
+|DeviceEvents |Device Events |Yes |
+
+## Microsoft.Batch/batchaccounts
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|AuditLog |Audit Logs |Yes |
+|ServiceLog |Service Logs |No |
+|ServiceLogs |Service Logs |Yes |
+
+## microsoft.botservice/botservices
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|BotRequest |Requests from the channels to the bot |Yes |
+
+## Microsoft.Cache/redis
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|ConnectedClientList |Connected client list |Yes |
+
+## Microsoft.Cache/redisEnterprise/databases
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|ConnectionEvents |Connection events (New Connection/Authentication/Disconnection) |Yes |
+
+## Microsoft.Cdn/cdnwebapplicationfirewallpolicies
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|WebApplicationFirewallLogs |Web Application Firewall Logs |No |
+
+## Microsoft.Cdn/profiles
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|AzureCdnAccessLog |Azure Cdn Access Log |No |
+|FrontDoorAccessLog |FrontDoor Access Log |Yes |
+|FrontDoorHealthProbeLog |FrontDoor Health Probe Log |Yes |
+|FrontDoorWebApplicationFirewallLog |FrontDoor WebApplicationFirewall Log |Yes |
+
+## Microsoft.Cdn/profiles/endpoints
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|CoreAnalytics |Gets the metrics of the endpoint, e.g., bandwidth, egress, etc. |No |
+
+## Microsoft.Chaos/experiments
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|ExperimentOrchestration |Experiment Orchestration Events |Yes |
+
+## Microsoft.ClassicNetwork/networksecuritygroups
+<!-- Data source : arm-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|Network Security Group Rule Flow Event |Network Security Group Rule Flow Event |No |
+
+## Microsoft.CodeSigning/codesigningaccounts
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|SignTransactions |Sign Transactions |Yes |
+
+## Microsoft.CognitiveServices/accounts
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|Audit |Audit Logs |No |
+|RequestResponse |Request and Response Logs |No |
+|Trace |Trace Logs |No |
+
+## Microsoft.Communication/CommunicationServices
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|AuthOperational |Operational Authentication Logs |Yes |
+|CallAutomationOperational |Operational Call Automation Logs |Yes |
+|CallDiagnostics |Call Diagnostics Logs |Yes |
+|CallRecordingSummary |Call Recording Summary Logs |Yes |
+|CallSummary |Call Summary Logs |Yes |
+|ChatOperational |Operational Chat Logs |No |
+|EmailSendMailOperational |Email Service Send Mail Logs |Yes |
+|EmailStatusUpdateOperational |Email Service Delivery Status Update Logs |Yes |
+|EmailUserEngagementOperational |Email Service User Engagement Logs |Yes |
+|JobRouterOperational |Operational Job Router Logs |Yes |
+|NetworkTraversalDiagnostics |Network Traversal Relay Diagnostic Logs |Yes |
+|NetworkTraversalOperational |Operational Network Traversal Logs |Yes |
+|RoomsOperational |Operational Rooms Logs |Yes |
+|SMSOperational |Operational SMS Logs |No |
+|Usage |Usage Records |No |
+
+## Microsoft.Compute/virtualMachines
+<!-- Data source : arm-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|SoftwareUpdateProfile |SoftwareUpdateProfile |Yes |
+|SoftwareUpdates |SoftwareUpdates |Yes |
+
+## Microsoft.ConfidentialLedger/ManagedCCF
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|applicationlogs |CCF Application Logs |Yes |
+
+## Microsoft.ConfidentialLedger/ManagedCCFs
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|applicationlogs |CCF Application Logs |Yes |
+
+## Microsoft.ConnectedCache/CacheNodes
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|Events |Events |Yes |
+
+## Microsoft.ConnectedCache/ispCustomers
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|Events |Events |Yes |
+
+## Microsoft.ConnectedVehicle/platformAccounts
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|Audit |MCVP Audit Logs |Yes |
+|Logs |MCVP Logs |Yes |
+
+## Microsoft.ContainerRegistry/registries
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|ContainerRegistryLoginEvents |Login Events |No |
+|ContainerRegistryRepositoryEvents |RepositoryEvent logs |No |
+
+## Microsoft.ContainerService/fleets
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|cloud-controller-manager |Kubernetes Cloud Controller Manager |Yes |
+|cluster-autoscaler |Kubernetes Cluster Autoscaler |Yes |
+|csi-azuredisk-controller |csi-azuredisk-controller |Yes |
+|csi-azurefile-controller |csi-azurefile-controller |Yes |
+|csi-snapshot-controller |csi-snapshot-controller |Yes |
+|guard |guard |Yes |
+|kube-apiserver |Kubernetes API Server |Yes |
+|kube-audit |Kubernetes Audit |Yes |
+|kube-audit-admin |Kubernetes Audit Admin Logs |Yes |
+|kube-controller-manager |Kubernetes Controller Manager |Yes |
+|kube-scheduler |Kubernetes Scheduler |Yes |
+
+## Microsoft.ContainerService/managedClusters
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|cloud-controller-manager |Kubernetes Cloud Controller Manager |Yes |
+|cluster-autoscaler |Kubernetes Cluster Autoscaler |No |
+|csi-azuredisk-controller |csi-azuredisk-controller |Yes |
+|csi-azurefile-controller |csi-azurefile-controller |Yes |
+|csi-snapshot-controller |csi-snapshot-controller |Yes |
+|guard |guard |No |
+|kube-apiserver |Kubernetes API Server |No |
+|kube-audit |Kubernetes Audit |No |
+|kube-audit-admin |Kubernetes Audit Admin Logs |No |
+|kube-controller-manager |Kubernetes Controller Manager |No |
+|kube-scheduler |Kubernetes Scheduler |No |
+
+## Microsoft.CustomProviders/resourceproviders
+<!-- Data source : arm-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|AuditLogs |Audit logs for MiniRP calls |No |
+
+## Microsoft.D365CustomerInsights/instances
+<!-- Data source : arm-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|Audit |Audit events |No |
+|Operational |Operational events |No |
+
+## Microsoft.Dashboard/grafana
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|GrafanaLoginEvents |Grafana Login Events |Yes |
+
+## Microsoft.Databricks/workspaces
+<!-- Data source : arm-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|accounts |Databricks Accounts |No |
+|capsule8Dataplane |Databricks Capsule8 Container Security Scanning Reports |Yes |
+|clamAVScan |Databricks Clam AV Scan |Yes |
+|clusterLibraries |Databricks Cluster Libraries |Yes |
+|clusters |Databricks Clusters |No |
+|databrickssql |Databricks DatabricksSQL |Yes |
+|dbfs |Databricks File System |No |
+|deltaPipelines |Databricks Delta Pipelines |Yes |
+|featureStore |Databricks Feature Store |Yes |
+|genie |Databricks Genie |Yes |
+|gitCredentials |Databricks Git Credentials |Yes |
+|globalInitScripts |Databricks Global Init Scripts |Yes |
+|iamRole |Databricks IAM Role |Yes |
+|instancePools |Instance Pools |No |
+|jobs |Databricks Jobs |No |
+|mlflowAcledArtifact |Databricks MLFlow Acled Artifact |Yes |
+|mlflowExperiment |Databricks MLFlow Experiment |Yes |
+|modelRegistry |Databricks Model Registry |Yes |
+|notebook |Databricks Notebook |No |
+|partnerHub |Databricks Partner Hub |Yes |
+|RemoteHistoryService |Databricks Remote History Service |Yes |
+|repos |Databricks Repos |Yes |
+|secrets |Databricks Secrets |No |
+|serverlessRealTimeInference |Databricks Serverless Real-Time Inference |Yes |
+|sqlanalytics |Databricks SQL Analytics |Yes |
+|sqlPermissions |Databricks SQLPermissions |No |
+|ssh |Databricks SSH |No |
+|unityCatalog |Databricks Unity Catalog |Yes |
+|webTerminal |Databricks Web Terminal |Yes |
+|workspace |Databricks Workspace |No |
+
+## Microsoft.DataCollaboration/workspaces
+<!-- Data source : arm-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|CollaborationAudit |Collaboration Audit |Yes |
+|Computations |Computations |Yes |
+|DataAssets |Data Assets |No |
+|Pipelines |Pipelines |No |
+|Proposals |Proposals |No |
+|Scripts |Scripts |No |
+
+## Microsoft.DataFactory/factories
+<!-- Data source : arm-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|ActivityRuns |Pipeline activity runs log |No |
+|AirflowDagProcessingLogs |Airflow dag processing logs |Yes |
+|AirflowSchedulerLogs |Airflow scheduler logs |Yes |
+|AirflowTaskLogs |Airflow task execution logs |Yes |
+|AirflowWebLogs |Airflow web logs |Yes |
+|AirflowWorkerLogs |Airflow worker logs |Yes |
+|PipelineRuns |Pipeline runs log |No |
+|SandboxActivityRuns |Sandbox Activity runs log |Yes |
+|SandboxPipelineRuns |Sandbox Pipeline runs log |Yes |
+|SSISIntegrationRuntimeLogs |SSIS integration runtime logs |No |
+|SSISPackageEventMessageContext |SSIS package event message context |No |
+|SSISPackageEventMessages |SSIS package event messages |No |
+|SSISPackageExecutableStatistics |SSIS package executable statistics |No |
+|SSISPackageExecutionComponentPhases |SSIS package execution component phases |No |
+|SSISPackageExecutionDataStatistics |SSIS package execution data statistics |No |
+|TriggerRuns |Trigger runs log |No |
+
+## Microsoft.DataLakeAnalytics/accounts
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|Audit |Audit Logs |No |
+|ConfigurationChange |Configuration Change Event Logs |Yes |
+|JobEvent |Job Event Logs |Yes |
+|JobInfo |Job Info Logs |Yes |
+|Requests |Request Logs |No |
+
+## Microsoft.DataLakeStore/accounts
+<!-- Data source : arm-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|Audit |Audit Logs |No |
+|Requests |Request Logs |No |
+
+## Microsoft.DataProtection/BackupVaults
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|AddonAzureBackupJobs |Addon Azure Backup Job Data |Yes |
+|AddonAzureBackupPolicy |Addon Azure Backup Policy Data |Yes |
+|AddonAzureBackupProtectedInstance |Addon Azure Backup Protected Instance Data |Yes |
+|CoreAzureBackup |Core Azure Backup Data |Yes |
+
+## Microsoft.DataShare/accounts
+<!-- Data source : arm-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|ReceivedShareSnapshots |Received Share Snapshots |No |
+|SentShareSnapshots |Sent Share Snapshots |No |
+|Shares |Shares |No |
+|ShareSubscriptions |Share Subscriptions |No |
+
+## Microsoft.DBforMariaDB/servers
+<!-- Data source : arm-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|MySqlAuditLogs |MariaDB Audit Logs |No |
+|MySqlSlowLogs |MariaDB Server Logs |No |
+
+## Microsoft.DBforMySQL/flexibleServers
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|MySqlAuditLogs |MySQL Audit Logs |No |
+|MySqlSlowLogs |MySQL Slow Logs |No |
+
+## Microsoft.DBforMySQL/servers
+<!-- Data source : arm-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|MySqlAuditLogs |MySQL Audit Logs |No |
+|MySqlSlowLogs |MySQL Server Logs |No |
+
+## Microsoft.DBforPostgreSQL/flexibleServers
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|PostgreSQLFlexDatabaseXacts |PostgreSQL remaining transactions |Yes |
+|PostgreSQLFlexQueryStoreRuntime |PostgreSQL Query Store Runtime |Yes |
+|PostgreSQLFlexQueryStoreWaitStats |PostgreSQL Query Store Wait Statistics |Yes |
+|PostgreSQLFlexSessions |PostgreSQL Sessions data |Yes |
+|PostgreSQLFlexTableStats |PostgreSQL Autovacuum and schema statistics |Yes |
+|PostgreSQLLogs |PostgreSQL Server Logs |No |
+
+## Microsoft.DBForPostgreSQL/serverGroupsv2
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|PostgreSQLLogs |PostgreSQL Server Logs |Yes |
+
+## Microsoft.DBforPostgreSQL/servers
+<!-- Data source : arm-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|PostgreSQLLogs |PostgreSQL Server Logs |No |
+|QueryStoreRuntimeStatistics |PostgreSQL Query Store Runtime Statistics |No |
+|QueryStoreWaitStatistics |PostgreSQL Query Store Wait Statistics |No |
+
+## Microsoft.DBforPostgreSQL/serversv2
+<!-- Data source : arm-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|PostgreSQLLogs |PostgreSQL Server Logs |No |
+
+## Microsoft.DesktopVirtualization/applicationgroups
+<!-- Data source : arm-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|Checkpoint |Checkpoint |No |
+|Error |Error |No |
+|Management |Management |No |
+
+## Microsoft.DesktopVirtualization/hostpools
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|AgentHealthStatus |AgentHealthStatus |No |
+|AutoscaleEvaluationPooled |Do not use - internal testing |Yes |
+|Checkpoint |Checkpoint |No |
+|Connection |Connection |No |
+|ConnectionGraphicsData |Connection Graphics Data Logs Preview |Yes |
+|Error |Error |No |
+|HostRegistration |HostRegistration |No |
+|Management |Management |No |
+|NetworkData |Network Data Logs |Yes |
+|SessionHostManagement |Session Host Management Activity Logs |Yes |
+
+## Microsoft.DesktopVirtualization/scalingplans
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|Autoscale |Autoscale logs |Yes |
+
+## Microsoft.DesktopVirtualization/workspaces
+<!-- Data source : arm-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|Checkpoint |Checkpoint |No |
+|Error |Error |No |
+|Feed |Feed |No |
+|Management |Management |No |
+
+## Microsoft.DevCenter/devcenters
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|DataplaneAuditEvent |Dataplane audit logs |Yes |
+
+## Microsoft.Devices/IotHubs
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|C2DCommands |C2D Commands |No |
+|C2DTwinOperations |C2D Twin Operations |No |
+|Configurations |Configurations |No |
+|Connections |Connections |No |
+|D2CTwinOperations |D2CTwinOperations |No |
+|DeviceIdentityOperations |Device Identity Operations |No |
+|DeviceStreams |Device Streams (Preview) |No |
+|DeviceTelemetry |Device Telemetry |No |
+|DirectMethods |Direct Methods |No |
+|DistributedTracing |Distributed Tracing (Preview) |No |
+|FileUploadOperations |File Upload Operations |No |
+|JobsOperations |Jobs Operations |No |
+|Routes |Routes |No |
+|TwinQueries |Twin Queries |No |
+
+## Microsoft.Devices/provisioningServices
+<!-- Data source : arm-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|DeviceOperations |Device Operations |No |
+|ServiceOperations |Service Operations |No |
+
+## Microsoft.DigitalTwins/digitalTwinsInstances
+<!-- Data source : arm-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|DataHistoryOperation |DataHistoryOperation |Yes |
+|DigitalTwinsOperation |DigitalTwinsOperation |No |
+|EventRoutesOperation |EventRoutesOperation |No |
+|ModelsOperation |ModelsOperation |No |
+|QueryOperation |QueryOperation |No |
+|ResourceProviderOperation |ResourceProviderOperation |Yes |
+
+## Microsoft.DocumentDB/cassandraClusters
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|CassandraAudit |CassandraAudit |Yes |
+|CassandraLogs |CassandraLogs |Yes |
+
+## Microsoft.DocumentDB/DatabaseAccounts
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|CassandraRequests |CassandraRequests |No |
+|ControlPlaneRequests |ControlPlaneRequests |No |
+|DataPlaneRequests |DataPlaneRequests |No |
+|GremlinRequests |GremlinRequests |No |
+|MongoRequests |MongoRequests |No |
+|PartitionKeyRUConsumption |PartitionKeyRUConsumption |No |
+|PartitionKeyStatistics |PartitionKeyStatistics |No |
+|QueryRuntimeStatistics |QueryRuntimeStatistics |No |
+|TableApiRequests |TableApiRequests |Yes |
+
+## Microsoft.EventGrid/domains
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|DataPlaneRequests |Data plane operations logs |Yes |
+|DeliveryFailures |Delivery Failure Logs |No |
+|PublishFailures |Publish Failure Logs |No |
+
+## Microsoft.EventGrid/partnerNamespaces
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|DataPlaneRequests |Data plane operations logs |Yes |
+|PublishFailures |Publish Failure Logs |No |
+
+## Microsoft.EventGrid/partnerTopics
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|DeliveryFailures |Delivery Failure Logs |No |
+
+## Microsoft.EventGrid/systemTopics
+<!-- Data source : arm-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|DeliveryFailures |Delivery Failure Logs |No |
+
+## Microsoft.EventGrid/topics
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|DataPlaneRequests |Data plane operations logs |Yes |
+|DeliveryFailures |Delivery Failure Logs |No |
+|PublishFailures |Publish Failure Logs |No |
+
+## Microsoft.EventHub/Namespaces
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|ApplicationMetricsLogs |Application Metrics Logs |Yes |
+|ArchiveLogs |Archive Logs |No |
+|AutoScaleLogs |Auto Scale Logs |No |
+|CustomerManagedKeyUserLogs |Customer Managed Key Logs |No |
+|EventHubVNetConnectionEvent |VNet/IP Filtering Connection Logs |No |
+|KafkaCoordinatorLogs |Kafka Coordinator Logs |No |
+|KafkaUserErrorLogs |Kafka User Error Logs |No |
+|OperationalLogs |Operational Logs |No |
+|RuntimeAuditLogs |Runtime Audit Logs |Yes |
+
+## Microsoft.HealthcareApis/services
+<!-- Data source : arm-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|AuditLogs |Audit logs |No |
+|DiagnosticLogs |Diagnostic logs |Yes |
+
+## Microsoft.HealthcareApis/workspaces/analyticsconnectors
+<!-- Data source : arm-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|DiagnosticLogs |Diagnostic logs for Analytics Connector |Yes |
+
+## Microsoft.HealthcareApis/workspaces/dicomservices
+<!-- Data source : arm-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|AuditLogs |Audit logs |Yes |
+|DiagnosticLogs |Diagnostic logs |Yes |
+
+## Microsoft.HealthcareApis/workspaces/fhirservices
+<!-- Data source : arm-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|AuditLogs |FHIR Audit logs |Yes |
+
+## Microsoft.HealthcareApis/workspaces/iotconnectors
+<!-- Data source : arm-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|DiagnosticLogs |Diagnostic logs |Yes |
+
+## microsoft.insights/autoscalesettings
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|AutoscaleEvaluations |Autoscale Evaluations |No |
+|AutoscaleScaleActions |Autoscale Scale Actions |No |
+
+## microsoft.insights/components
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|AppAvailabilityResults |Availability results |No |
+|AppBrowserTimings |Browser timings |No |
+|AppDependencies |Dependencies |No |
+|AppEvents |Events |No |
+|AppExceptions |Exceptions |No |
+|AppMetrics |Metrics |No |
+|AppPageViews |Page views |No |
+|AppPerformanceCounters |Performance counters |No |
+|AppRequests |Requests |No |
+|AppSystemEvents |System events |No |
+|AppTraces |Traces |No |
+
+## Microsoft.Insights/datacollectionrules
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|LogErrors |Log Errors |Yes |
+|LogTroubleshooting |Log Troubleshooting |Yes |
+
+## microsoft.keyvault/managedhsms
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|AuditEvent |Audit Event |No |
+
+## Microsoft.KeyVault/vaults
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|AuditEvent |Audit Logs |No |
+|AzurePolicyEvaluationDetails |Azure Policy Evaluation Details |Yes |
+
+## Microsoft.Kusto/clusters
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|Command |Command |No |
+|FailedIngestion |Failed ingestion |No |
+|IngestionBatching |Ingestion batching |No |
+|Journal |Journal |Yes |
+|Query |Query |No |
+|SucceededIngestion |Succeeded ingestion |No |
+|TableDetails |Table details |No |
+|TableUsageStatistics |Table usage statistics |No |
+
+## microsoft.loadtestservice/loadtests
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|OperationLogs |Azure Load Testing Operations |Yes |
+
+## Microsoft.Logic/IntegrationAccounts
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|IntegrationAccountTrackingEvents |Integration Account track events |No |
+
+## Microsoft.Logic/Workflows
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|WorkflowRuntime |Workflow runtime diagnostic events |No |
+
+## Microsoft.MachineLearningServices/registries
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|RegistryAssetReadEvent |Registry Asset Read Event |Yes |
+|RegistryAssetWriteEvent |Registry Asset Write Event |Yes |
+
+## Microsoft.MachineLearningServices/workspaces
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|AmlComputeClusterEvent |AmlComputeClusterEvent |No |
+|AmlComputeClusterNodeEvent |AmlComputeClusterNodeEvent |Yes |
+|AmlComputeCpuGpuUtilization |AmlComputeCpuGpuUtilization |No |
+|AmlComputeJobEvent |AmlComputeJobEvent |No |
+|AmlRunStatusChangedEvent |AmlRunStatusChangedEvent |No |
+|ComputeInstanceEvent |ComputeInstanceEvent |Yes |
+|DataLabelChangeEvent |DataLabelChangeEvent |Yes |
+|DataLabelReadEvent |DataLabelReadEvent |Yes |
+|DataSetChangeEvent |DataSetChangeEvent |Yes |
+|DataSetReadEvent |DataSetReadEvent |Yes |
+|DataStoreChangeEvent |DataStoreChangeEvent |Yes |
+|DataStoreReadEvent |DataStoreReadEvent |Yes |
+|DeploymentEventACI |DeploymentEventACI |Yes |
+|DeploymentEventAKS |DeploymentEventAKS |Yes |
+|DeploymentReadEvent |DeploymentReadEvent |Yes |
+|EnvironmentChangeEvent |EnvironmentChangeEvent |Yes |
+|EnvironmentReadEvent |EnvironmentReadEvent |Yes |
+|InferencingOperationACI |InferencingOperationACI |Yes |
+|InferencingOperationAKS |InferencingOperationAKS |Yes |
+|ModelsActionEvent |ModelsActionEvent |Yes |
+|ModelsChangeEvent |ModelsChangeEvent |Yes |
+|ModelsReadEvent |ModelsReadEvent |Yes |
+|PipelineChangeEvent |PipelineChangeEvent |Yes |
+|PipelineReadEvent |PipelineReadEvent |Yes |
+|RunEvent |RunEvent |Yes |
+|RunReadEvent |RunReadEvent |Yes |
+
+## Microsoft.MachineLearningServices/workspaces/onlineEndpoints
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|AmlOnlineEndpointConsoleLog |AmlOnlineEndpointConsoleLog |Yes |
+|AmlOnlineEndpointEventLog |AmlOnlineEndpointEventLog (preview) |Yes |
+|AmlOnlineEndpointTrafficLog |AmlOnlineEndpointTrafficLog (preview) |Yes |
+
+## Microsoft.ManagedNetworkFabric/networkDevices
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|AppAvailabilityResults |Availability results |Yes |
+|AppBrowserTimings |Browser timings |Yes |
+
+## Microsoft.Media/mediaservices
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|KeyDeliveryRequests |Key Delivery Requests |No |
+|MediaAccount |Media Account Health Status |Yes |
+
+## Microsoft.Media/mediaservices/liveEvents
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|LiveEventState |Live Event Operations |Yes |
+
+## Microsoft.Media/mediaservices/streamingEndpoints
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|StreamingEndpointRequests |Streaming Endpoint Requests |Yes |
+
+## Microsoft.Media/videoanalyzers
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|Audit |Audit Logs |Yes |
+|Diagnostics |Diagnostics Logs |Yes |
+|Operational |Operational Logs |Yes |
+
+## Microsoft.NetApp/netAppAccounts/capacityPools
+<!-- Data source : arm-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|Autoscale |Capacity Pool Autoscaled |Yes |
+
+## Microsoft.NetApp/netAppAccounts/capacityPools/volumes
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|ANFFileAccess |ANF File Access |Yes |
+
+## Microsoft.Network/applicationgateways
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|ApplicationGatewayAccessLog |Application Gateway Access Log |No |
+|ApplicationGatewayFirewallLog |Application Gateway Firewall Log |No |
+|ApplicationGatewayPerformanceLog |Application Gateway Performance Log |No |
+
+## Microsoft.Network/azureFirewalls
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|AZFWApplicationRule |Azure Firewall Application Rule |Yes |
+|AZFWApplicationRuleAggregation |Azure Firewall Network Rule Aggregation (Policy Analytics) |Yes |
+|AZFWDnsQuery |Azure Firewall DNS query |Yes |
+|AZFWFatFlow |Azure Firewall Fat Flow Log |Yes |
+|AZFWFlowTrace |Azure Firewall Flow Trace Log |Yes |
+|AZFWFqdnResolveFailure |Azure Firewall FQDN Resolution Failure |Yes |
+|AZFWIdpsSignature |Azure Firewall IDPS Signature |Yes |
+|AZFWNatRule |Azure Firewall Nat Rule |Yes |
+|AZFWNatRuleAggregation |Azure Firewall Nat Rule Aggregation (Policy Analytics) |Yes |
+|AZFWNetworkRule |Azure Firewall Network Rule |Yes |
+|AZFWNetworkRuleAggregation |Azure Firewall Application Rule Aggregation (Policy Analytics) |Yes |
+|AZFWThreatIntel |Azure Firewall Threat Intelligence |Yes |
+|AzureFirewallApplicationRule |Azure Firewall Application Rule (Legacy Azure Diagnostics) |No |
+|AzureFirewallDnsProxy |Azure Firewall DNS Proxy (Legacy Azure Diagnostics) |No |
+|AzureFirewallNetworkRule |Azure Firewall Network Rule (Legacy Azure Diagnostics) |No |
+
+## microsoft.network/bastionHosts
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|BastionAuditLogs |Bastion Audit Logs |No |
+
+## Microsoft.Network/expressRouteCircuits
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|PeeringRouteLog |Peering Route Table Logs |No |
+
+## Microsoft.Network/frontdoors
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|FrontdoorAccessLog |Frontdoor Access Log |No |
+|FrontdoorWebApplicationFirewallLog |Frontdoor Web Application Firewall Log |No |
+
+## Microsoft.Network/loadBalancers
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|LoadBalancerAlertEvent |Load Balancer Alert Events |No |
+|LoadBalancerProbeHealthStatus |Load Balancer Probe Health Status |No |
+
+## Microsoft.Network/networkManagers
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|NetworkGroupMembershipChange |Network Group Membership Change |Yes |
+
+## Microsoft.Network/networksecuritygroups
+<!-- Data source : arm-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|NetworkSecurityGroupEvent |Network Security Group Event |No |
+|NetworkSecurityGroupFlowEvent |Network Security Group Rule Flow Event |No |
+|NetworkSecurityGroupRuleCounter |Network Security Group Rule Counter |No |
+
+## Microsoft.Network/networkSecurityPerimeters
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|NspCrossPerimeterInboundAllowed |Cross perimeter inbound access allowed by perimeter link. |Yes |
+|NspCrossPerimeterOutboundAllowed |Cross perimeter outbound access allowed by perimeter link. |Yes |
+|NspIntraPerimeterInboundAllowed |Inbound access allowed within same perimeter. |Yes |
+|NspIntraPerimeterOutboundAllowed |Outbound attempted to same perimeter. NOTE: To be deprecated in future. |Yes |
+|NspOutboundAttempt |Outbound attempted to same or different perimeter. |Yes |
+|NspPrivateInboundAllowed |Private endpoint traffic allowed. |Yes |
+|NspPublicInboundPerimeterRulesAllowed |Public inbound access allowed by NSP access rules. |Yes |
+|NspPublicInboundPerimeterRulesDenied |Public inbound access denied by NSP access rules. |Yes |
+|NspPublicInboundResourceRulesAllowed |Public inbound access allowed by PaaS resource rules. |Yes |
+|NspPublicInboundResourceRulesDenied |Public inbound access denied by PaaS resource rules. |Yes |
+|NspPublicOutboundPerimeterRulesAllowed |Public outbound access allowed by NSP access rules. |Yes |
+|NspPublicOutboundPerimeterRulesDenied |Public outbound access denied by NSP access rules. |Yes |
+|NspPublicOutboundResourceRulesAllowed |Public outbound access allowed by PaaS resource rules. |Yes |
+|NspPublicOutboundResourceRulesDenied |Public outbound access denied by PaaS resource rules |Yes |
+
+## Microsoft.Network/networkSecurityPerimeters/profiles
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|NSPInboundAccessAllowed |NSP Inbound Access Allowed. |Yes |
+|NSPInboundAccessDenied |NSP Inbound Access Denied. |Yes |
+|NSPOutboundAccessAllowed |NSP Outbound Access Allowed. |Yes |
+|NSPOutboundAccessDenied |NSP Outbound Access Denied. |Yes |
+
+## microsoft.network/p2svpngateways
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|GatewayDiagnosticLog |Gateway Diagnostic Logs |No |
+|IKEDiagnosticLog |IKE Diagnostic Logs |No |
+|P2SDiagnosticLog |P2S Diagnostic Logs |No |
+
+## Microsoft.Network/publicIPAddresses
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|DDoSMitigationFlowLogs |Flow logs of DDoS mitigation decisions |No |
+|DDoSMitigationReports |Reports of DDoS mitigations |No |
+|DDoSProtectionNotifications |DDoS protection notifications |No |
+
+## Microsoft.Network/trafficManagerProfiles
+<!-- Data source : arm-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|ProbeHealthStatusEvents |Traffic Manager Probe Health Results Event |No |
+
+## microsoft.network/virtualnetworkgateways
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|GatewayDiagnosticLog |Gateway Diagnostic Logs |No |
+|IKEDiagnosticLog |IKE Diagnostic Logs |No |
+|P2SDiagnosticLog |P2S Diagnostic Logs |No |
+|RouteDiagnosticLog |Route Diagnostic Logs |No |
+|TunnelDiagnosticLog |Tunnel Diagnostic Logs |No |
+
+## Microsoft.Network/virtualNetworks
+<!-- Data source : arm-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|VMProtectionAlerts |VM protection alerts |No |
+
+## microsoft.network/vpngateways
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|GatewayDiagnosticLog |Gateway Diagnostic Logs |No |
+|IKEDiagnosticLog |IKE Diagnostic Logs |No |
+|RouteDiagnosticLog |Route Diagnostic Logs |No |
+|TunnelDiagnosticLog |Tunnel Diagnostic Logs |No |
+
+## Microsoft.NetworkFunction/azureTrafficCollectors
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|ExpressRouteCircuitIpfix |Express Route Circuit IPFIX Flow Records |Yes |
+
+## Microsoft.NotificationHubs/namespaces
+<!-- Data source : arm-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|OperationalLogs |Operational Logs |No |
+
+## MICROSOFT.OPENENERGYPLATFORM/ENERGYSERVICES
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|AirFlowTaskLogs |Air Flow Task Logs |Yes |
+|ElasticOperatorLogs |Elastic Operator Logs |Yes |
+|ElasticsearchLogs |Elasticsearch Logs |Yes |
+
+## Microsoft.OpenLogisticsPlatform/Workspaces
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|SupplyChainEntityOperations |Supply Chain Entity Operations |Yes |
+|SupplyChainEventLogs |Supply Chain Event logs |Yes |
+
+## Microsoft.OperationalInsights/workspaces
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|Audit |Audit |No |
+
+## Microsoft.PlayFab/titles
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|AuditLogs |AuditLogs |Yes |
+
+## Microsoft.PowerBI/tenants
+<!-- Data source : arm-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|Engine |Engine |No |
+
+## Microsoft.PowerBI/tenants/workspaces
+<!-- Data source : arm-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|Engine |Engine |No |
+
+## Microsoft.PowerBIDedicated/capacities
+<!-- Data source : arm-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|Engine |Engine |No |
+
+## microsoft.purview/accounts
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|DataSensitivityLogEvent |DataSensitivity |Yes |
+|ScanStatusLogEvent |ScanStatus |No |
+|Security |PurviewAccountAuditEvents |Yes |
+
+## Microsoft.RecoveryServices/Vaults
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|AddonAzureBackupAlerts |Addon Azure Backup Alert Data |No |
+|AddonAzureBackupJobs |Addon Azure Backup Job Data |No |
+|AddonAzureBackupPolicy |Addon Azure Backup Policy Data |No |
+|AddonAzureBackupProtectedInstance |Addon Azure Backup Protected Instance Data |No |
+|AddonAzureBackupStorage |Addon Azure Backup Storage Data |No |
+|ASRReplicatedItems |Azure Site Recovery Replicated Items Details |Yes |
+|AzureBackupReport |Azure Backup Reporting Data |No |
+|AzureSiteRecoveryEvents |Azure Site Recovery Events |No |
+|AzureSiteRecoveryJobs |Azure Site Recovery Jobs |No |
+|AzureSiteRecoveryProtectedDiskDataChurn |Azure Site Recovery Protected Disk Data Churn |No |
+|AzureSiteRecoveryRecoveryPoints |Azure Site Recovery Recovery Points |No |
+|AzureSiteRecoveryReplicatedItems |Azure Site Recovery Replicated Items |No |
+|AzureSiteRecoveryReplicationDataUploadRate |Azure Site Recovery Replication Data Upload Rate |No |
+|AzureSiteRecoveryReplicationStats |Azure Site Recovery Replication Stats |No |
+|CoreAzureBackup |Core Azure Backup Data |No |
+
+## Microsoft.Relay/namespaces
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|HybridConnectionsEvent |HybridConnections Events |No |
+|HybridConnectionsLogs |HybridConnectionsLogs |Yes |
+
+## Microsoft.Search/searchServices
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|OperationLogs |Operation Logs |No |
+
+## Microsoft.Security/antiMalwareSettings
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|ScanResults |AntimalwareScanResults |Yes |
+
+## Microsoft.Security/defenderForStorageSettings
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|ScanResults |AntimalwareScanResults |Yes |
+
+## microsoft.securityinsights/settings
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|Analytics |Analytics |Yes |
+|Automation |Automation |Yes |
+|DataConnectors |Data Collection - Connectors |Yes |
+
+## Microsoft.ServiceBus/Namespaces
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|ApplicationMetricsLogs |Application Metrics Logs (Unused) |Yes |
+|OperationalLogs |Operational Logs |No |
+|RuntimeAuditLogs |Runtime Audit Logs |Yes |
+|VNetAndIPFilteringLogs |VNet/IP Filtering Connection Logs |No |
+
+## Microsoft.SignalRService/SignalR
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|AllLogs |Azure SignalR Service Logs. |No |
+
+## Microsoft.SignalRService/WebPubSub
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|ConnectivityLogs |Connectivity logs for Azure Web PubSub Service. |Yes |
+|HttpRequestLogs |Http Request logs for Azure Web PubSub Service. |Yes |
+|MessagingLogs |Messaging logs for Azure Web PubSub Service. |Yes |
+
+## microsoft.singularity/accounts
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|Activity |Activity Logs |Yes |
+|Execution |Execution Logs |Yes |
+
+## Microsoft.Sql/managedInstances
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|DevOpsOperationsAudit |Devops operations Audit Logs |No |
+|ResourceUsageStats |Resource Usage Statistics |No |
+|SQLSecurityAuditEvents |SQL Security Audit Event |No |
+
+## Microsoft.Sql/managedInstances/databases
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|Errors |Errors |No |
+|QueryStoreRuntimeStatistics |Query Store Runtime Statistics |No |
+|QueryStoreWaitStatistics |Query Store Wait Statistics |No |
+|SQLInsights |SQL Insights |No |
+
+## Microsoft.Sql/servers/databases
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|AutomaticTuning |Automatic tuning |No |
+|Blocks |Blocks |No |
+|DatabaseWaitStatistics |Database Wait Statistics |No |
+|Deadlocks |Deadlocks |No |
+|DevOpsOperationsAudit |Devops operations Audit Logs |No |
+|DmsWorkers |Dms Workers |No |
+|Errors |Errors |No |
+|ExecRequests |Exec Requests |No |
+|QueryStoreRuntimeStatistics |Query Store Runtime Statistics |No |
+|QueryStoreWaitStatistics |Query Store Wait Statistics |No |
+|RequestSteps |Request Steps |No |
+|SQLInsights |SQL Insights |No |
+|SqlRequests |Sql Requests |No |
+|SQLSecurityAuditEvents |SQL Security Audit Event |No |
+|Timeouts |Timeouts |No |
+|Waits |Waits |No |
+
+## Microsoft.Storage/storageAccounts/blobServices
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|StorageDelete |StorageDelete |Yes |
+|StorageRead |StorageRead |Yes |
+|StorageWrite |StorageWrite |Yes |
+
+## Microsoft.Storage/storageAccounts/fileServices
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|StorageDelete |StorageDelete |Yes |
+|StorageRead |StorageRead |Yes |
+|StorageWrite |StorageWrite |Yes |
+
+## Microsoft.Storage/storageAccounts/queueServices
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|StorageDelete |StorageDelete |Yes |
+|StorageRead |StorageRead |Yes |
+|StorageWrite |StorageWrite |Yes |
+
+## Microsoft.Storage/storageAccounts/tableServices
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|StorageDelete |StorageDelete |Yes |
+|StorageRead |StorageRead |Yes |
+|StorageWrite |StorageWrite |Yes |
+
+## Microsoft.StorageCache/caches
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|AscCacheOperationEvent |HPC Cache operation event |Yes |
+|AscUpgradeEvent |HPC Cache upgrade event |Yes |
+|AscWarningEvent |HPC Cache warning |Yes |
+
+## Microsoft.StorageMover/storageMovers
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|CopyLogsFailed |Copy logs - Failed |Yes |
+|JobRunLogs |Job run logs |Yes |
+
+## Microsoft.StreamAnalytics/streamingjobs
+<!-- Data source : arm-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|Authoring |Authoring |No |
+|Execution |Execution |No |
+
+## Microsoft.Synapse/workspaces
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|BuiltinSqlReqsEnded |Built-in Sql Pool Requests Ended |No |
+|GatewayApiRequests |Synapse Gateway Api Requests |No |
+|IntegrationActivityRuns |Integration Activity Runs |Yes |
+|IntegrationPipelineRuns |Integration Pipeline Runs |Yes |
+|IntegrationTriggerRuns |Integration Trigger Runs |Yes |
+|SQLSecurityAuditEvents |SQL Security Audit Event |No |
+|SynapseLinkEvent |Synapse Link Event |Yes |
+|SynapseRbacOperations |Synapse RBAC Operations |No |
+
+## Microsoft.Synapse/workspaces/bigDataPools
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|BigDataPoolAppEvents |Big Data Pool Applications Execution Metrics |Yes |
+|BigDataPoolAppsEnded |Big Data Pool Applications Ended |No |
+|BigDataPoolBlockManagerEvents |Big Data Pool Block Manager Events |Yes |
+|BigDataPoolDriverLogs |Big Data Pool Driver Logs |Yes |
+|BigDataPoolEnvironmentEvents |Big Data Pool Environment Events |Yes |
+|BigDataPoolExecutorEvents |Big Data Pool Executor Events |Yes |
+|BigDataPoolExecutorLogs |Big Data Pool Executor Logs |Yes |
+|BigDataPoolJobEvents |Big Data Pool Job Events |Yes |
+|BigDataPoolSqlExecutionEvents |Big Data Pool Sql Execution Events |Yes |
+|BigDataPoolStageEvents |Big Data Pool Stage Events |Yes |
+|BigDataPoolTaskEvents |Big Data Pool Task Events |Yes |
+
+## Microsoft.Synapse/workspaces/kustoPools
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|Command |Synapse Data Explorer Command |Yes |
+|FailedIngestion |Synapse Data Explorer Failed Ingestion |Yes |
+|IngestionBatching |Synapse Data Explorer Ingestion Batching |Yes |
+|Query |Synapse Data Explorer Query |Yes |
+|SucceededIngestion |Synapse Data Explorer Succeeded Ingestion |Yes |
+|TableDetails |Synapse Data Explorer Table Details |Yes |
+|TableUsageStatistics |Synapse Data Explorer Table Usage Statistics |Yes |
+
+## Microsoft.Synapse/workspaces/scopePools
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|ScopePoolScopeJobsEnded |Scope Pool Scope Jobs Ended |Yes |
+|ScopePoolScopeJobsStateChange |Scope Pool Scope Jobs State Change |Yes |
+
+## Microsoft.Synapse/workspaces/sqlPools
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|DmsWorkers |Dms Workers |No |
+|ExecRequests |Exec Requests |No |
+|RequestSteps |Request Steps |No |
+|SqlRequests |Sql Requests |No |
+|SQLSecurityAuditEvents |Sql Security Audit Event |No |
+|Waits |Waits |No |
+
+## Microsoft.TimeSeriesInsights/environments
+<!-- Data source : arm-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|Ingress |Ingress |No |
+|Management |Management |No |
+
+## Microsoft.TimeSeriesInsights/environments/eventsources
+<!-- Data source : arm-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|Ingress |Ingress |No |
+|Management |Management |No |
+
+## microsoft.videoindexer/accounts
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|Audit |Audit |Yes |
+|IndexingLogs |Indexing Logs |Yes |
+
+## Microsoft.Web/hostingEnvironments
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|AppServiceEnvironmentPlatformLogs |App Service Environment Platform Logs |No |
+
+## Microsoft.Web/sites
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|AppServiceAntivirusScanAuditLogs |Report Antivirus Audit Logs |No |
+|AppServiceAppLogs |App Service Application Logs |No |
+|AppServiceAuditLogs |Access Audit Logs |No |
+|AppServiceConsoleLogs |App Service Console Logs |No |
+|AppServiceFileAuditLogs |Site Content Change Audit Logs |No |
+|AppServiceHTTPLogs |HTTP logs |No |
+|AppServiceIPSecAuditLogs |IPSecurity Audit logs |No |
+|AppServicePlatformLogs |App Service Platform logs |No |
+|FunctionAppLogs |Function Application Logs |No |
+|WorkflowRuntime |Workflow Runtime Logs |Yes |
+
+## Microsoft.Web/sites/slots
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|AppServiceAntivirusScanAuditLogs |Report Antivirus Audit Logs |No |
+|AppServiceAppLogs |App Service Application Logs |No |
+|AppServiceAuditLogs |Access Audit Logs |No |
+|AppServiceConsoleLogs |App Service Console Logs |No |
+|AppServiceFileAuditLogs |Site Content Change Audit Logs |No |
+|AppServiceHTTPLogs |HTTP logs |No |
+|AppServiceIPSecAuditLogs |IPSecurity Audit logs |No |
+|AppServicePlatformLogs |App Service Platform logs |No |
+|FunctionAppLogs |Function Application Logs |No |
+
+## microsoft.workloads/sapvirtualinstances
+<!-- Data source : naam-->
+
+|Category|Category Display Name|Costs To Export|
+||||
+|ChangeDetection |Change Detection |Yes |
++
+## Next Steps
+
+* [Learn more about resource logs](../essentials/platform-logs-overview.md)
+* [Stream resource resource logs to **Event Hubs**](./resource-logs.md#send-to-azure-event-hubs)
+* [Change resource log diagnostic settings using the Azure Monitor REST API](/rest/api/monitor/diagnosticsettings)
+* [Analyze logs from Azure storage with Log Analytics](./resource-logs.md#send-to-log-analytics-workspace)
++
+<!--Gen Date: Thu Apr 13 2023 22:24:40 GMT+0300 (Israel Daylight Time)-->
azure-monitor App Insights Azure Ad Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/api/app-insights-azure-ad-api.md
# Application Insights API Access with Microsoft Azure Active Directory (Azure AD) Authentication
-You can submit a query request to a workspace by using the Azure Monitor Log Analytics endpoint `https://api.loganalytics.azure.com`. To access the endpoint, you must authenticate through Azure Active Directory (Azure AD).
-
->[!Note]
-> The `api.loganalytics.io` endpoint is being replaced by `api.loganalytics.azure.com`. The `api.loganalytics.io` endpoint will continue to be supported for the forseeable future.
-
-## Authenticate with a demo API key
-
-To quickly explore the API without Azure AD authentication, use the demonstration workspace with sample data, which supports API key authentication.
-
-To authenticate and run queries against the sample workspace, use `DEMO_WORKSPACE` as the {workspace-id} and pass in the API key `DEMO_KEY`.
-
-If either the Application ID or the API key is incorrect, the API service returns a [403](https://en.wikipedia.org/wiki/List_of_HTTP_status_codes#4xx_Client_Error) (Forbidden) error.
-
-The API key `DEMO_KEY` can be passed in three different ways, depending on whether you want to use a header, the URL, or basic authentication:
--- **Custom header**: Provide the API key in the custom header `X-Api-Key`.-- **Query parameter**: Provide the API key in the URL parameter `api_key`.-- **Basic authentication**: Provide the API key as either username or password. If you provide both, the API key must be in the username.-
-This example uses the workspace ID and API key in the header:
-
-```
- POST https://api.loganalytics.azure.com/v1/workspaces/DEMO_WORKSPACE/query
- X-Api-Key: DEMO_KEY
- Content-Type: application/json
-
- {
- "query": "AzureActivity | summarize count() by Category"
- }
-```
-
-## Public API endpoint
-
-The public API endpoint is:
-
-```
- https://api.loganalytics.azure.com/{api-version}/workspaces/{workspaceId}
-```
-where:
-
-The query is passed in the request body.
-
-For example:
- ```
- https://api.loganalytics.azure.com/v1/workspaces/1234abcd-def89-765a-9abc-def1234abcde
-
- Body:
- {
- "query": "Usage"
- }
-```
+You can submit a query request by using the Azure Monitor Application Insights endpoint `https://api.applicationinsights.io`. To access the endpoint, you must authenticate through Azure Active Directory (Azure AD).
## Set up authentication
To access the API, you register a client app with Azure AD and request a token.
1. On the app's overview page, select **API permissions**. 1. Select **Add a permission**.
-1. On the **APIs my organization uses** tab, search for **Log Analytics** and select **Log Analytics API** from the list.
-
- :::image type="content" source="../media/api-register-app/request-api-permissions.png" alt-text="A screenshot that shows the Request API permissions page.":::
+1. On the **APIs my organization uses** tab, search for **Application Insights** and select **Application Insights API** from the list.
1. Select **Delegated permissions**. 1. Select the **Data.Read** checkbox. 1. Select **Add permissions**.
- :::image type="content" source="../media/api-register-app/add-requested-permissions.png" alt-text="A screenshot that shows the continuation of the Request API permissions page.":::
+Now that your app is registered and has permissions to use the API, grant your app access to your Application Insights resource.
-Now that your app is registered and has permissions to use the API, grant your app access to your Log Analytics workspace.
-
-1. From your **Log Analytics workspace** overview page, select **Access control (IAM)**.
+1. From your **Application Insights resource** overview page, select **Access control (IAM)**.
1. Select **Add role assignment**.
- :::image type="content" source="../media/api-register-app/workspace-access-control.png" alt-text="A screenshot that shows the Access control page for a Log Analytics workspace.":::
- 1. Select the **Reader** role and then select **Members**.
- :::image type="content" source="../media/api-register-app/add-role-assignment.png" alt-text="A screenshot that shows the Add role assignment page for a Log Analytics workspace.":::
- 1. On the **Members** tab, choose **Select members**. 1. Enter the name of your app in the **Select** box. 1. Select your app and choose **Select**. 1. Select **Review + assign**.
- :::image type="content" source="../media/api-register-app/select-members.png" alt-text="A screenshot that shows the Select members pane on the Add role assignment page for a Log Analytics workspace.":::
-
-1. After you finish the Active Directory setup and workspace permissions, request an authorization token.
+1. After you finish the Active Directory setup and permissions, request an authorization token.
>[!Note]
-> For this example, we applied the Reader role. This role is one of many built-in roles and might include more permissions than you require. More granular roles and permissions can be created. For more information, see [Manage access to Log Analytics workspaces](../../logs/manage-access.md).
+> For this example, we applied the Reader role. This role is one of many built-in roles and might include more permissions than you require. More granular roles and permissions can be created.
## Request an authorization token Before you begin, make sure you have all the values required to make the request successfully. All requests require: - Your Azure AD tenant ID.-- Your workspace ID.
+- Your App Insights App ID - If you are currently using API Keys, this is the same app id.
- Your Azure AD client ID for the app. - An Azure AD client secret for the app.
-The Log Analytics API supports Azure AD authentication with three different [Azure AD OAuth2](/azure/active-directory/develop/active-directory-protocols-oauth-code) flows:
+The Application Insights API supports Azure AD authentication with three different [Azure AD OAuth2](/azure/active-directory/develop/active-directory-protocols-oauth-code) flows:
- Client credentials - Authorization code - Implicit ### Client credentials flow
-In the client credentials flow, the token is used with the Log Analytics endpoint. A single request is made to receive a token by using the credentials provided for your app in the previous step when you [register an app in Azure AD](./register-app-for-token.md).
+In the client credentials flow, the token is used with the Application Insights endpoint. A single request is made to receive a token by using the credentials provided for your app in the previous step when you [register an app in Azure AD](./register-app-for-token.md).
-Use the `https://api.loganalytics.azure.com` endpoint.
+Use the `https://api.applicationinsights.io` endpoint.
#### Client credentials token URL (POST request)
Use the `https://api.loganalytics.azure.com` endpoint.
grant_type=client_credentials &client_id=<app-client-id>
- &resource=https://api.loganalytics.io
+ &resource=https://api.applicationinsights.io
&client_secret=<app-client-secret> ```
A successful request receives an access token in the response:
} ```
-Use the token in requests to the Log Analytics endpoint:
+Use the token in requests to the Application Insights endpoint:
```http
- POST /v1/workspaces/your workspace id/query?timespan=P1D
- Host: https://api.loganalytics.azure.com
+ POST /v1/apps/yous_app_id/query?timespan=P1D
+ Host: https://api.applicationinsights.io
Content-Type: application/json Authorization: bearer <your access token> Body: {
- "query": "AzureActivity |summarize count() by Category"
+ "query": "requests | take 10"
} ``` Example response:
-```http
+```{
+ "tables": [
{
- "tables": [
- {
- "name": "PrimaryResult",
- "columns": [
- {
- "name": "OperationName",
- "type": "string"
- },
- {
- "name": "Level",
- "type": "string"
- },
- {
- "name": "ActivityStatus",
- "type": "string"
- }
- ],
- "rows": [
- [
- "Metric Alert",
- "Informational",
- "Resolved",
- ...
- ],
- ...
- ]
- },
- ...
+ "name": "PrimaryResult",
+ "columns": [
+ {
+ "name": "timestamp",
+ "type": "datetime"
+ },
+ {
+ "name": "id",
+ "type": "string"
+ },
+ {
+ "name": "source",
+ "type": "string"
+ },
+ {
+ "name": "name",
+ "type": "string"
+ },
+ {
+ "name": "url",
+ "type": "string"
+ },
+ {
+ "name": "success",
+ "type": "string"
+ },
+ {
+ "name": "resultCode",
+ "type": "string"
+ },
+ {
+ "name": "duration",
+ "type": "real"
+ },
+ {
+ "name": "performanceBucket",
+ "type": "string"
+ },
+ {
+ "name": "customDimensions",
+ "type": "dynamic"
+ },
+ {
+ "name": "customMeasurements",
+ "type": "dynamic"
+ },
+ {
+ "name": "operation_Name",
+ "type": "string"
+ },
+ {
+ "name": "operation_Id",
+ "type": "string"
+ },
+ {
+ "name": "operation_ParentId",
+ "type": "string"
+ },
+ {
+ "name": "operation_SyntheticSource",
+ "type": "string"
+ },
+ {
+ "name": "session_Id",
+ "type": "string"
+ },
+ {
+ "name": "user_Id",
+ "type": "string"
+ },
+ {
+ "name": "user_AuthenticatedId",
+ "type": "string"
+ },
+ {
+ "name": "user_AccountId",
+ "type": "string"
+ },
+ {
+ "name": "application_Version",
+ "type": "string"
+ },
+ {
+ "name": "client_Type",
+ "type": "string"
+ },
+ {
+ "name": "client_Model",
+ "type": "string"
+ },
+ {
+ "name": "client_OS",
+ "type": "string"
+ },
+ {
+ "name": "client_IP",
+ "type": "string"
+ },
+ {
+ "name": "client_City",
+ "type": "string"
+ },
+ {
+ "name": "client_StateOrProvince",
+ "type": "string"
+ },
+ {
+ "name": "client_CountryOrRegion",
+ "type": "string"
+ },
+ {
+ "name": "client_Browser",
+ "type": "string"
+ },
+ {
+ "name": "cloud_RoleName",
+ "type": "string"
+ },
+ {
+ "name": "cloud_RoleInstance",
+ "type": "string"
+ },
+ {
+ "name": "appId",
+ "type": "string"
+ },
+ {
+ "name": "appName",
+ "type": "string"
+ },
+ {
+ "name": "iKey",
+ "type": "string"
+ },
+ {
+ "name": "sdkVersion",
+ "type": "string"
+ },
+ {
+ "name": "itemId",
+ "type": "string"
+ },
+ {
+ "name": "itemType",
+ "type": "string"
+ },
+ {
+ "name": "itemCount",
+ "type": "int"
+ }
+ ],
+ "rows": [
+ [
+ "2018-02-01T17:33:09.788Z",
+ "|0qRud6jz3k0=.c32c2659_",
+ null,
+ "GET Reports/Index",
+ "http://fabrikamfiberapp.azurewebsites.net/Reports",
+ "True",
+ "200",
+ "3.3833",
+ "<250ms",
+ "{\"_MS.ProcessedByMetricExtractors\":\"(Name:'Requests', Ver:'1.0')\"}",
+ null,
+ "GET Reports/Index",
+ "0qRud6jz3k0=",
+ "0qRud6jz3k0=",
+ "Application Insights Availability Monitoring",
+ "9fc6738d-7e26-44f0-b88e-6fae8ccb6b26",
+ "us-va-ash-azr_9fc6738d-7e26-44f0-b88e-6fae8ccb6b26",
+ null,
+ null,
+ "AutoGen_49c3aea0-4641-4675-93b5-55f7a62d22d3",
+ "PC",
+ null,
+ null,
+ "52.168.8.0",
+ "Boydton",
+ "Virginia",
+ "United States",
+ null,
+ "fabrikamfiberapp",
+ "RD00155D5053D1",
+ "cf58dcfd-0683-487c-bc84-048789bca8e5",
+ "fabrikamprod",
+ "5a2e4e0c-e136-4a15-9824-90ba859b0a89",
+ "web:2.5.0-33031",
+ "051ad4ef-0776-11e8-ac6e-e30599af6943",
+ "request",
+ "1"
+ ],
+ [
+ "2018-02-01T17:33:15.786Z",
+ "|x/Ysh+M1TfU=.c32c265a_",
+ null,
+ "GET Home/Index",
+ "http://fabrikamfiberapp.azurewebsites.net/",
+ "True",
+ "200",
+ "716.2912",
+ "500ms-1sec",
+ "{\"_MS.ProcessedByMetricExtractors\":\"(Name:'Requests', Ver:'1.0')\"}",
+ null,
+ "GET Home/Index",
+ "x/Ysh+M1TfU=",
+ "x/Ysh+M1TfU=",
+ "Application Insights Availability Monitoring",
+ "58b15be6-d1e6-4d89-9919-52f63b840913",
+ "emea-se-sto-edge_58b15be6-d1e6-4d89-9919-52f63b840913",
+ null,
+ null,
+ "AutoGen_49c3aea0-4641-4675-93b5-55f7a62d22d3",
+ "PC",
+ null,
+ null,
+ "51.141.32.0",
+ "Cardiff",
+ "Cardiff",
+ "United Kingdom",
+ null,
+ "fabrikamfiberapp",
+ "RD00155D5053D1",
+ "cf58dcfd-0683-487c-bc84-048789bca8e5",
+ "fabrikamprod",
+ "5a2e4e0c-e136-4a15-9824-90ba859b0a89",
+ "web:2.5.0-33031",
+ "051ad4f0-0776-11e8-ac6e-e30599af6943",
+ "request",
+ "1"
]
+ ]
}
+ ]
+}
``` ### Authorization code flow
-The main OAuth2 flow supported is through [authorization codes](/azure/active-directory/develop/active-directory-protocols-oauth-code). This method requires two HTTP requests to acquire a token with which to call the Azure Monitor Log Analytics API. There are two URLs, with one endpoint per request. Their formats are described in the following sections.
+The main OAuth2 flow supported is through [authorization codes](/azure/active-directory/develop/active-directory-protocols-oauth-code). This method requires two HTTP requests to acquire a token with which to call the Azure Monitor Application Insights API. There are two URLs, with one endpoint per request. Their formats are described in the following sections.
#### Authorization code URL (GET request)
The main OAuth2 flow supported is through [authorization codes](/azure/active-di
client_id=<app-client-id> &response_type=code &redirect_uri=<app-redirect-uri>
- &resource=https://api.loganalytics.io
+ &resource=https://api.applicationinsights.io
``` When a request is made to the authorize URL, the client\_id is the application ID from your Azure AD app, copied from the app's properties menu. The redirect\_uri is the homepage/login URL from the same Azure AD app. When a request is successful, this endpoint redirects you to the sign-in page you provided at sign-up with the authorization code appended to the URL. See the following example:
At this point, you've obtained an authorization code, which you need now to requ
&client_id=<app client id> &code=<auth code fom GET request> &redirect_uri=<app-client-id>
- &resource=https://api.loganalytics.io
+ &resource=https://api.applicationinsights.io
&client_secret=<app-client-secret> ```
Response example:
"access_token": "eyJ0eXAiOiJKV1QiLCJ.....Ax", "expires_in": "3600", "ext_expires_in": "1503641912",
- "id_token": "not_needed_for_log_analytics",
+ "id_token": "not_needed_for_app_insights",
"not_before": "1503638012", "refresh_token": "eyJ0esdfiJKV1ljhgYF.....Az",
- "resource": "https://api.loganalytics.io",
+ "resource": "https://api.applicationinsights.io",
"scope": "Data.Read", "token_type": "bearer" } ```
-The access token portion of this response is what you present to the Log Analytics API in the `Authorization: Bearer` header. You can also use the refresh token in the future to acquire a new access\_token and refresh\_token when yours have gone stale. For this request, the format and endpoint are:
+The access token portion of this response is what you present to the Application Insights API in the `Authorization: Bearer` header. You can also use the refresh token in the future to acquire a new access\_token and refresh\_token when yours have gone stale. For this request, the format and endpoint are:
```http POST /YOUR_AAD_TENANT/oauth2/token HTTP/1.1
The access token portion of this response is what you present to the Log Analyti
client_id=<app-client-id> &refresh_token=<refresh-token> &grant_type=refresh_token
- &resource=https://api.loganalytics.io
+ &resource=https://api.applicationinsights.io
&client_secret=<app-client-secret> ```
Response example:
"token_type": "Bearer", "expires_in": "3600", "expires_on": "1460404526",
- "resource": "https://api.loganalytics.io",
+ "resource": "https://api.applicationinsights.io",
"access_token": "eyJ0eXAiOiJKV1QiLCJ.....Ax", "refresh_token": "eyJ0esdfiJKV1ljhgYF.....Az" }
Response example:
### Implicit code flow
-The Log Analytics API supports the OAuth2 [implicit flow](/azure/active-directory/develop/active-directory-dev-understanding-oauth2-implicit-grant). For this flow, only a single request is required, but no refresh token can be acquired.
+The Application Insights API supports the OAuth2 [implicit flow](/azure/active-directory/develop/active-directory-dev-understanding-oauth2-implicit-grant). For this flow, only a single request is required, but no refresh token can be acquired.
#### Implicit code authorize URL
The Log Analytics API supports the OAuth2 [implicit flow](/azure/active-director
client_id=<app-client-id> &response_type=token &redirect_uri=<app-redirect-uri>
- &resource=https://api.loganalytics.io
+ &resource=https://api.applicationinsights.io
``` A successful request produces a redirect to your redirect URI with the token in the URL:
A successful request produces a redirect to your redirect URI with the token in
http://YOUR_REDIRECT_URI/#access_token=YOUR_ACCESS_TOKEN&token_type=Bearer&expires_in=3600&session_state=STATE_GUID ```
-This access\_token can be used as the `Authorization: Bearer` header value when it's passed to the Log Analytics API to authorize requests.
-
-## More information
-
-You can find documentation about OAuth2 with Azure AD here:
-
-## Next steps
--- [Request format](./request-format.md)-- [Response format](./response-format.md)-- [Querying logs for Azure resources](./azure-resource-queries.md)-- [Batch queries](./batch-queries.md)
+This access\_token can be used as the `Authorization: Bearer` header value when it's passed to the Application Insights API to authorize requests.
azure-monitor Customer Managed Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/customer-managed-keys.md
We recommend you review [Limitations and constraints](#limitationsandconstraints
Azure Monitor ensures that all data and saved queries are encrypted at rest using Microsoft-managed keys (MMK). You also have the option to encrypt data with your own key in [Azure Key Vault](../../key-vault/general/overview.md), with control over key lifecycle and ability to revoke access to your data at any time. Azure Monitor use of encryption is identical to the way [Azure Storage encryption](../../storage/common/storage-service-encryption.md#about-azure-storage-service-side-encryption) operates.
-Customer-managed key is delivered on [dedicated clusters](./logs-dedicated-clusters.md) providing higher protection level and control. Data to dedicated clusters is encrypted twice, once at the service level using Microsoft-managed keys or Customer-managed keys, and once at the infrastructure level, using two different encryption algorithms and two different keys. [double encryption](../../storage/common/storage-service-encryption.md#doubly-encrypt-data-with-infrastructure-encryption) protects against a scenario where one of the encryption algorithms or keys may be compromised. In this case, the additional layer of encryption continues to protect your data. Dedicated cluster also allows you to protect your data with [Lockbox](#customer-lockbox-preview) control.
+Customer-managed key is delivered on [dedicated clusters](./logs-dedicated-clusters.md) providing higher protection level and control. Data to dedicated clusters is encrypted twice, once at the service level using Microsoft-managed keys or Customer-managed keys, and once at the infrastructure level, using two different encryption algorithms and two different keys. [double encryption](../../storage/common/storage-service-encryption.md#doubly-encrypt-data-with-infrastructure-encryption) protects against a scenario where one of the encryption algorithms or keys may be compromised. In this case, the additional layer of encryption continues to protect your data. Dedicated cluster also allows you to protect your data with [Lockbox](#customer-lockbox) control.
Data ingested in the last 14 days or recently used in queries is kept in hot-cache (SSD-backed) for query efficiency. SSD data is encrypted with Microsoft keys regardless customer-managed key configuration, but your control over SSD access adheres to [key revocation](#key-revocation)
Content-type: application/json
After the configuration, any new alert query will be saved in your storage.
-## Customer Lockbox (preview)
+## Customer Lockbox
Lockbox gives you the control to approve or reject Microsoft engineer request to access your data during a support request.
azure-monitor Log Analytics Workspace Health https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/log-analytics-workspace-health.md
To view your Log Analytics workspace health and set up health status alerts:
The **Enable recommended alert rules** pane opens with a list of recommended alert rules for your Log Analytics workspace.
- :::image type="content" source="../alerts/media/alerts-managing-alert-instances/alerts-enable-recommended-alert-rule-pane.png" alt-text="Screenshot of recommended alert rules pane.":::
+ :::image type="content" source="media/data-ingestion-time/log-analytics-workspace-recommended-alerts.png" alt-text="Screenshot of recommended alert rules pane.":::
1. In the **Alert me if** section, select all of the rules you want to enable. 1. In the **Notify me by** section, select the way you want to be notified if an alert is triggered.
azure-monitor Logs Dedicated Clusters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/logs-dedicated-clusters.md
Linking a Log Analytics workspace to a dedicated cluster in Azure Monitor provid
Capabilities that require dedicated clusters: - **[Customer-managed keys](../logs/customer-managed-keys.md)** - Encrypt cluster data using keys that you provide and control.-- **[Lockbox](../logs/customer-managed-keys.md#customer-lockbox-preview)** - Control Microsoft support engineer access requests to your data.
+- **[Lockbox](../logs/customer-managed-keys.md#customer-lockbox)** - Control Microsoft support engineer access requests to your data.
- **[Double encryption](../../storage/common/storage-service-encryption.md#doubly-encrypt-data-with-infrastructure-encryption)** - Protect against a scenario where one of the encryption algorithms or keys may be compromised. In this case, the extra layer of encryption continues to protect your data. - **[Cross-query optimization](../logs/cross-workspace-query.md)** - Cross-workspace queries run faster when workspaces are on the same cluster. - **Cost optimization** - Link your workspaces in same region to cluster to get commitment tier discount to all workspaces, even to ones with low ingestion that
azure-monitor Service Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/service-limits.md
Title: Azure Monitor service limits | Microsoft Docs
-description: Lists limits in different areas of Azure Monitor.
+description: This article lists limits in different areas of Azure Monitor.
This article lists limits in different areas of Azure Monitor.
[!INCLUDE [monitoring-limits-metrics](../../includes/azure-monitor-limits-metrics.md)]
-## Logs ingestion API
+## Logs Ingestion API
[!INCLUDE [monitoring-limits-custom-logs](../../includes/azure-monitor-limits-custom-logs.md)]
This article lists limits in different areas of Azure Monitor.
[!INCLUDE [monitoring-limits-data-collection-rules](../../includes/azure-monitor-limits-data-collection-rules.md)]
-## Diagnostic Settings
+## Diagnostic settings
[!INCLUDE [monitoring-limits-diagnostic-settings](../../includes/azure-monitor-limits-diagnostic-settings.md)] - ## Log queries and language [!INCLUDE [monitoring-limits-log-queries](../../includes/azure-monitor-limits-log-queries.md)]
This article lists limits in different areas of Azure Monitor.
[!INCLUDE [monitoring-limits-application-insights](../../includes/application-insights-limits.md)]
-## Next Steps
+## Next steps
- [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/) - [Monitoring usage and estimated costs in Azure Monitor](./usage-estimated-costs.md)
azure-monitor Vmext Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/vmext-troubleshoot.md
If the Microsoft Monitoring Agent VM extension isn't installing or reporting, pe
1. Check if the Azure VM agent is installed and working correctly by using the steps in [KB 2965986](https://support.microsoft.com/kb/2965986#mt1): * You can also review the VM agent log file `C:\WindowsAzure\logs\WaAppAgent.log`. * If the log doesn't exist, the VM agent isn't installed.
- * [Install the Azure VM Agent](../../virtual-machines/extensions/agent-windows.md#install-the-vm-agent).
+ * [Install the Azure VM Agent](../../virtual-machines/extensions/agent-windows.md#install-the-azure-windows-vm-agent).
1. Review the Microsoft Monitoring Agent VM extension log files in `C:\Packages\Plugins\Microsoft.EnterpriseCloud.Monitoring.MicrosoftMonitoringAgent`. 1. Ensure the virtual machine can run PowerShell scripts. 1. Ensure permissions on C:\Windows\temp haven't been changed.
azure-netapp-files Azure Netapp Files Network Topologies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-network-topologies.md
na Previously updated : 2/17/2022 Last updated : 04/12/2023
If you use a new VNet, you can create a subnet and delegate the subnet to Azure
If the VNet is peered with another VNet, you can't expand the VNet address space. For that reason, the new delegated subnet needs to be created within the VNet address space. If you need to extend the address space, you must delete the VNet peering before expanding the address space.
+Ensure that the address space size of the Azure NetApp Files delegated subnet is smaller than the address space of the virtual network to avoid unforeseen issues.
+ ### UDRs and NSGs If the subnet has a combination of volumes with the Standard and Basic network features, user-defined routes (UDRs) and network security groups (NSGs) applied on the delegated subnets will only apply to the volumes with the Standard network features.
azure-portal Get Subscription Tenant Id https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-portal/get-subscription-tenant-id.md
Title: Get subscription and tenant IDs in the Azure portal description: To get them Previously updated : 04/21/2022 Last updated : 04/11/2023
Follow these steps to retrieve the ID for a subscription in the Azure portal.
1. Sign in to the [Azure portal](https://portal.azure.com). 1. Under the Azure services heading, select **Subscriptions**. If you don't see **Subscriptions** here, use the search box to find it.
-1. Find the **Subscription ID** for the subscription shown in the second column. If no subscriptions appear, or you don't see the right one, you may need to [switch directories](set-preferences.md#switch-and-manage-directories) to show the subscriptions from a different Azure AD tenant.
-1. Copy the **Subscription ID**. You can paste this value into a text document or other location.
+1. Find the subscription in the list, and note the **Subscription ID** shown in the second column. If no subscriptions appear, or you don't see the right one, you may need to [switch directories](set-preferences.md#switch-and-manage-directories) to show the subscriptions from a different Azure AD tenant.
+1. To easily copy the **Subscription ID**, select the subscription name to display more details. Select the **Copy to clipboard** icon shown next to the **Subscription ID** in the **Essentials** section. You can paste this value into a text document or other location.
> [!TIP]
-> You can also list your subscriptions and view their IDs programmatically by using [Get-AzSubscription](/powershell/module/az.accounts/get-azsubscription?view=latest&preserve-view=true) (Azure PowerShell) or [az account list](/cli/azure/account?view=azure-cli-latest&preserve-view=true) (Azure CLI).
+> You can also list your subscriptions and view their IDs programmatically by using [Get-AzSubscription](/powershell/module/az.accounts/get-azsubscription) (Azure PowerShell) or [az account list](/cli/azure/account#az-account-list) (Azure CLI).
## Find your Azure AD tenant
Follow these steps to retrieve the ID for an Azure AD tenant in the Azure portal
1. Confirm that you are signed into the tenant for which you want to retrieve the ID. If not, [switch directories](set-preferences.md#switch-and-manage-directories) so that you're working in the right tenant. 1. Under the Azure services heading, select **Azure Active Directory**. If you don't see **Azure Active Directory** here, use the search box to find it. 1. Find the **Tenant ID** in the **Basic information** section of the **Overview** screen.
-1. Copy the **Tenant ID**. You can paste this value into a text document or other location.
+1. Copy the **Tenant ID** by selecting the **Copy to clipboard** icon shown next to it. You can paste this value into a text document or other location.
> [!TIP] > You can also find your tenant programmatically by using [Azure Powershell](../active-directory/fundamentals/active-directory-how-to-find-tenant.md#find-tenant-id-with-powershell) or [Azure CLI](../active-directory/fundamentals/active-directory-how-to-find-tenant.md#find-tenant-id-with-cli).
Follow these steps to retrieve the ID for an Azure AD tenant in the Azure portal
## Next steps - Learn more about [Azure Active Directory](../active-directory/fundamentals/active-directory-whatis.md).-- Learn how to [manage Azure subscriptions with the Azure CLI](/cli/azure/manage-azure-subscriptions-azure-cli).
+- Learn how to manage Azure subscriptions [with Azure CLI](/cli/azure/manage-azure-subscriptions-azure-cli) or [with Azure PowerShell](/powershell/azure/manage-subscriptions-azureps).
- Learn how to [manage Azure portal settings and preferences](set-preferences.md).
azure-portal Set Preferences https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-portal/set-preferences.md
Title: Manage Azure portal settings and preferences description: Change Azure portal settings such as default subscription/directory, timeouts, menu mode, contrast, theme, notifications, language/region and more. Previously updated : 11/7/2022 Last updated : 04/11/2023
To see a full list of directories to which you have access, select **All Directo
To mark a directory as a favorite, select its star icon. Those directories will be listed in the **Favorites** section.
-To switch to a different directory, select the directory that you want to work in, then select the **Switch** button in its row.
+To switch to a different directory, find the directory that you want to work in, then select the **Switch** button in its row.
:::image type="content" source="media/set-preferences/settings-directories-subscriptions-default-filter.png" alt-text="Screenshot showing the Directories settings pane.":::
azure-video-indexer Video Indexer View Edit https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/video-indexer-view-edit.md
Title: View and edit Azure Video Indexer insights
-description: This article demonstrates how to view and edit Azure Video Indexer insights.
+ Title: View Azure Video Indexer insights
+description: This article demonstrates how to view Azure Video Indexer insights.
Previously updated : 06/07/2022 Last updated : 04/12/2023
-# View and edit Azure Video Indexer insights
+# View Azure Video Indexer insights
-This article shows you how to view and edit the Azure Video Indexer insights of a video.
+This article shows you how to view the Azure Video Indexer insights of a video.
1. Browse to the [Azure Video Indexer](https://www.videoindexer.ai/) website and sign in. 2. Find a video from which you want to create your Azure Video Indexer insights. For more information, see [Find exact moments within videos](video-indexer-search.md).
azure-vmware Deploy Azure Vmware Solution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/deploy-azure-vmware-solution.md
description: Learn how to use the information gathered in the planning stage to
Previously updated : 12/05/2022 Last updated : 4/12/2023
Last updated 12/05/2022
Once you've [planned your deployment](plan-private-cloud-deployment.md), you'll deploy and configure your Azure VMware Solution private cloud.
-The diagram shows the deployment workflow of Azure VMware Solution.
-- In this how-to, you'll: > [!div class="checklist"]
After you're finished, follow the recommended next steps at the end to continue
In the planning phase, you defined whether to use an *existing* or *new* ExpressRoute virtual network gateway. - >[!IMPORTANT] >[!INCLUDE [disk-pool-planning-note](includes/disk-pool-planning-note.md)]
azure-vmware Disaster Recovery Using Vmware Site Recovery Manager https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/disaster-recovery-using-vmware-site-recovery-manager.md
Title: Deploy disaster recovery with VMware Site Recovery Manager
description: Deploy disaster recovery with VMware Site Recovery Manager (SRM) in your Azure VMware Solution private cloud. Previously updated : 07/28/2022 Last updated : 4/12/2023 # Deploy disaster recovery with VMware Site Recovery Manager
After installing VMware SRM and vSphere Replication, you need to complete the co
1. Enter the remote site details, and then select **NEXT**. >[!NOTE]
- >An Azure VMware Solution private cloud operates with an embedded Platform Services Controller (PSC), so only one local vCenter can be selected. If the remote vCenter Server is using an embedded Platform Service Controller (PSC), use the vCenter Server's FQDN (or its IP address) and port to specify the PSC.
+ >An Azure VMware Solution private cloud operates with an embedded Platform Services Controller (PSC), so only one local vCenter Server can be selected. If the remote vCenter Server is using an embedded Platform Service Controller (PSC), use the vCenter Server's FQDN (or its IP address) and port to specify the PSC.
> >The remote user must have sufficient permissions to perform the pairings. An easy way to ensure this is to give that user the VRM administrator and SRM administrator roles in the remote vCenter Server. For a remote Azure VMware Solution private cloud, cloudadmin is configured with those roles.
While Microsoft aims to simplify VMware SRM and vSphere Replication installation
## Scale limitations
-To learn about the limits for the VMware Site Recovery Manager Add-On with the Azure VMware Soltuion, check the [Azure subscription and service limits, quotas, and constraints.](../azure-resource-manager/management/azure-subscription-service-limits.md#azure-vmware-solution-limits)
+To learn about the limits for the VMware Site Recovery Manager Add-On with the Azure VMware Solution, check the [Azure subscription and service limits, quotas, and constraints.](../azure-resource-manager/management/azure-subscription-service-limits.md#azure-vmware-solution-limits)
## SRM licenses
VMware and Microsoft support teams will engage each other as needed to troublesh
- [vSphere Replication administration](https://docs.vmware.com/en/vSphere-Replication/8.2/com.vmware.vsphere.replication-admin.doc/GUID-35C0A355-C57B-430B-876E-9D2E6BE4DDBA.html) - [Pre-requisites and Best Practices for SRM installation](https://docs.vmware.com/en/Site-Recovery-Manager/8.3/com.vmware.srm.install_config.doc/GUID-BB0C03E4-72BE-4C74-96C3-97AC6911B6B8.html) - [Network ports for SRM](https://docs.vmware.com/en/Site-Recovery-Manager/8.3/com.vmware.srm.install_config.doc/GUID-499D3C83-B8FD-4D4C-AE3D-19F518A13C98.html)-- [Network ports for vSphere Replication](https://kb.vmware.com/s/article/2087769)
+- [Network ports for vSphere Replication](https://kb.vmware.com/s/article/2087769)
azure-vmware Use Hcx Run Commands https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/use-hcx-run-commands.md
+
+ Title: Use HCX Run Commands
+description: Use HCX Run Commands in Azure VMware Solution
+++ Last updated : 04/11/2023++
+# Use HCX Run Commands
+In this article, you learn how to use HCX run commands. Use run commands to perform operations that would normally require elevated privileges through a collection of PowerShell cmdlets. This document outlines the available HCX run commands and how to use them.
+
+This article describes two HCX commands: **Restart HCX Manager** and **Scale HCX Manager**.
+
+## Restart HCX Manager
+
+This Command checks for active HCX migrations and replications. If none are found, it restarts the HCX cloud manager (HCX VM's guest OS).
+
+1. Navigate to the run Command panel in an Azure VMware private cloud on the Azure portal.
+
+ :::image type="content" source="media/hcx-commands/run-command-private-cloud.png" alt-text="Diagram that lists all available Run command packages and Run commands." border="false" lightbox="media/hcx-commands/run-command-private-cloud.png":::
+
+1. Select the **Microsoft.AVS.Management** package dropdown menu and select the **Restart-HcxManager** command.
+1. Set parameters and select **Run**.
+Optional run command parameters.
+
+ If the parameters are used incorrectly, they can halt active migrations, and replications and cause other issues. Brief description of each parameter with an example of when it should be used.
+
+ **Hard Reboot Parameter** - Restarts the virtual machine instead of the default of a GuestOS Reboot. This command is like pulling the power plug on a machine. We don't want to risk disk corruption so this should only be used if a normal reboot fails, and we have exhausted all other options.
+
+ **Force Parameter** - If there are ANY active HCX migrations/replications, this parameter avoids the check for active HCX migrations/replications. If the Virtual machine is in a powered off state, this parameter powers the machine on.
+
+ **Scenario 1**: A customer has a migration that has been stuck in an active state for weeks and they need a restart of HCX for a separate issue. Without this parameter, the script will fail due to the detection of the active migration.
+ **Scenario 2**: The HCX Manager is powered off and the customer would like to power it back on.
+
+ :::image type="content" source="media/hcx-commands/restart-command.png" alt-text="Diagram that shows run command parameters for Restart-HcxManager command." border="false" lightbox="media/hcx-commands/restart-command.png":::
+
+1. Wait for command to finish. It may take few minutes for the HCX appliance to come online.
+
+## Scale HCX manager
+Use the Scale HCX manager run command to increase the resource allocation of your HCX Manager virtual machine to 8 vCPUs and 24-GB RAM from the default setting of 4 vCPUs and 12-GB RAM, ensuring scalability.
+
+**Scenario**: Mobility Optimize Networking (MON) requires HCX Scalability. For more details on [MON scaling](https://kb.vmware.com/s/article/88401)ΓÇ»
+
+>[!NOTE]
+> HCX cloud manager will be rebooted during this operation, and this may affect any ongoing migration processes.
+
+1. Navigate to the run Command panel on in an AVS private cloud on the Azure portal.
+
+1. Select the **Microsoft.AVS.Management** package dropdown menu and select the ``Set-HcxScaledCpuAndMemorySetting`` command.
+
+ :::image type="content" source="media/hcx-commands/set-hcx-scale.png" alt-text="Diagram that shows run command parameters for Set-HcxScaledCpuAndMemorySetting command." border="false" lightbox="media/hcx-commands/set-hcx-scale.png":::
+
+1. Agree to restart HCX by toggling ``AgreeToRestartHCX`` to **True**.
+ You must acknowledge that the virtual machine will be restarted.
+
+
+ >[!NOTE]
+ > If this required parameter is set to false that cmdlet execution will fail.
+
+1. Select **Run** to execute.
+ This process may take between 10-15 minutes.
+
+ >[!NOTE]
+ > HCX cloud manager will be unavailable during the scaling.
+
+ ## Next step
+To learn more about run commands, see [Run commands](concepts-run-command.md)
azure-web-pubsub Howto Develop Create Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/howto-develop-create-instance.md
Title: Quickstart - Create a Web PubSub instance from Azure portal
-description: Quickstart showing how to create a Web PubSub instance from Azure portal
--
+ Title: Create an Azure Web PubSub resource
+
+description: Quickstart showing how to create a Web PubSub resource from Azure portal, using Azure CLI and a Bicep template
++ Previously updated : 11/08/2021 Last updated : 03/13/2023
+zone_pivot_groups: azure-web-pubsub-create-resource-methods
+# Create a Web PubSub resource
-# Quickstart: Create a Web PubSub instance from Azure portal
+## Prerequisites
+> [!div class="checklist"]
+> * An Azure account with an active subscription. [Create a free Azure account](https://azure.microsoft.com/free/), if don't have one already.
-This quickstart shows you how to create Azure Web PubSub instance from Azure portal.
+> [!TIP]
+> Web PubSub includes a generous **free tier** that can be used for testing and production purposes.
+
+
+## Create a resource from Azure portal
+1. Select the New button found on the upper left-hand corner of the Azure portal. In the New screen, type **Web PubSub** in the search box and press enter.
+ :::image type="content" source="./media/create-instance-portal/search-web-pubsub-in-portal.png" alt-text="Screenshot of searching the Azure Web PubSub in portal.":::
-## Try the newly created instance
+2. Select **Web PubSub** from the search results, then select **Create**.
-> [!div class="nextstepaction"]
-> [Try the instance from the browser](./quickstart-live-demo.md#try-the-instance-with-an-online-demo)
+3. Enter the following settings.
-> [!div class="nextstepaction"]
-> [Try the instance with Azure CLI](./quickstart-cli-try.md#play-with-the-instance)
+ | Setting | Suggested value | Description |
+ | | - | -- |
+ | **Resource name** | Globally unique name | The globally unique Name that identifies your new Web PubSub service instance. Valid characters are `a-z`, `A-Z`, `0-9`, and `-`. |
+ | **Subscription** | Your subscription | The Azure subscription under which this new Web PubSub service instance is created. |
+ | **[Resource Group]** | myResourceGroup | Name for the new resource group in which to create your Web PubSub service instance. |
+ | **Location** | West US | Choose a [region](https://azure.microsoft.com/regions/) near you. |
+ | **Pricing tier** | Free | You can first try Azure Web PubSub service for free. Learn more details about [Azure Web PubSub service pricing tiers](https://azure.microsoft.com/pricing/details/web-pubsub/) |
+ | **Unit count** | - | Unit count specifies how many connections your Web PubSub service instance can accept. Each unit supports 1,000 concurrent connections at most. It is only configurable in the Standard tier. |
+
+ :::image type="content" source="./media/howto-develop-create-instance/create-web-pubsub-instance-in-portal.png" alt-text="Screenshot of creating the Azure Web PubSub instance in portal.":::
+
+4. Select **Create** to provision your Web PubSub resource.
++
+## Create a resource using Azure CLI
+
+The [Azure CLI](/cli/azure) is a set of commands used to create and manage Azure resources. The Azure CLI is available across Azure services and is designed to get you working quickly with Azure, with an emphasis on automation.
+
+> [!IMPORTANT]
+> This quickstart requires Azure CLI of version 2.22.0 or higher.
+
+## Create a resource group
++
+## Create a resource
+++
+## Create a resource using Bicep template
++
+## Review the Bicep file
+
+The template used in this quickstart is from [Azure Quickstart Templates](/samples/azure/azure-quickstart-templates/azure-web-pubsub/).
++
+## Deploy the Bicep file
-## Next steps
+1. Save the Bicep file as **main.bicep** to your local computer.
+1. Deploy the Bicep file using either Azure CLI or Azure PowerShell.
-In real-world applications, you can use SDKs in various languages build your own application. We also provide Function extensions for you to build serverless applications easily.
+ # [CLI](#tab/CLI)
+ ```azurecli
+ az group create --name exampleRG --location eastus
+ az deployment group create --resource-group exampleRG --template-file main.bicep
+ ```
+
+ # [PowerShell](#tab/PowerShell)
+
+ ```azurepowershell
+ New-AzResourceGroup -Name exampleRG -Location eastus
+ New-AzResourceGroupDeployment -ResourceGroupName exampleRG -TemplateFile ./main.bicep
+ ```
+
+
+
+ When the deployment finishes, you should see a message indicating the deployment succeeded.
+
+## Review deployed resources
+
+Use the Azure portal, Azure CLI, or Azure PowerShell to list the deployed resources in the resource group.
+
+# [CLI](#tab/CLI)
+
+```azurecli-interactive
+az resource list --resource-group exampleRG
+```
+
+# [PowerShell](#tab/PowerShell)
+
+```azurepowershell-interactive
+Get-AzResource -ResourceGroupName exampleRG
+```
++
+## Clean up resources
+
+When no longer needed, use the Azure portal, Azure CLI, or Azure PowerShell to delete the resource group and its resources.
+
+# [CLI](#tab/CLI)
+
+```azurecli-interactive
+az group delete --name exampleRG
+```
+
+# [PowerShell](#tab/PowerShell)
+
+```azurepowershell-interactive
+Remove-AzResourceGroup -Name exampleRG
+```
+
+## Next step
+Now that you have created a resource, you are ready to put it to use.
+Next, you will learn how to subscribe and publish messages among your clients.
+> [!div class="nextstepaction"]
+> [PubSub among clients](quickstarts-pubsub-among-clients.md)
azure-web-pubsub Quickstarts Event Notifications From Clients https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/quickstarts-event-notifications-from-clients.md
+
+ Title: Azure Web PubSub event notifications from clients
+
+description: A quickstarts guide that shows how to handle system and client events on an upstream application server
++++ Last updated : 04/12/2023++
+# Event notifications from clients
+In the first three articles of "quickstarts", we learned two useful communication patterns using Web PubSub for real-time messaging at scale ***(million+)***.
+- **Pub/Sub** among clients that free your application server from the complexity of managing persistent connections
+- **Push messages** to clients from your application server as soon as new data is available
+
+In this quickstart guide, we learn about the event system of Web PubSub so that your application server can react to events like when
+> [!div class="checklist"]
+> * a client is `connected`
+> * a client sends a `message`, which requires further processing
++
+## Prerequisites
+- A Web PubSub resource. If you haven't created one, you can follow the guidance: [Create a Web PubSub resource](./howto-develop-create-instance.md)
+- A code editor, such as Visual Studio Code
+- Install the dependencies for the language you plan to use
+
+# [JavaScript](#tab/javascript)
+[Node.js 12.x or above](https://nodejs.org)
+++
+## Create the application
+Web PubSub is a standalone service to your application server. While your application retains its role as a traditional HTTP server, Web PubSub takes care of the real-time message passing between your application server and the clients. We first create the client program and then the server program.
+
+### Create the client
+# [JavaScript](#tab/javascript)
+#### 1. Create a directory for the client app
+```bash
+mkdir eventHandlerDemo
+cd eventHandlerDemo
+
+# The SDK is available as an NPM module.
+npm install @azure/web-pubsub-client
+```
+
+#### 2. Connect to Web PubSub
+A client, be it a browser, a mobile app, or an IoT device, uses a **Client Access URL** to connect and authenticate with your resource.
+This URL follows a pattern of `wss://<service_name>.webpubsub.azure.com/client/hubs/<hub_name>?access_token=<token>`.
+A client can have a few ways to obtain the Client Access URL. For this quick start, you can copy and paste one from Azure portal shown in the following diagram.
+
+![The diagram shows how to get **Client Access Url**.](./media/quickstarts-event-notifications-from-clients/generate-client-url-no-group.png)
+
+Create a file with name `client.js` and add the following code
+
+```javascript
+import { WebPubSubClient } from "@azure/web-pubsub-client";
+
+// Instantiates the client object
+// <client-access-url> is copied from Azure portal mentioned above.
+const client = new WebPubSubClient("<client-access-url>");
+
+// Registers a handler to the "connected" event
+client.on("connected", (e) => {
+ console.log(`Connection ${e.connectionId} is connected.`);
+});
+
+// You must invoke start() on the client object
+// to establish connection with your Web PubSub resource
+client.start();
+```
++
+### Create the application server
+
+# [JavaScript](#tab/javascript)
+
+#### 1. Install express.js and the Web PubSub server SDK
+
+```bash
+npm init -y
+npm install --save express
+
+# Installs the middleware from Web PubSub. This middleware will set up an endpoint for you.
+npm install --save @azure/web-pubsub-express
+```
+#### 2. Create a new file named "server.js" that sets up an empty express app
+
+```javascript
+import express from "express";
+
+const app = express();
+
+app.listen(8080, () => console.log('Server started, listening on port 8080'));
+```
+
+#### 3. Handle events
+
+With Web PubSub, when certain activities happen on the client side *(for example, when a client is `connected` to or `disconnected` from your Web PubSub resource)*, your application server can set up handlers to react to these events.
+
+##### Here are two notable use cases:
+- when a client is connected, you can broadcast this status to all connected clients
+- when a client sends a message to your Web PubSub resource, you can persist the message in a database of your choice
+
+```javascript
+import express from "express";
+import { WebPubSubEventHandler } from "@azure/web-pubsub-express";
+
+const app = express();
+
+const HUB_NAME = "myHub1";
+
+let handler = new WebPubSubEventHandler(HUB_NAME, {
+ path: '/eventhandler', // Exposes an endpoint
+ onConnected: async (req) => {
+ console.log(`${req.context.userId} connected`);
+ },
+});
+
+// Registers the middleware with the express app
+app.use(handler.getMiddleware());
+
+app.listen(8080, () => console.log('Server started, listening on port 8080'));
+```
+
+As configured in the code above, when a client connects to your Web PubSub resource, Web PubSub invokes the Webhook served by your application server at the path `/eventhandler`. Here, we simply print the `userId` to the console when a user is connected.
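+
+To implement the first use case listed above (broadcasting a client's connected status), you can combine this event handler with the Web PubSub server SDK. The following is only a sketch, not part of this quickstart; it assumes you also install the `@azure/web-pubsub` server package (`npm install @azure/web-pubsub`) and store your resource's connection string in a `WebPubSubConnectionString` environment variable.
+
+```javascript
+import express from "express";
+import { WebPubSubEventHandler } from "@azure/web-pubsub-express";
+import { WebPubSubServiceClient } from "@azure/web-pubsub";
+
+const app = express();
+const HUB_NAME = "myHub1";
+
+// Server SDK client used to push messages to connected clients.
+const serviceClient = new WebPubSubServiceClient(
+  process.env.WebPubSubConnectionString,
+  HUB_NAME
+);
+
+const handler = new WebPubSubEventHandler(HUB_NAME, {
+  path: "/eventhandler",
+  onConnected: async (req) => {
+    // Broadcast the new connection to every client in the hub.
+    await serviceClient.sendToAll(`${req.context.userId} joined`, {
+      contentType: "text/plain",
+    });
+  },
+});
+
+app.use(handler.getMiddleware());
+app.listen(8080, () => console.log("Server started, listening on port 8080"));
+```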
+++
+## Expose localhost
+Run the program; it listens on `localhost` at port `8080`. Because the express app runs locally, it can't be reached from the internet, so Web PubSub can't invoke the Webhook served at the path `/eventhandler`.
+
+We need to expose localhost so that it's reachable from the internet. Several tools are available for this purpose.
+> [!div class="checklist"]
+> * [ngrok](https://ngrok.com)
+> * [TunnelRelay](https://github.com/OfficeDev/microsoft-teams-tunnelrelay)
+
+# [ngrok](#tab/ngrok)
+
+#### 1. Download and install ngrok
+You can download ngrok from https://ngrok.com/download
+
+#### 2. Start ngrok and expose port 8080
+```bash
+ngrok http 8080
+```
+#### 3. Make note of the generated URL
+"ngrok" outputs a URL like this `https://<domain-name>.ngrok.io`. Now your port `8080` is accessible on the internet.
+++
+## Set event handler on your Web PubSub resource
+Now, we need to let your Web PubSub resource know about this Webhook URL. You can set the event handlers either from Azure portal or Azure CLI.
+
+# [Azure portal](#tab/portal)
+1. Select **"Settings"** from the menu and select **"Add"**
+
+1. Enter a hub name. For our purposes, enter "**myHub1**" and select "**Add**"
+
+1. In the event handler page, configure the fields with the same values used in the Azure CLI tab: set the URL template to your ngrok URL with the `/eventhandler` path, set the user event pattern to `*`, and select the `connected` system event
+
+1. Save configuration
+
+# [Azure CLI](#tab/cli)
+> [!Important]
+> Replace &lt;**your-resource-group-name**&gt; with name of the actual resource group that contains your Web PubSub resource. Replace &lt;**your-unique-resource-name**&gt; with the actual name of your Web PubSub resource. Replace &lt;**domain-name**&gt; with the name ngrok outputted.
+
+```azurecli-interactive
+az webpubsub hub update --group "<your-resource-group-name>" --name "<your-unique-resource-name>" --hub-name "myHub1" --event-handler url-template="https://<domain-name>.ngrok.io/eventHandler" user-event-pattern="*" system-event="connected"
+```
++
+## Run the programs
+# [JavaScript](#tab/javascript)
+#### Start the application server
+> [!Important]
+> Make sure your localhost is exposed to the internet.
+
+```bash
+node server.js
+```
+
+#### Start the client program
+```bash
+node client.js
+```
+
+#### Observe the result
+You should see the `userId` printed to the console.
+++
+## Handle message event
+Besides system events like `connect`, `connected`, `disconnected`, a client can also send **custom** events.
+
+#### Modify the client program
+Stop your client program and add the following code to `client.js`
+```javascript
+// ...code from before
+
+client.start();
+
+// The name of the event is message and the content is in text format.
+client.sendEvent("message", "sending custom event!", "text");
+```
+
+#### Modify the server program
+Stop your server program and update the event handler code in `server.js` as follows
+
+```javascript
+// ... code from before
+
+let handler = new WebPubSubEventHandler(HUB_NAME, {
+ path: "/eventhandler",
+ onConnected: async (req) => {
+ console.log(`"${req.context.userId}" is connected.`);
+ },
+ // This handler function handles custom events sent by clients
+ handleUserEvent: async (req, res) => {
+ if (req.context.eventName === "message") {
+ console.log(`Received message: ${req.data}`);
+ // Additional logic to process the data,
+ // e.g., save the message content to a database
+ // or broadcast the message to selected clients.
+ }
+ },
+});
+
+//... code from before
+```
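+
+Optionally, a user event handler can also send a response back to the client that raised the event. The following is a hedged sketch rather than part of this quickstart; it assumes the `res.success()` helper exposed by `@azure/web-pubsub-express` is available in your SDK version.
+
+```javascript
+// ... code from before
+
+let handler = new WebPubSubEventHandler(HUB_NAME, {
+  path: "/eventhandler",
+  handleUserEvent: async (req, res) => {
+    if (req.context.eventName === "message") {
+      console.log(`Received message: ${req.data}`);
+    }
+    // Acknowledge the event back to the sending client.
+    res.success("message received", "text");
+  },
+});
+
+// ... code from before
+```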
+#### Start the client program and server program again
+You should see both the `userId` and the `Received message: sending custom event!` printed to the console.
+
+## Summary
+This tutorial provides you with a basic idea of how the event system works in Web PubSub. In real-world applications, the event system can help you implement more logic to process system and user-generated events.
+
+## Next steps
+> [!div class="nextstepaction"]
+> [Explore code samples by language](https://aka.ms/awps/samples)
+> [!div class="nextstepaction"]
+> [Have fun with playable demos](https://azure.github.io/azure-webpubsub/)
azure-web-pubsub Quickstarts Pubsub Among Clients https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/quickstarts-pubsub-among-clients.md
+
+ Title: PubSub among clients
+
+description: A quickstarts guide that shows how to subscribe to messages in a group and send messages to a group without the involvement of a typical application server
++++ Last updated : 04/12/2023+
+ms.devlang: azurecli
+
+# Publish/subscribe among clients
+
+This quickstart guide demonstrates how to
+> [!div class="checklist"]
+> * **connect** to your Web PubSub resource
+> * **subscribe** to messages from groups
+> * **publish** messages to groups
+
+## Prerequisites
+- A Web PubSub resource. If you haven't created one, you can follow the guidance: [Create a Web PubSub resource](./howto-develop-create-instance.md)
+- A code editor, such as Visual Studio Code
+- Install the dependencies for the language you plan to use
+
+## Install the client SDK
+
+> [!NOTE]
+> This guide uses the client SDK provided by Web PubSub service, which is still in preview. The interface may change in later versions.
+
+# [JavaScript](#tab/javascript)
+
+```bash
+mkdir pubsub_among_clients
+cd pubsub_among_clients
+
+# The SDK is available as an NPM module.
+npm install @azure/web-pubsub-client
+```
+
+# [C#](#tab/csharp)
+
+```bash
+mkdir pubsub_among_clients
+cd pubsub_among_clients
+
+# Create a new .net console project
+dotnet new console
+
+# Install the client SDK, which is available as a NuGet package
+dotnet add package Azure.Messaging.WebPubSub.Client --prerelease
+```
++
+## Connect to Web PubSub
+
+A client, be it a browser 💻, a mobile app 📱, or an IoT device 💡, uses a **Client Access URL** to connect and authenticate with your resource. This URL follows a pattern of `wss://<service_name>.webpubsub.azure.com/client/hubs/<hub_name>?access_token=<token>`. A client can obtain the Client Access URL in a few ways. For this quickstart, you can copy and paste one from the Azure portal, as shown in the following diagram.
+
+![The diagram shows how to get client access url.](./media/howto-websocket-connect/generate-client-url.png)
+
+As shown in the diagram above, the client has permission to send messages to and to join a specific group named `group1`.
+
+# [JavaScript](#tab/javascript)
+
+Create a file named `index.js` and add the following code
+
+```javascript
+import { WebPubSubClient } from "@azure/web-pubsub-client";
+
+// Instantiate the client object.
+// <client-access-url> is copied from Azure portal mentioned above.
+const client = new WebPubSubClient("<client-access-url>");
+```
+
+# [C#](#tab/csharp)
+
+Edit the `Program.cs` file and add the following code
+
+```csharp
+using Azure.Messaging.WebPubSub.Clients;
+
+// Instantiate the client object.
+// <client-access-uri> is copied from Azure portal mentioned above.
+var client = new WebPubSubClient(new Uri("<client-access-uri>"));
+```
++
+## Subscribe to a group
+
+To receive messages from groups, the client:
+- must join the group it wishes to receive messages from
+- must register a callback to handle the `group-message` event
+
+The following code shows how a client subscribes to messages from a group named `group1`.
+
+# [JavaScript](#tab/javascript)
+
+```javascript
+// ...code from the last step
+
+// Provide callback to the "group-message" event.
+client.on("group-message", (e) => {
+ console.log(`Received message: ${e.message.data}`);
+});
+
+// Before joining group, you must invoke start() on the client object.
+client.start();
+
+// Join a group named "group1" to subscribe message from this group.
+// Note that this client has the permission to join "group1",
+// which was configured on Azure portal in the step of generating "Client Access URL".
+client.joinGroup("group1");
+```
+
+# [C#](#tab/csharp)
+
+```csharp
+// ...code from the last step
+
+// Provide callback to group messages.
+client.GroupMessageReceived += eventArgs =>
+{
+ Console.WriteLine($"Receive group message from {eventArgs.Message.Group}: {eventArgs.Message.Data}");
+ return Task.CompletedTask;
+};
+
+// Before joining group, you must invoke start() on the client object.
+await client.StartAsync();
+
+// Join a group named "group1" to subscribe message from this group.
+// Note that this client has the permission to join "group1",
+// which was configured on Azure portal in the step of generating "Client Access URL".
+await client.JoinGroupAsync("group1");
+```
++
+## Publish a message to a group
+In the previous step, we set up everything needed to receive messages from `group1`. Now we send messages to that group.
+
+# [JavaScript](#tab/javascript)
+
+```javascript
+// ...code from the last step
+
+// Send message "Hello World" in the "text" format to "group1".
+client.sendToGroup("group1", "Hello World", "text");
+```
+
+# [C#](#tab/csharp)
+
+```csharp
+// ...code from the last step
+
+// Send message "Hello World" in the "text" format to "group1".
+await client.SendToGroupAsync("group1", BinaryData.FromString("Hello World"), WebPubSubDataType.Text);
+```
++
+## Next steps
+By using the client SDK, you now know how to
+> [!div class="checklist"]
+> * **connect** to your Web PubSub resource
+> * **subscribe** to group messages
+> * **publish** messages to groups
+
+Next, you learn how to **push messages in real-time** from an application server to your clients.
+> [!div class="nextstepaction"]
+> [Push message from application server](quickstarts-push-messages-from-server.md)
azure-web-pubsub Quickstarts Push Messages From Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/quickstarts-push-messages-from-server.md
+
+ Title: Azure Web PubSub push messages from server
+
+description: A quickstarts guide that shows how to proactively push messages from an upstream application server to connected clients
++++ Last updated : 04/12/2023++
+# Push messages from server
+
+The familiar HTTP request/response model was designed to be easy to work with and scalable. However, nowadays end users demand a lot more from the web than it was originally designed for. The HTTP protocol requires users to **initiate** the request in order to receive a response. But developers need a way to send data from server to clients without them asking for it; in other words, they need to **"push"** data to clients, like pushing the latest bidding price of a product on an auction site or fast-moving stock prices in a financial application.
++
+This quickstart guide demonstrates how to
+> [!div class="checklist"]
+> * **subscribe** to messages from an application server
+> * **push data** from an application server to **all** connected clients
+
+## Prerequisites
+
+- A Web PubSub resource. If you haven't created one, you can follow the guidance: [Create a Web PubSub resource](./howto-develop-create-instance.md)
+- A code editor, such as Visual Studio Code
+- Install the dependencies for the language you plan to use
++
+# [JavaScript](#tab/javascript)
+
+[Node.js](https://nodejs.org)
+
+# [C#](#tab/csharp)
+
+[.NET Core](https://dotnet.microsoft.com/download)
+
+# [Python](#tab/python)
+
+[Python](https://www.python.org/)
+
+# [Java](#tab/java)
+
+* [Java Development Kit (JDK)](/java/openjdk/install/).
+* [Apache Maven](https://maven.apache.org/download.cgi)
++
+## Create a subscriber client
+
+To subscribe to messages **pushed** from your application server, a client, be it a browser, a mobile app, or an IoT device, needs to first connect to your Web PubSub resource and listen for the appropriate message event.
+
+# [JavaScript](#tab/javascript)
+#### Create a project directory named `subscriber` and install required dependencies
+
+```bash
+mkdir subscriber
+cd subscriber
+npm init -y
+
+# The client SDK is available as a module on NPM
+npm install @azure/web-pubsub-client
+```
+
+#### Connect to your Web PubSub resource and register a listener for the `server-message` event
+A client uses a ***Client Access URL*** to connect and authenticate with your resource.
+This URL follows a pattern of `wss://<service_name>.webpubsub.azure.com/client/hubs/<hub_name>?access_token=<token>`. A client can obtain the Client Access URL in a few ways. For this quickstart, you can copy and paste one from the Azure portal, as shown in the following diagram.
+
+![The diagram shows how to get client access url.](./media/quickstarts-push-messages-from-server/push-messages-from-server.png)
+
+As shown in the diagram above, the client joins the hub named `myHub1`.
+
+In the `subscriber` project folder, create a file named `subscribe.js` with the following code
+
+```javascript
+import { WebPubSubClient } from "@azure/web-pubsub-client";
+
+// Instantiates the client object
+// <client-access-url> is copied from Azure portal mentioned above
+const client = new WebPubSubClient("<client-access-url>")
+
+// Registers a handler for the "server-message" event
+client.on("server-message", (e) => {
+  console.log(`Received message ${e.message.data}`);
+});
+
+// Before a client can receive a message,
+// you must invoke start() on the client object.
+await client.start();
+```
+
+#### Run the program
+
+```bash
+node subscribe.js
+```
+Now this client establishes a connection with your Web PubSub resource and is ready to receive messages pushed from your application server.
++
+# [C#](#tab/csharp)
+#### Create a project directory named `subscriber` and install required dependencies
+
+```bash
+mkdir subscriber
+cd subscriber
+
+# Create a .net console app
+dotnet new console
+
+# Add the client SDK
+dotnet add package Azure.Messaging.WebPubSub.Client --prerelease
+```
+
+#### Replace the code in the `Program.cs` with the following code
+
+```csharp
+using Azure.Messaging.WebPubSub.Clients;
+
+// Instantiates the client object
+// <client-access-uri> is copied from Azure portal mentioned above
+var client = new WebPubSubClient(new Uri("<client-access-uri>"));
+
+client.ServerMessageReceived += eventArgs =>
+{
+ Console.WriteLine($"Receive message: {eventArgs.Message.Data}");
+ return Task.CompletedTask;
+};
+
+await client.StartAsync();
+```
+
+#### Run the following command
+```bash
+dotnet run "myHub1"
+```
+Now this client establishes a connection with your Web PubSub resource and is ready to receive messages pushed from your application server.
+
+# [Python](#tab/python)
+
+#### Create a project directory named `subscriber` and install required dependencies:
+
+```bash
+mkdir subscriber
+cd subscriber
+
+# Create venv
+python -m venv env
+# Activate venv
+source ./env/bin/activate
+
+pip install azure-messaging-webpubsubservice
+pip install websockets
+```
+
+#### Use the WebSocket API to connect to your Web PubSub resource. Create a `subscribe.py` file with the following code
+
+```python
+import asyncio
+import sys
+import websockets
+
+from azure.messaging.webpubsubservice import WebPubSubServiceClient
++
+async def connect(url):
+ async with websockets.connect(url) as ws:
+ print('connected')
+ while True:
+ print('Received message: ' + await ws.recv())
+
+if __name__ == '__main__':
+
+ if len(sys.argv) != 3:
+ print('Usage: python subscribe.py <connection-string> <hub-name>')
+ exit(1)
+
+ connection_string = sys.argv[1]
+ hub_name = sys.argv[2]
+
+ service = WebPubSubServiceClient.from_connection_string(connection_string, hub=hub_name)
+ token = service.get_client_access_token()
+
+ try:
+ asyncio.get_event_loop().run_until_complete(connect(token['url']))
+ except KeyboardInterrupt:
+ pass
+
+```
+
+The code creates a WebSocket connection that is connected to a hub in Web PubSub. A hub is a logical unit in Web PubSub where you can publish messages to a group of clients. [Key concepts](./key-concepts.md) contains the detailed explanation about the terms used in Web PubSub.
+
+The Web PubSub service uses [JSON Web Token (JWT)](../active-directory/develop/security-tokens.md#json-web-tokens-and-claims) authentication. The sample code uses `WebPubSubServiceClient.get_client_access_token()` in the Web PubSub SDK to generate a URL to the service that contains a valid access token.
+
+After the connection is established, your client will receive messages through the WebSocket connection. Use `await ws.recv()` to listen for incoming messages.
+
+#### Run the following command
+
+```bash
+python subscribe.py $connection_string "myHub1"
+```
+
+# [Java](#tab/java)
+
+#### Create a project directory named `pubsub`
+
+```cmd
+mkdir pubsub
+cd pubsub
+```
+
+#### Use Maven to create a new console app called `webpubsub-quickstart-subscriber`
+
+```console
+mvn archetype:generate --define interactiveMode=n --define groupId=com.webpubsub.quickstart --define artifactId=webpubsub-quickstart-subscriber --define archetypeArtifactId=maven-archetype-quickstart --define archetypeVersion=1.4
+
+cd webpubsub-quickstart-subscriber
+```
+
+#### Add WebSocket and Azure Web PubSub SDK to the `dependencies` node in `pom.xml`:
+
+* `azure-messaging-webpubsub`: Web PubSub service SDK for Java
+* `Java-WebSocket`: WebSocket client SDK for Java
+
+```xml
+<dependency>
+ <groupId>com.azure</groupId>
+ <artifactId>azure-messaging-webpubsub</artifactId>
+ <version>1.0.0</version>
+</dependency>
+<dependency>
+ <groupId>org.java-websocket</groupId>
+ <artifactId>Java-WebSocket</artifactId>
+ <version>1.5.1</version>
+</dependency>
+```
+In Web PubSub, you can connect to the service and subscribe to messages through WebSocket connections. WebSocket is a full-duplex communication channel allowing the service to push messages to your client in real time. You can use any API or library that supports WebSocket. For this sample, we use package [Java-WebSocket](https://github.com/TooTallNate/Java-WebSocket).
+
+1. Go to the */src/main/java/com/webpubsub/quickstart* directory.
+1. Replace the contents of the *App.java* file with the following code:
+
+ ```java
+ package com.webpubsub.quickstart;
+
+ import com.azure.messaging.webpubsub.*;
+ import com.azure.messaging.webpubsub.models.*;
+
+ import org.java_websocket.client.WebSocketClient;
+ import org.java_websocket.handshake.ServerHandshake;
+
+ import java.io.IOException;
+ import java.net.URI;
+ import java.net.URISyntaxException;
+
+ /**
+ * Connect to Azure Web PubSub service using WebSocket protocol
+ */
+ public class App
+ {
+ public static void main( String[] args ) throws IOException, URISyntaxException
+ {
+ if (args.length != 2) {
+ System.out.println("Expecting 2 arguments: <connection-string> <hub-name>");
+ return;
+ }
+
+ WebPubSubServiceClient service = new WebPubSubServiceClientBuilder()
+ .connectionString(args[0])
+ .hub(args[1])
+ .buildClient();
+
+ WebPubSubClientAccessToken token = service.getClientAccessToken(new GetClientAccessTokenOptions());
+
+ WebSocketClient webSocketClient = new WebSocketClient(new URI(token.getUrl())) {
+ @Override
+ public void onMessage(String message) {
+ System.out.println(String.format("Message received: %s", message));
+ }
+
+ @Override
+ public void onClose(int arg0, String arg1, boolean arg2) {
+ // TODO Auto-generated method stub
+ }
+
+ @Override
+ public void onError(Exception arg0) {
+ // TODO Auto-generated method stub
+ }
+
+ @Override
+ public void onOpen(ServerHandshake arg0) {
+ // TODO Auto-generated method stub
+ }
+
+ };
+
+ webSocketClient.connect();
+ System.in.read();
+ }
+ }
+
+ ```
+
+ This code creates a WebSocket connection that is connected to a hub in Azure Web PubSub. A hub is a logical unit in Azure Web PubSub where you can publish messages to a group of clients. [Key concepts](./key-concepts.md) contains the detailed explanation about the terms used in Azure Web PubSub.
+
+ The Web PubSub service uses [JSON Web Token (JWT)](../active-directory/develop/security-tokens.md#json-web-tokens-and-claims) authentication. The sample code uses `WebPubSubServiceClient.getClientAccessToken()` in the Web PubSub SDK to generate a URL to the service that contains a valid access token.
+
+ After connection is established, your client will receive messages through the WebSocket connection. Use `onMessage(String message)` to listen for incoming messages.
+
+#### Run the app with the following command
+
+```console
+mvn compile & mvn package & mvn exec:java -Dexec.mainClass="com.webpubsub.quickstart.App" -Dexec.cleanupDaemonThreads=false -Dexec.args="$connection_string 'myHub1'"
+```
+++
+## Push messages from your application server
+Now that you have a client connected to your Web PubSub resource, you can push messages from an application server at any time by using the server SDK provided by Web PubSub.
+
+# [JavaScript](#tab/javascript)
+#### Create a ***new*** project directory named `publisher` and install required dependencies
+
+```bash
+mkdir publisher
+cd publisher
+
+npm init
+
+# This command installs the server SDK from NPM,
+# which is different from the client SDK you used in subscribe.js
+npm install --save @azure/web-pubsub
+```
+
+#### Create a `publish.js` file with the following code
+
+```javascript
+const { WebPubSubServiceClient } = require('@azure/web-pubsub');
+
+// This is the hub name we used on Azure portal when generating the Client Access URL.
+// It ensures this server can push messages to clients in the hub named "myHub1".
+const hub = "myHub1";
+
+let server = new WebPubSubServiceClient(process.env.WebPubSubConnectionString, hub);
+
+// By default, the content type is `application/json`.
+// Specify contentType as `text/plain` for this demo.
+server.sendToAll(process.argv[2], { contentType: "text/plain" });
+```
+
+The `server.sendToAll()` call sends a message to all connected clients in the hub.
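+
+`sendToAll()` is the broadest target. The same server SDK also exposes narrower sends; the following optional sketch (method names assumed from the `@azure/web-pubsub` package, and `user-a` is a hypothetical user ID) shows how `publish.js` could target a single group or user instead.
+
+```javascript
+// ...continuing from publish.js above
+
+// Push only to clients that joined the group "group1".
+server.group("group1").sendToAll("Hello group1", { contentType: "text/plain" });
+
+// Push only to clients connected with the user ID "user-a" (hypothetical).
+server.sendToUser("user-a", "Hello user-a", { contentType: "text/plain" });
+```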
+
+#### Get the connection string
+> [!Important]
+> A connection string includes authorization information required for your application to access Web PubSub service. The access key inside the connection string is similar to a root password for your service.
+
+For this quickstart guide, we'll get it from Azure portal as shown below.
+![A diagram shows how to get the connection string.](./media/quickstarts-push-messages-from-server/get-connection-string.png)
+
+#### Run the server program
+Run the following commands in a ***new*** command shell.
+
+```bash
+# Set the environment variable for your connection string.
+export WebPubSubConnectionString="<Put your connection string here>"
+
+node publish.js "Hello World"
+```
+
+#### Observe the received messages on the client side
++
+Try running the same "subscribe" program in multiple command shells to stimulate more than clients. As soon as the "publish" program is run, you should see messages being delivered in real-time to all these clients.
++
+# [C#](#tab/csharp)
+
+#### Create a project directory named `publisher` and install required dependencies:
+
+```bash
+mkdir publisher
+cd publisher
+dotnet new console
+dotnet add package Azure.Messaging.WebPubSub
+```
+
+#### Replace the `Program.cs` file with the following code
+
+```csharp
+using System;
+using System.Threading.Tasks;
+using Azure.Messaging.WebPubSub;
+
+namespace publisher
+{
+ class Program
+ {
+ static async Task Main(string[] args)
+ {
+ if (args.Length != 3) {
+ Console.WriteLine("Usage: publisher <connectionString> <hub> <message>");
+ return;
+ }
+ var connectionString = args[0];
+ var hub = args[1];
+ var message = args[2];
+
+ // Create the service client by using the connection string and hub name
+ var serviceClient = new WebPubSubServiceClient(connectionString, hub);
+ await serviceClient.SendToAllAsync(message);
+ }
+ }
+}
+
+```
+The `SendToAllAsync()` call sends a message to all connected clients in the hub.
+
+#### Run the server program to push messages to all connected clients
+
+```bash
+dotnet run $connection_string "myHub1" "Hello World"
+```
+
+#### Observe the received messages on the client side
+
+```text
+# On the command shell used for running the "subscribe" program, you should see the received message logged there.
+# Try running the same "subscribe" program in multiple command shells, which simulates more than one client.
+# Try running the "publish" program several times and you see messages being delivered in real-time to all these clients.
+Message received: Hello World
+```
+
+# [Python](#tab/python)
+
+1. First, create a project directory named `publisher` and install required dependencies:
+
+ ```bash
+ mkdir publisher
+ cd publisher
+ # Create venv
+ python -m venv env
+ # Activate venv
+ source ./env/bin/activate
+
+ pip install azure-messaging-webpubsubservice
+
+ ```
+
+1. Use the Azure Web PubSub SDK to publish a message to the service. Create a `publish.py` file with the below code:
+
+ ```python
+ import sys
+ from azure.messaging.webpubsubservice import WebPubSubServiceClient
+
+ if __name__ == '__main__':
+
+ if len(sys.argv) != 4:
+ print('Usage: python publish.py <connection-string> <hub-name> <message>')
+ exit(1)
+
+ connection_string = sys.argv[1]
+ hub_name = sys.argv[2]
+ message = sys.argv[3]
+
+ service = WebPubSubServiceClient.from_connection_string(connection_string, hub=hub_name)
+ res = service.send_to_all(message, content_type='text/plain')
+ print(res)
+ ```
+
+ The `send_to_all()` call sends the message to all connected clients in a hub.
+
+1. Run the following command:
+
+ ```bash
+ python publish.py $connection_string "myHub1" "Hello World"
+ ```
+
+1. Check the previous command shell to verify that the subscriber received the message:
+
+ ```text
+ Received message: Hello World
+ ```
+
+# [Java](#tab/java)
+
+1. Go to the `pubsub` directory. Use Maven to create a publisher console app `webpubsub-quickstart-publisher` and go to the *webpubsub-quickstart-publisher* directory:
+
+ ```console
+ mvn archetype:generate --define interactiveMode=n --define groupId=com.webpubsub.quickstart --define artifactId=webpubsub-quickstart-publisher --define archetypeArtifactId=maven-archetype-quickstart --define archetypeVersion=1.4
+ cd webpubsub-quickstart-publisher
+ ```
+
+1. Add the Azure Web PubSub SDK dependency into the `dependencies` node of `pom.xml`:
+
+ ```xml
+ <dependency>
+ <groupId>com.azure</groupId>
+ <artifactId>azure-messaging-webpubsub</artifactId>
+ <version>1.0.0</version>
+ </dependency>
+ ```
+
+1. Use the Azure Web PubSub SDK to publish a message to the service. Go to the */src/main/java/com/webpubsub/quickstart* directory, open the *App.java* file in your editor, and replace the contents with the following code:
+
+ ```java
+
+ package com.webpubsub.quickstart;
+
+ import com.azure.messaging.webpubsub.*;
+ import com.azure.messaging.webpubsub.models.*;
+
+ /**
+ * Publish messages using Azure Web PubSub service SDK
+ *
+ */
+ public class App
+ {
+ public static void main( String[] args )
+ {
+ if (args.length != 3) {
+ System.out.println("Expecting 3 arguments: <connection-string> <hub-name> <message>");
+ return;
+ }
+
+ WebPubSubServiceClient service = new WebPubSubServiceClientBuilder()
+ .connectionString(args[0])
+ .hub(args[1])
+ .buildClient();
+ service.sendToAll(args[2], WebPubSubContentType.TEXT_PLAIN);
+ }
+ }
+
+ ```
+
+ The `sendToAll()` call sends a message to all connected clients in a hub.
+
+1. Go to the *webpubsub-quickstart-publisher* directory and run the project using the following command:
+
+ ```console
+ mvn compile & mvn package & mvn exec:java -Dexec.mainClass="com.webpubsub.quickstart.App" -Dexec.cleanupDaemonThreads=false -Dexec.args="$connection_string 'myHub1' 'Hello World'"
+ ```
+
+1. You can see that the previous subscriber received the message:
+
+ ```text
+ Message received: Hello World
+ ```
+++
+## Summary
+This quickstart demonstrates how easy it is to push messages from an application server to all connected clients in a hub. Additionally, Web PubSub allows you to push messages to
+
+> [!div class="checklist"]
+> * a subset of the clients in a **hub**
+> * a particular group in a **hub**
+> * a subset of clients in a **group**
+
+These APIs enable a wealth of use cases, allowing developers to focus on unique business logic while being assured that Web PubSub offers **low latency (<100ms)**, **high availability**, and **massive scale (million+ simultaneous connections)**.
+
+## Next steps
+In the next step, we explore how to work with the event system of Web PubSub, which is necessary for building complete web applications.
+
+> [!div class="nextstepaction"]
+> [Event notifications from clients](./quickstarts-event-notifications-from-clients.md)
backup Backup Azure Security Feature Cloud https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-security-feature-cloud.md
Additionally, you can extend the retention duration for deleted backup data, ran
To disable soft delete, follow these steps:
-1. In the Azure portal, go to your vault, and then go to **Settings** -> **Properties**.
-2. In the properties pane, select **Security Settings** -> **Update**.
-3. In the security settings pane, under **Soft Delete**, select **Disable**.
+1. In the Azure portal, go to your *vault*, and then go to **Settings** > **Properties**.
+1. In the **Properties** pane, select **Security Settings Update**.
+1. In the **Security and soft delete settings** pane, clear the required checkboxes to disable soft delete.
-![Disable soft delete](./media/backup-azure-security-feature-cloud/disable-soft-delete.png)
### Disabling soft delete using Azure PowerShell
backup Backup Azure Troubleshoot Blob Backup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-troubleshoot-blob-backup.md
+
+ Title: Troubleshoot Blob backup and restore issues
+description: In this article, learn about symptoms, causes, and resolutions of Azure Backup failures related to Blob backup and restore.
+ Last updated : 04/13/2023++++++
+# Troubleshoot Azure Blob backup
+
+This article provides troubleshooting information to address issues you encounter while configuring backup or restoring Azure Blob using the Azure Backup Service.
+
+## Common configuration errors
+
+### UserErrorMissingRequiredPermissions
+
+**Error code**: `UserErrorMissingRequiredPermissions`
+
+**Error message**: Appropriate permissions to perform the operation are missing.
+
+**Recommendation**: Ensure that you've granted [appropriate permissions](blob-backup-configure-manage.md?tabs=vaulted-backup#grant-permissions-to-the-backup-vault-on-storage-accounts).
+
+### UserErrorUnsupportedStorageAccountType
+
+**Error code**: `UserErrorUnsupportedStorageAccountType`
+
+**Error message**: The storage account type isn't supported for backup.
+
+**Recommendation**: Ensure that the storage account you've selected for backup is supported. [Learn more](blob-backup-support-matrix.md?tabs=vaulted-backup#limitations).
+
+### UserErrorMaxOrsPolicyExistOnStorageAccount
+**Error code**: `UserErrorMaxOrsPolicyExistOnStorageAccount`
+
+**Error message**: Maximum object replication policy exists on the source storage account.
+
+**Recommendation**: Ensure that you haven't reached the limit of replication rules supported on a storage account.
+
+## Common backup or restore errors
+
+### UserErrorAzureResourceNotFoundByPlugin
+
+**Error code**: `UserErrorAzureResourceNotFoundByPlugin`
+
+**Error message**: Unable to find the specified Azure resource.
+
+**Recommendation**: For a backup operation, ensure that the source account configured for backup is valid and not deleted. For a restore operation, check that both the source and target storage accounts exist.
+
+### UserErrorStorageAccountInLockedState
+
+**Error code**: `UserErrorStorageAccountInLockedState`
+
+**Error message**: Operation failed because storage account is in locked state.
+
+**Recommendation**: Ensure that there's no read-only lock on the storage account. [Learn more](../storage/common/lock-account-resource.md?tabs=portal#configure-an-azure-resource-manager-lock).
+
+### UserErrorInvalidRecoveryPointInTime
+
+**Error code**: `UserErrorInvalidRecoveryPointInTime`
+
+**Error message**: Restore point in time is invalid.
+
+**Recommendation**: Ensure that the recovery time provided for restore exists and is in the correct format.
+
+### UserErrorInvalidRestorePrefixRange
+
+**Error code**: `UserErrorInvalidRestorePrefixRange`
+
+**Error message**: Restore Prefix Range for item-level restore is invalid.
+
+**Recommendation**: This error may occur if the backup service can't decipher the prefix range passed for the restore. Ensure that the prefix range provided for the restore is valid.
+
+### UserErrorPitrDisabledOnStorageAccount
+
+**Error code**: `UserErrorPitrDisabledOnStorageAccount`
+
+**Error message**: The required setting PITR is disabled on storage account.
+
+**Recommendation**: Enable the point-in-time restore setting on the storage account. [Learn more](../storage/blobs/point-in-time-restore-manage.md?tabs=portal#enable-and-configure-point-in-time-restore).
+
+### UserErrorImmutabilityPolicyConfigured
+
+**Error code**: `UserErrorImmutabilityPolicyConfigured`
+
+**Error message**: An Immutability Policy is configured on one or more containers, which is preventing the operation.
+
+**Recommendation**: This error may occur if you've configured an immutable policy on the container you're trying to restore. You need to remove the immutability policy or remove the impacted container from the restore intent, and then retry the operation. Learn [how to delete an unlocked policy](../storage/blobs/immutable-policy-configure-container-scope.md?tabs=azure-portal#modify-an-unlocked-retention-policy).
+
+### UserErrorRestorePointNotFound
+
+**Error code**: `UserErrorRestorePointNotFound`
+
+**Error message**: The restore point isn't available in backup vault.
+
+**Recommendation**: Ensure that the restore point ID is correct and that the restore point didn't get deleted based on the backup retention settings. For a recent recovery point, ensure that the corresponding backup job is complete. We recommend that you trigger the operation again by using a valid restore point. If the issue persists, contact Microsoft support.
+
+### UserErrorTargetContainersExistOnAccount
+
+**Error code**: `UserErrorTargetContainersExistOnAccount`
+
+**Error message**: The containers that are part of restore request shouldn't exist on target storage account.
+
+**Recommendation**: Ensure that the target storage account doesn't have containers with the same name you're trying to restore. Choose another storage target or retry the restore operation after removing containers with the same name.
+
+### UserErrorBackupRequestThrottled
+
+**Error code**: `UserErrorBackupRequestThrottled`
+
+**Error message**: The backup request is being throttled as you've reached the limit for maximum number of backups on a given backup instance in a day.
+
+**Recommendation**: Wait for a day before triggering a new backup operation.
+
+### UserErrorRestorePointNotFoundInBackupVault
+
+**Error code**: `UserErrorRestorePointNotFoundInBackupVault`
+
+**Error message**: The restore point wasn't found in the Backup vault.
+
+**Recommendation**: Ensure that the restore point ID is correct and the restore point didn't get deleted based on the backup retention settings. Trigger the restore again using a valid restore point.
+
+### UserErrorOriginalLocationRestoreNotSupported
+
+**Error code**: `UserErrorOriginalLocationRestoreNotSupported`
+
+**Error message**: Original location restores not supported for vaulted blob backup.
+
+**Recommendation**: Choose an alternate target storage account and trigger the restore operation.
+
+### UserErrorNoContainersSelectedForOperation
+
+**Error code**: `UserErrorNoContainersSelectedForOperation`
+
+**Error message**: No containers selected for operation.
+
+**Recommendation**: Ensure that you've provided a valid list of containers to restore.
+
+### UserErrorIncorrectContainersSelectedForOperation
+
+**Error code**: `UserErrorIncorrectContainersSelectedForOperation`
+
+**Error message**: Incorrect containers selected for operation.
+
+**Recommendation**: Select valid list of containers and trigger the operation.
+
+### UserErrorCrossTenantOrsPolicyDisabled
+
+**Error code**: `UserErrorCrossTenantOrsPolicyDisabled`
+
+**Error message**: Cross tenant object replication policy disabled.
+
+**Recommendation**: Enable the cross-tenant object replication policy on the storage account, and then trigger the operation again. To check this setting, go to the *storage account* > **Object replication** > **Advanced settings**, and ensure that the **Allow cross-tenant replication** checkbox is selected.
+
+### UserErrorPitrRestoreInProgress
+
+**Error code**: `UserErrorPitrRestoreInProgress`
+
+**Error message**: The operation can't be performed while a restore is in progress on the source account.
+
+**Recommendation**: You need to retrigger the operation once the in-progress restore completes.
backup Restore Azure Database Postgresql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/restore-azure-database-postgresql.md
You can restore a database to any Azure PostgreSQL server of a different/same su
:::image type="content" source="./media/restore-azure-database-postgresql/restore-as-database-inline.png" alt-text="Screenshot showing the selected restore type as Restore as Database." lightbox="./media/restore-azure-database-postgresql/restore-as-database-expanded.png":::
+> [!IMPORTANT]
+> The DB user whose credentials were chosen via the key vault will have all privileges over the restored database, and any existing DB user boundaries will be overridden. For example, if the backed-up database had DB user-specific permissions or constraints, such as DB user A can access only certain tables and DB user B can access certain other tables, those permissions aren't preserved after the restore. If you want to preserve those permissions, restore as files and use the `pg_restore` command with the relevant switch.
+ - **Restore as Files: Dump the backup files to the target storage account (blobs).** You can choose from the storage accounts across all subscriptions, but in the same region as that of the vault.
batch Batch Application Packages https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/batch-application-packages.md
Title: Deploy application packages to compute nodes
-description: Use the application packages feature of Azure Batch to easily manage multiple applications and versions for installation on Batch compute nodes.
+description: Learn how to use the application packages feature of Azure Batch to easily manage multiple applications and versions for installation on Batch compute nodes.
Previously updated : 04/13/2021 Last updated : 04/03/2023 ms.devlang: csharp # Deploy applications to compute nodes with Batch application packages
-Application packages can simplify the code in your Azure Batch solution and make it easier to manage the applications that your tasks run. With application packages, you can upload and manage multiple versions of applications your tasks run, including their supporting files. You can then automatically deploy one or more of these applications to the compute nodes in your pool.
+Application packages can simplify the code in your Azure Batch solution and make it easier to manage the applications that your tasks run. With application packages, you can upload and manage multiple versions of the applications your tasks run, including their supporting files. You can then automatically deploy one or more of these applications to the compute nodes in your pool.
-The APIs for creating and managing application packages are part of the [Batch Management .NET](batch-management-dotnet.md) library. The APIs for installing application packages on a compute node are part of the [Batch .NET](quick-run-dotnet.md) library. Comparable features are in the available Batch APIs for other languages.
+The APIs for creating and managing application packages are part of the [Batch Management .NET](batch-management-dotnet.md) library. The APIs for installing application packages on a compute node are part of the [Batch .NET](quick-run-dotnet.md) library. Comparable features are in the available Batch APIs for other programming languages.
This article explains how to upload and manage application packages in the Azure portal. It also shows how to install them on a pool's compute nodes with the [Batch .NET](quick-run-dotnet.md) library.
This article explains how to upload and manage application packages in the Azure
To use application packages, you need to [link an Azure Storage account](#link-a-storage-account) to your Batch account.
-There are restrictions on the number of applications and application packages within a Batch account and on the maximum application package size. For more information, see [Quotas and limits for the Azure Batch service](batch-quota-limit.md).
+There are restrictions on the number of applications and application packages within a Batch account and on the maximum application package size. For more information, see [Batch service quotas and limits](batch-quota-limit.md).
> [!NOTE]
-> Batch pools created prior to July 5, 2017 do not support application packages (unless they were created after March 10, 2016 using Cloud Services Configuration). The application packages feature described here supersedes the Batch Apps feature available in previous versions of the service.
+> Batch pools created prior to July 5, 2017 do not support application packages (unless they were created after March 10, 2016 by using Cloud Services Configuration). The application packages feature described here supersedes the Batch Apps feature available in previous versions of the service.
## Understand applications and application packages
Within Azure Batch, an *application* refers to a set of versioned binaries that
Each *application package* is a .zip file that contains the application binaries and any supporting files. Only the .zip format is supported.
-![Diagram showing a high-level view of applications and application packages.](./media/batch-application-packages/app_pkg_01.png)
You can specify application packages at the pool or task level. -- **Pool application packages** are deployed to every node in the pool. Applications are deployed when a node joins a pool, and when it is rebooted or reimaged.
+- **Pool application packages** are deployed to every node in the pool. Applications are deployed when a node joins a pool and when it's rebooted or reimaged.
- Pool application packages are appropriate when all nodes in a pool will execute a job's tasks. You can specify one or more application packages to deploy when you create a pool. You can also add or update an existing pool's packages. To install a new package to an existing pool, you must restart its nodes.
+ Pool application packages are appropriate when all nodes in a pool run a job's tasks. You can specify one or more application packages to deploy when you create a pool. You can also add or update an existing pool's packages. To install a new package to an existing pool, you must restart its nodes.
-- **Task application packages** are deployed only to a compute node scheduled to run a task, just before running the task's command line. If the specified application package and version is already on the node, it is not redeployed and the existing package is used.
+- **Task application packages** are deployed only to a compute node scheduled to run a task, just before running the task's command line. If the specified application package and version is already on the node, it isn't redeployed and the existing package is used.
Task application packages are useful in shared-pool environments, where different jobs run on one pool, and the pool isn't deleted when a job completes. If your job has fewer tasks than nodes in the pool, task application packages can minimize data transfer, since your application is deployed only to the nodes that run tasks.
- Other scenarios that can benefit from task application packages are jobs that run a large application, but for only a few tasks. For example, task applications may be useful for a heavyweight pre-processing stage or a merge task.
+ Other scenarios that can benefit from task application packages are jobs that run a large application but for only a few tasks. For example, task applications might be useful for a heavyweight preprocessing stage or a merge task.
-With application packages, your pool's start task doesn't have to specify a long list of individual resource files to install on the nodes. You don't have to manually manage multiple versions of your application files in Azure Storage, or on your nodes. And you don't need to worry about generating [SAS URLs](../storage/common/storage-sas-overview.md) to provide access to the files in your Storage account. Batch works in the background with Azure Storage to store application packages and deploy them to compute nodes.
+With application packages, your pool's start task doesn't have to specify a long list of individual resource files to install on the nodes. You don't have to manually manage multiple versions of your application files in Azure Storage or on your nodes. And you don't need to worry about generating [SAS URLs](../storage/common/storage-sas-overview.md) to provide access to the files in your Azure Storage account. Batch works in the background with Azure Storage to store application packages and deploy them to compute nodes.
> [!NOTE]
-> The total size of a start task must be less than or equal to 32768 characters, including resource files and environment variables. If your start task exceeds this limit, using application packages is another option. You can also create a .zip file containing your resource files, upload it as a blob to Azure Storage, and then unzip it from the command line of your start task.
+> The total size of a start task must be less than or equal to 32,768 characters, including resource files and environment variables. If your start task exceeds this limit, using application packages is another option. You can also create a .zip file containing your resource files, upload the file as a blob to Azure Storage, and then unzip it from the command line of your start task.
## Upload and manage applications
-You can use the [Azure portal](https://portal.azure.com) or the Batch Management APIs to manage the application packages in your Batch account. The following sections explain how to link a storage account, and how to add and manage applications and application packages in the Azure portal.
+You can use the [Azure portal](https://portal.azure.com) or the Batch Management APIs to manage the application packages in your Batch account. The following sections explain how to link a storage account and how to add and manage applications and application packages in the Azure portal.
> [!NOTE]
-> While you can define application values in the [Microsoft.Batch/batchAccounts](/azure/templates/microsoft.batch/batchaccounts) resource of an [ARM template](quick-create-template.md), it's not currently possible to use an ARM template to upload application packages to use in your Batch account. You must upload them to your linked storage account as described [below](#add-a-new-application).
+> While you can define application values in the [Microsoft.Batch/batchAccounts](/azure/templates/microsoft.batch/batchaccounts) resource of an [ARM template](quick-create-template.md), it's not currently possible to use an ARM template to upload application packages to use in your Batch account. You must upload them to your linked storage account as described in [Add a new application](#add-a-new-application).
### Link a storage account
-To use application packages, you must link an [Azure Storage account](accounts.md#azure-storage-accounts) to your Batch account. The Batch service will use the associated storage account to store your application packages. We recommend that you create a storage account specifically for use with your Batch account.
+To use application packages, you must link an [Azure Storage account](accounts.md#azure-storage-accounts) to your Batch account. The Batch service uses the associated storage account to store your application packages. Ideally, you should create a storage account specifically for use with your Batch account.
-If you have not yet configured a storage account, the Azure portal displays a warning the first time you select **Applications** in your Batch account. To link a storage account to your Batch account, select **Storage account** on the **Warning** window, and then select **Storage Account** again.
+If you haven't yet configured a storage account, the Azure portal displays a warning the first time you select **Applications** from the left navigation menu in your Batch account. To link a storage account to your Batch account:
-After you've linked the two accounts, Batch can automatically deploy the packages stored in the linked Storage account to your compute nodes.
+1. Select the **Warning** window that states, "No Storage account configured for this batch account."
+1. Then choose **Storage Account set...** on the next page.
+1. Choose the **Select a storage account** link in the **Storage Account Information** section.
+1. Select the storage account you want to use with this batch account in the list on the **Choose storage account** pane.
+1. Then select **Save** on the top left corner of the page.
-> [!IMPORTANT]
-> You can't use application packages with Azure Storage accounts configured with [firewall rules](../storage/common/storage-network-security.md), or with **Hierarchical namespace** set to **Enabled**.
-
-The Batch service uses Azure Storage to store your application packages as block blobs. You are [charged as normal](https://azure.microsoft.com/pricing/details/storage/) for the block blob data, and the size of each package can't exceed the maximum block blob size. For more information, see [Azure Storage scalability and performance targets for storage accounts](../storage/blobs/scalability-targets.md). To minimize costs, be sure to consider the size and number of your application packages, and periodically remove deprecated packages.
-
-### View current applications
-
-To view the applications in your Batch account, select **Applications** in the left navigation menu.
-
-![Screenshot of the Applications menu item in the Azure portal.](./media/batch-application-packages/app_pkg_02.png)
-
-Selecting this menu option opens the **Applications** window. This window displays the ID of each application in your account and the following properties:
+After you link the two accounts, Batch can automatically deploy the packages stored in the linked Storage account to your compute nodes.
-- **Packages**: The number of versions associated with this application.-- **Default version**: If applicable, the application version that will be installed if no version is specified when deploying the application.-- **Allow updates**: Specifies whether package updates and deletions are allowed.-
-To see the [file structure](files-and-directories.md) of the application package on a compute node, navigate to your Batch account in the Azure portal. Select **Pools**. then select the pool that contains the compute node. Select the compute node on which the application package is installed and open the **applications** folder.
-
-### View application details
-
-To see the details for an application, select it in the **Applications** window. You can configure the following settings for your application.
+> [!IMPORTANT]
+> You can't use application packages with Azure Storage accounts configured with [firewall rules](../storage/common/storage-network-security.md) or with **Hierarchical namespace** set to **Enabled**.
-- **Allow updates**: Indicates whether application packages can be [updated or deleted](#update-or-delete-an-application-package). The default is **Yes**. If set to **No**, existing application packages can't be updated or deleted, but new application package versions can still be added.-- **Default version**: The default application package to use when the application is deployed, if no version is specified.-- **Display name**: A friendly name that your Batch solution can use when it displays information about the application. For example, this name can be used in the UI of a service that you provide to your customers through Batch.
+The Batch service uses Azure Storage to store your application packages as block blobs. You're [charged as normal](https://azure.microsoft.com/pricing/details/storage/) for the block blob data, and the size of each package can't exceed the maximum block blob size. For more information, see [Scalability and performance targets for Blob storage](../storage/blobs/scalability-targets.md). To minimize costs, be sure to consider the size and number of your application packages, and periodically remove deprecated packages.
### Add a new application To create a new application, you add an application package and specify a unique application ID.
-In your Batch account, select **Applications** and then select **Add**.
+In your Batch account, select **Applications** from the left navigation menu, and then select **Add**.
-![Screenshot of the New application creation process in the Azure portal.](./media/batch-application-packages/app_pkg_05.png)
Enter the following information: - **Application ID**: The ID of your new application.-- **Version**": The version for the application package you are uploading.-- **Application package**: The .zip file containing the application binaries and supporting files that are required to execute the application.
+- **Version**": The version for the application package you're uploading.
+- **Application package**: The .zip file containing the application binaries and supporting files that are required to run the application.
The **Application ID** and **Version** you enter must follow these requirements:
The **Application ID** and **Version** you enter must follow these requirements:
- Must be unique within the Batch account. - IDs are case-preserving and case-insensitive.
-When you're ready, select **Submit**. After the .zip file has been uploaded to your Azure Storage account, the portal displays a notification. Depending on the size of the file that you are uploading and the speed of your network connection, this may take some time.
+When you're ready, select **Submit**. After the .zip file has been uploaded to your Azure Storage account, the portal displays a notification. Depending on the size of the file that you're uploading and the speed of your network connection, this process might take some time.
+
+### View current applications
+
+To view the applications in your Batch account, select **Applications** in the left navigation menu.
++
+Selecting this menu option opens the **Applications** window. This window displays the ID of each application in your account and the following properties:
+
+- **Packages**: The number of versions associated with this application.
+- **Default version**: If applicable, the application version that is installed if no version is specified when deploying the application.
+- **Allow updates**: Specifies whether package updates and deletions are allowed.
+
+To see the [file structure](files-and-directories.md) of the application package on a compute node, navigate to your Batch account in the Azure portal. Select **Pools**. Then select the pool that contains the compute node. Select the compute node on which the application package is installed and open the **applications** folder.
+
+### View application details
+
+To see the details for an application, select it in the **Applications** window. You can configure your application by selecting **Settings** in the left navigation menu.
+
+- **Allow updates**: Indicates whether application packages can be [updated or deleted](#update-or-delete-an-application-package). The default is **Yes**. If set to **No**, existing application packages can't be updated or deleted, but new application package versions can still be added.
+- **Default version**: The default application package to use when the application is deployed if no version is specified.
+- **Display name**: A friendly name that your Batch solution can use when it displays information about the application. For example, this name can be used in the UI of a service that you provide to your customers through Batch.
### Add a new application package
-To add an application package version for an existing application, select the application in the **Applications** section of your Batch account, then select **Add**.
+To add an application package version for an existing application, select the application on the **Applications** page of your Batch account. Then select **Add**.
As you did for the new application, specify the **Version** for your new package, upload your .zip file in the **Application package** field, and then select **Submit**.

### Update or delete an application package
-To update or delete an existing application package, select the application in the **Applications** section of your Batch account. Select the ellipsis in the row of the application package that you want to modify, then select the action that you want to perform.
+To update or delete an existing application package, select the application on the **Applications** page of your Batch account. Select the ellipsis in the row of the application package that you want to modify. Then select the action that you want to perform.
-![Screenshot showing the update and delete options for application packages in the Azure portal.](./media/batch-application-packages/app_pkg_07.png)
-If you select **Update**, you'll be able to upload a new .zip file. This will replace the previous .zip file that you uploaded for that version.
+If you select **Update**, you can upload a new .zip file. This file replaces the previous .zip file that you uploaded for that version.
-If you select **Delete**, you'll be prompted to confirm the deletion of that version. Once you select **OK**, Batch will delete the .zip file from your Azure Storage account. If you delete the default version of an application, the **Default version** setting is removed for that application.
+If you select **Delete**, you're prompted to confirm the deletion of that version. After you select **OK**, Batch deletes the .zip file from your Azure Storage account. If you delete the default version of an application, the **Default version** setting is removed for that application.
## Install applications on compute nodes
-Now that you've learned how to manage application packages in the Azure portal, we can discuss how to deploy them to compute nodes and run them with Batch tasks.
+You've learned how to manage application packages in the Azure portal. Now you can learn how to deploy them to compute nodes and run them with Batch tasks.
### Install pool application packages
-To install an application package on all compute nodes in a pool, specify one or more application package references for the pool. The application packages that you specify for a pool are installed on each compute node that joins the pool, and on any node that is rebooted or reimaged.
+To install an application package on all compute nodes in a pool, specify one or more application package references for the pool. The application packages that you specify for a pool are installed on each compute node that joins the pool and on any node that is rebooted or reimaged.
-In Batch .NET, specify one or more [CloudPool.ApplicationPackageReferences](/dotnet/api/microsoft.azure.batch.cloudpool.applicationpackagereferences) when you create a new pool, or for an existing pool. The [ApplicationPackageReference](/dotnet/api/microsoft.azure.batch.applicationpackagereference) class specifies an application ID and version to install on a pool's compute nodes.
+In Batch .NET, specify one or more [CloudPool.ApplicationPackageReferences](/dotnet/api/microsoft.azure.batch.cloudpool.applicationpackagereferences) when you create a new pool or when you use an existing pool. The [ApplicationPackageReference](/dotnet/api/microsoft.azure.batch.applicationpackagereference) class specifies an application ID and version to install on a pool's compute nodes.
```csharp
// Create the unbound CloudPool
await myCloudPool.CommitAsync();
```

> [!IMPORTANT]
-> If an application package deployment fails, the Batch service marks the node [unusable](/dotnet/api/microsoft.azure.batch.computenode.state), and no tasks are scheduled for execution on that node. If this happens, restart the node to reinitiate the package deployment. Restarting the node also enables task scheduling again on the node.
+> If an application package deployment fails, the Batch service marks the node [unusable](/dotnet/api/microsoft.azure.batch.computenode.state) and no tasks are scheduled for execution on that node. If this happens, restart the node to reinitiate the package deployment. Restarting the node also enables task scheduling again on the node.
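For reference, here's a minimal sketch of the pattern the preceding paragraphs describe. It assumes `batchClient` is an authenticated `Microsoft.Azure.Batch.BatchClient`, that an application with ID *blender* and version *2.7* already exists in the account, and that the pool ID, VM size, and image values are illustrative placeholders:

```csharp
// A minimal sketch (not the article's exact sample): create a pool that installs an
// application package on every compute node.
// Requires: using Microsoft.Azure.Batch; using System.Collections.Generic;
CloudPool myCloudPool = batchClient.PoolOperations.CreatePool(
    poolId: "myPool",
    virtualMachineSize: "standard_d2s_v3",
    virtualMachineConfiguration: new VirtualMachineConfiguration(
        new ImageReference(
            offer: "WindowsServer",
            publisher: "MicrosoftWindowsServer",
            sku: "2022-datacenter"),
        nodeAgentSkuId: "batch.node.windows amd64"),
    targetDedicatedComputeNodes: 1);

// Reference the application package (ID and version are placeholders) to install on each node.
myCloudPool.ApplicationPackageReferences = new List<ApplicationPackageReference>
{
    new ApplicationPackageReference
    {
        ApplicationId = "blender",
        Version = "2.7"
    }
};

// Commit the pool so the Batch service creates it and installs the package on its nodes.
await myCloudPool.CommitAsync();
```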
### Install task application packages
-Similar to a pool, you specify application package references for a task. When a task is scheduled to run on a node, the package is downloaded and extracted just before the task's command line is executed. If a specified package and version is already installed on the node, the package is not downloaded and the existing package is used.
+Similar to a pool, you specify application package references for a task. When a task is scheduled to run on a node, the package is downloaded and extracted just before the task's command line runs. If a specified package and version is already installed on the node, the package isn't downloaded and the existing package is used.
To install a task application package, configure the task's [CloudTask.ApplicationPackageReferences](/dotnet/api/microsoft.azure.batch.cloudtask.applicationpackagereferences) property:
task.ApplicationPackageReferences = new List<ApplicationPackageReference>
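A fuller sketch of that pattern follows; the task ID, job ID, and command line are illustrative placeholders, and `batchClient` is assumed to be an authenticated `BatchClient`:

```csharp
// A rough sketch: a task that has the "blender" 2.7 package downloaded and extracted
// on the node before its command line runs. IDs and the command line are placeholders.
// Requires: using Microsoft.Azure.Batch; using System.Collections.Generic;
CloudTask task = new CloudTask(
    "blenderTask",
    @"cmd /c %AZ_BATCH_APP_PACKAGE_BLENDER#2.7%\blender.exe --help");

task.ApplicationPackageReferences = new List<ApplicationPackageReference>
{
    new ApplicationPackageReference
    {
        ApplicationId = "blender",
        Version = "2.7"
    }
};

// Add the task to an existing job; "myJob" is a placeholder job ID.
await batchClient.JobOperations.AddTaskAsync("myJob", task);
```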
## Execute the installed applications
-The packages that you've specified for a pool or task are downloaded and extracted to a named directory within the `AZ_BATCH_ROOT_DIR` of the node. Batch also creates an environment variable that contains the path to the named directory. Your task command lines use this environment variable when referencing the application on the node.
+The packages that you specify for a pool or task are downloaded and extracted to a named directory within the `AZ_BATCH_ROOT_DIR` of the node. Batch also creates an environment variable that contains the path to the named directory. Your task command lines use this environment variable when referencing the application on the node.
On Windows nodes, the variable is in the following format:
Windows:
AZ_BATCH_APP_PACKAGE_APPLICATIONID#version
```
-On Linux nodes, the format is slightly different. Periods (.), hyphens (-) and number signs (#) are flattened to underscores in the environment variable. Also, note that the case of the application ID is preserved. For example:
+On Linux nodes, the format is slightly different. Periods (.), hyphens (-) and number signs (#) are flattened to underscores in the environment variable. Also, the case of the application ID is preserved. For example:
```
Linux:
AZ_BATCH_APP_PACKAGE_applicationid_version
```
-`APPLICATIONID` and `version` are values that correspond to the application and package version you've specified for deployment. For example, if you specified that version 2.7 of application *blender* should be installed on Windows nodes, your task command lines would use this environment variable to access its files:
+`APPLICATIONID` and `version` are values that correspond to the application and package version you've specified for deployment. For example, if you specify that version 2.7 of application *blender* should be installed on Windows nodes, your task command lines would use this environment variable to access its files:
```
Windows:
Linux:
AZ_BATCH_APP_PACKAGE_blender_2_7
```
-When you upload an application package, you can specify a default version to deploy to your compute nodes. If you have specified a default version for an application, you can omit the version suffix when you reference the application. You can specify the default application version in the Azure portal, in the **Applications** window, as shown in [Upload and manage applications](#upload-and-manage-applications).
+When you upload an application package, you can specify a default version to deploy to your compute nodes. If you've specified a default version for an application, you can omit the version suffix when you reference the application. You can specify the default application version in the Azure portal, in the **Applications** window, as shown in [Upload and manage applications](#upload-and-manage-applications).
-For example, if you set "2.7" as the default version for application *blender*, and your tasks reference the following environment variable, then your Windows nodes will execute version 2.7:
+For example, if you set "2.7" as the default version for application *blender*, and your tasks reference the following environment variable, then your Windows nodes use version 2.7:
`AZ_BATCH_APP_PACKAGE_BLENDER`
CloudTask blenderTask = new CloudTask(taskId, commandLine);
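As a sketch of how such a task might look when it relies on the default application version, with a placeholder executable and arguments:

```csharp
// A sketch of a task command line that relies on the default version of the "blender"
// application. Omitting Version in the package reference deploys the default version
// configured on the application. The executable and arguments are placeholders.
// Requires: using Microsoft.Azure.Batch; using System.Collections.Generic;
string taskId = "blenderDefaultVersionTask";
string commandLine = @"cmd /c %AZ_BATCH_APP_PACKAGE_BLENDER%\blender.exe --help";

CloudTask blenderTask = new CloudTask(taskId, commandLine);
blenderTask.ApplicationPackageReferences = new List<ApplicationPackageReference>
{
    // No Version specified, so the application's default version is used.
    new ApplicationPackageReference { ApplicationId = "blender" }
};
```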
If an existing pool has already been configured with an application package, you can specify a new package for the pool. This means:

- The Batch service installs the newly specified package on all new nodes that join the pool and on any existing node that is rebooted or reimaged.
-- Compute nodes that are already in the pool when you update the package references do not automatically install the new application package. These compute nodes must be rebooted or reimaged to receive the new package.
+- Compute nodes that are already in the pool when you update the package references don't automatically install the new application package. These compute nodes must be rebooted or reimaged to receive the new package.
- When a new package is deployed, the created environment variables reflect the new application package references.

In this example, the existing pool has version 2.7 of the *blender* application configured as one of its [CloudPool.ApplicationPackageReferences](/dotnet/api/microsoft.azure.batch.cloudpool.applicationpackagereferences). To update the pool's nodes with version 2.76b, specify a new [ApplicationPackageReference](/dotnet/api/microsoft.azure.batch.applicationpackagereference) with the new version, and commit the change.
boundPool.ApplicationPackageReferences = new List<ApplicationPackageReference>
await boundPool.CommitAsync();
```
-Now that the new version has been configured, the Batch service installs version 2.76b to any new node that joins the pool. To install 2.76b on the nodes that are already in the pool, reboot or reimage them. Note that rebooted nodes retain files from previous package deployments.
+Now that the new version has been configured, the Batch service installs version 2.76b to any new node that joins the pool. To install 2.76b on the nodes that are already in the pool, reboot or reimage them. Rebooted nodes retain files from previous package deployments.
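A rough sketch of that update, assuming `batchClient` is an authenticated `BatchClient` and that the pool and the 2.76b package already exist (the pool ID is a placeholder):

```csharp
// A rough sketch: update an existing pool to reference version 2.76b of the "blender"
// application. Nodes already in the pool get the new package only after a reboot or reimage.
// Requires: using Microsoft.Azure.Batch; using System.Collections.Generic;
CloudPool boundPool = await batchClient.PoolOperations.GetPoolAsync("myPool");

boundPool.ApplicationPackageReferences = new List<ApplicationPackageReference>
{
    new ApplicationPackageReference
    {
        ApplicationId = "blender",
        Version = "2.76b"
    }
};

await boundPool.CommitAsync();
```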
## List the applications in a Batch account
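A minimal Batch .NET sketch of enumerating an account's applications, assuming `batchClient` is an authenticated `BatchClient`:

```csharp
// A rough sketch: list each application in the Batch account and its package versions.
// Requires: using System; using Microsoft.Azure.Batch;
foreach (ApplicationSummary application in batchClient.ApplicationOperations.ListApplicationSummaries())
{
    Console.WriteLine($"{application.Id}: versions {string.Join(", ", application.Versions)}");
}
```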
batch Batch Ci Cd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/batch-ci-cd.md
Title: Use Azure Pipelines to build & deploy HPC solutions
-description: Learn how to deploy a build/release pipeline for an HPC application running on Azure Batch.
-- Previously updated : 03/04/2021
+ Title: Use Azure Pipelines to build and deploy an HPC solution
+description: Use Azure Pipelines CI/CD build and release pipelines to deploy Azure Resource Manager templates for an Azure Batch high performance computing (HPC) solution.
Last updated : 04/12/2023
-# Use Azure Pipelines to build and deploy HPC solutions
+# Use Azure Pipelines to build and deploy an HPC solution
-Tools provided by Azure DevOps can translate into automated building and testing of high performance computing (HPC) solutions. [Azure Pipelines](/azure/devops/pipelines/get-started/what-is-azure-pipelines) provides a range of modern continuous integration (CI) and continuous deployment (CD) processes for building, deploying, testing, and monitoring software. These processes accelerate your software delivery, allowing you to focus on your code rather than support infrastructure and operations.
+Azure DevOps tools can automate building and testing Azure Batch high performance computing (HPC) solutions. [Azure Pipelines](/azure/devops/pipelines/get-started/what-is-azure-pipelines) provides modern continuous integration (CI) and continuous deployment (CD) processes for building, deploying, testing, and monitoring software. These processes accelerate your software delivery, allowing you to focus on your code rather than support infrastructure and operations.
-This article explains how to set up CI/CD processes using [Azure Pipelines](/azure/devops/pipelines/get-started/what-is-azure-pipelines) for HPC solutions deployed on Azure Batch.
+This article shows how to set up CI/CD processes by using [Azure Pipelines](/azure/devops/pipelines/get-started/what-is-azure-pipelines) with Azure Resource Manager templates (ARM templates) to deploy HPC solutions on Azure Batch. The example creates a build and release pipeline to deploy an Azure Batch infrastructure and release an application package. The following diagram shows the general deployment flow, assuming the code is developed locally:
+
+![Diagram showing the flow of deployment in the pipeline.](media/batch-ci-cd/DeploymentFlow.png)
## Prerequisites
-To follow the steps in this article, you need an [Azure DevOps organization](/azure/devops/organizations/accounts/create-organization). You'll also need to [create a project in Azure DevOps](/azure/devops/organizations/projects/create-project).
+To follow the steps in this article, you need:
+
+- An [Azure DevOps organization](/azure/devops/organizations/accounts/create-organization), and an [Azure DevOps project](/azure/devops/organizations/projects/create-project) with an [Azure Repos](/azure/devops/repos/git/create-new-repo) repository created in the organization. You must have **Project Administrator**, **Build Administrator**, and **Release Administrator** roles in the Azure DevOps project.
+
+- An active Azure subscription with **Owner** or other role that includes role assignment abilities. For more information, see [Understand Azure role assignments](/azure/role-based-access-control/role-assignments).
+
+- A basic understanding of [source control](/azure/devops/user-guide/source-control) and [ARM template syntax](/azure/azure-resource-manager/templates/syntax).
-It's helpful to have a basic understanding of [Source control](/azure/devops/user-guide/source-control) and [Azure Resource Manager template syntax](../azure-resource-manager/templates/syntax.md) before you start.
+## Prepare the solution
-## Create an Azure Pipeline
+The example in this article uses several ARM templates and an existing open-source video processing application, [FFmpeg](https://ffmpeg.org). You can copy or download these resources and push them to your Azure Repos repository.
-In this example, you'll create a build and release pipeline to deploy an Azure Batch infrastructure and release an application package. Assuming that the code is developed locally, this is the general deployment flow:
+>[!IMPORTANT]
+>This example deploys Windows software on Windows-based Batch nodes. Azure Pipelines, ARM templates, and Batch also fully support Linux software and nodes.
-![Diagram showing the flow of deployment in the Pipeline,](media/batch-ci-cd/DeploymentFlow.png)
+### Understand the ARM templates
-This sample uses several Azure Resource Manager templates and existing binaries. You can copy these examples into your repository and push them to Azure DevOps.
+Three capability templates, similar to units or modules, implement specific pieces of functionality. An end-to-end solution template then deploys the underlying capability templates. This [linked template structure](/azure/azure-resource-manager/templates/deployment-tutorial-linked-template) allows each capability template to be individually tested and reused across solutions.
-### Understand the Azure Resource Manager templates
+![Diagram showing a linked template structure using ARM templates.](media/batch-ci-cd/ARMTemplateHierarchy.png)
-This example uses several Azure Resource Manager templates to deploy the solution. Three capability templates (similar to units or modules) are used to implement a specific piece of functionality. An end-to-end solution template (deployment.json) is then used to deploy those underlying capability templates. This [linked template structure ](../azure-resource-manager/templates/deployment-tutorial-linked-template.md) allows each capability template to be individually tested and reused across solutions.
+For detailed information about the templates, see the [Resource Manager template reference guide for Microsoft.Batch resource types](/azure/templates/microsoft.batch/allversions).
-![Diagram showing a linked template structure using Azure Resource Manager templates.](media/batch-ci-cd/ARMTemplateHierarchy.png)
+#### Storage account template
-This template defines an Azure storage account, which is required in order to deploy the application to the Batch account. For detailed information, see the [Resource Manager template reference guide for Microsoft.Storage resource types](/azure/templates/microsoft.storage/allversions).
+Save the following code as a file named *storageAccount.json*. This template defines an Azure Storage account, which is required to deploy the application to the Batch account.
```json {
} ```
-The next template defines an [Azure Batch account](accounts.md). The Batch account acts as a platform to run numerous applications across [pools](nodes-and-pools.md#pools). For detailed information, see the [Resource Manager template reference guide for Microsoft.Batch resource types](/azure/templates/microsoft.batch/allversions).
+#### Batch account template
+
+Save the following code as a file named *batchAccount.json*. This template defines a [Batch account](accounts.md). The Batch account acts as a platform to run applications across node [pools](nodes-and-pools.md#pools).
```json {
} ```
-The next template creates a Batch pool in the Batch account. For detailed information, see the [Resource Manager template reference guide for Microsoft.Batch resource types](/azure/templates/microsoft.batch/allversions).
+#### Batch pool template
+
+Save the following code as a file named *batchAccountPool.json*. This template creates a node pool and nodes in the Batch account.
```json {
"deploymentConfiguration": { "virtualMachineConfiguration": { "imageReference": {
- "publisher": "Canonical",
- "offer": "UbuntuServer",
- "sku": "18.04-LTS",
+ "publisher": "MicrosoftWindowsServer",
+ "offer": "WindowsServer",
+ "sku": "2022-datacenter",
"version": "latest" },
- "nodeAgentSkuId": "batch.node.ubuntu 18.04"
+ "nodeAgentSkuId": "batch.node.windows amd64"
} },
- "vmSize": "Standard_D1_v2"
+ "vmSize": "Standard_D2s_v3"
} } ],
} ```
-The final template acts as an orchestrator, deploying the three underlying capability templates.
+#### Orchestrator template
+
+Save the following code as a file named *deployment.json*. This final template acts as an orchestrator to deploy the three underlying capability templates.
```json
{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
- "templateContainerUri": {
+ "StorageContainerUri": {
"type": "string", "metadata": { "description": "URI of the Blob Storage Container containing the Azure Resource Manager templates" } },
- "templateContainerSasToken": {
+ "StorageContainerSasToken": {
"type": "string", "metadata": { "description": "The SAS token of the container containing the Azure Resource Manager templates"
"properties": { "mode": "Incremental", "templateLink": {
- "uri": "[concat(parameters('templateContainerUri'), '/storageAccount.json', parameters('templateContainerSasToken'))]",
+ "uri": "[concat(parameters('StorageContainerUri'), 'arm-templates/storageAccount.json', parameters('StorageContainerSasToken'))]",
"contentVersion": "1.0.0.0" }, "parameters": {
"properties": { "mode": "Incremental", "templateLink": {
- "uri": "[concat(parameters('templateContainerUri'), '/batchAccount.json', parameters('templateContainerSasToken'))]",
+ "uri": "[concat(parameters('StorageContainerUri'), 'arm-templates/batchAccount.json', parameters('StorageContainerSasToken'))]",
"contentVersion": "1.0.0.0" }, "parameters": {
"properties": { "mode": "Incremental", "templateLink": {
- "uri": "[concat(parameters('templateContainerUri'), '/batchAccountPool.json', parameters('templateContainerSasToken'))]",
+ "uri": "[concat(parameters('StorageContainerUri'), 'arm-templates/batchAccountPool.json', parameters('StorageContainerSasToken'))]",
"contentVersion": "1.0.0.0" }, "parameters": {
} ```
-### Understand the HPC solution
+### Set up your repository
-As noted earlier, this sample uses several Azure Resource Manager templates and existing binaries. You can copy these examples into your repository and push them to Azure DevOps.
+Upload the ARM templates, FFmpeg app, and a YAML build definition file into your Azure Repos repository.
-For this solution, ffmpeg is used as the application package. You can [download the ffmpeg package](https://github.com/GyanD/codexffmpeg/releases/tag/4.3.1-2020-11-08) if you don't have it already.
+1. Upload the four ARM templates to an *arm-templates* folder in your repository.
-![Screenshot of the repository structure.](media/batch-ci-cd/git-repository.jpg)
+1. For the application package, download and extract the [Windows 64-bit version of FFmpeg 4.3.1](https://github.com/GyanD/codexffmpeg/releases/tag/4.3.1-2020-11-08), and upload it to a *hpc-application* folder in your repository.
-There are four main sections to this repository:
+1. For the build definition, save the following definition as a file named *hpc-app.build.yml*, and upload it to a *pipelines* folder in your repository.
-- An **arm-templates** folder, containing the Azure Resource Manager templates
-- An **hpc-application** folder, containing the Windows 64-bit version of [ffmpeg 4.3.1](https://github.com/GyanD/codexffmpeg/releases/tag/4.3.1-2020-11-08).
-- A **pipelines** folder, containing a YAML file that defines the build pipeline process.
-- Optional: A **client-application** folder, which is a copy of the [Azure Batch .NET File Processing with ffmpeg](https://github.com/Azure-Samples/batch-dotnet-ffmpeg-tutorial) sample. This application is not needed for this article.
+ ```yml
+ # To publish an application into Batch, you need to
+ # first zip the file, and then publish an artifact, so
+ # you can take the necessary steps in your release pipeline.
+ steps:
+ # First, zip up the files required in the Batch account.
+ # For this instance, those are the ffmpeg files.
+ - task: ArchiveFiles@2
+ displayName: 'Archive applications'
+ inputs:
+ rootFolderOrFile: hpc-application
+ includeRootFolder: false
+ archiveFile: '$(Build.ArtifactStagingDirectory)/package/$(Build.BuildId).zip'
+ # Publish the zip file, so you can use it as part
+ # of your Release pipeline later.
+ - task: PublishPipelineArtifact@0
+ inputs:
+ artifactName: 'hpc-application'
+ targetPath: '$(Build.ArtifactStagingDirectory)/package'
+ ```
+When you're finished setting up your repository, the folder structure should have the following main sections:
-> [!NOTE]
-> This is just one example of a structure to a codebase. This approach is used for the purposes of demonstrating that application, infrastructure, and pipeline code are stored in the same repository.
+ - An *arm-templates* folder that contains the ARM templates.
+ - A *hpc-application* folder that contains ffmpeg.
+ - A *pipelines* folder that contains the YAML build definition file for the Build pipeline.
+
+ ![Screenshot of the repository structure.](media/batch-ci-cd/git-repository.png)
-Now that the source code is set up, you can begin the first build.
+ > [!NOTE]
+ > This example codebase structure demonstrates that you can store application, infrastructure, and pipeline code in the same repository.
-## Continuous integration
+## Create the Azure pipeline
-[Azure Pipelines](/azure/devops/pipelines/get-started/), within Azure DevOps Services, helps you implement a build, test, and deployment pipeline for your applications.
+After you set up the source code repository, use [Azure Pipelines](/azure/devops/pipelines/get-started/) to implement a build, test, and deployment pipeline for your application. In this stage of a pipeline, you typically run tests to validate code and build pieces of the software. The number and types of tests, and any other tasks that you run, depend on your overall build and release strategy.
-In this stage of your pipeline, tests are typically run to validate code and build the appropriate pieces of the software. The number and types of tests, and any additional tasks that you run will depend on your wider build and release strategy.
+### Create the Build pipeline
-## Prepare the HPC application
+In this section, you create a [YAML build pipeline](/azure/devops/pipelines/get-started-yaml) to work with the ffmpeg software that runs in the Batch account.
-In this section, you'll work with the **hpc-application** folder. This folder contains the software (ffmpeg) that will run within the Azure Batch account.
+1. In your Azure DevOps project, select **Pipelines** from the left navigation, and then select **New pipeline**.
-1. Navigate to the Builds section of Azure Pipelines in your Azure DevOps organization. Create a **New pipeline**.
+1. On the **Where is your code** screen, select **Azure Repos Git**.
- ![Screenshot of the New pipeline screen.](media/batch-ci-cd/new-build-pipeline.jpg)
+ ![Screenshot of the New pipeline screen.](media/batch-ci-cd/new-build-pipeline.png)
-1. You have two options to create a Build pipeline:
+1. On the **Select a repository** screen, select your repository.
- a. [Use the Visual Designer](/azure/devops/pipelines/get-started-designer). To do so, select "Use the visual designer" on the **New pipeline** page.
+ >[!NOTE]
+ >You can also create a build pipeline by using a visual designer. On the **New pipeline** page, select **Use the classic editor**. You can use a YAML template in the visual designer. For more information, see [Define your Classic pipeline](/azure/devops/pipelines/release/define-multistage-release-process).
- b. [Use YAML Builds](/azure/devops/pipelines/get-started-yaml). You can create a new YAML pipeline by clicking the Azure Repos or GitHub option on the **New pipeline** page. Alternatively, you can store the example below in your source control and reference an existing YAML file by selecting Visual Designer, then using the YAML template.
+1. On the **Configure your pipeline** screen, select **Existing Azure Pipelines YAML file**.
- ```yml
- # To publish an application into Azure Batch, we need to
- # first zip the file, and then publish an artifact, so that
- # we can take the necessary steps in our release pipeline.
- steps:
- # First, we Zip up the files required in the Batch Account
- # For this instance, those are the ffmpeg files
- - task: ArchiveFiles@2
- displayName: 'Archive applications'
- inputs:
- rootFolderOrFile: hpc-application
- includeRootFolder: false
- archiveFile: '$(Build.ArtifactStagingDirectory)/package/$(Build.BuildId).zip'
- # Publish that zip file, so that we can use it as part
- # of our Release Pipeline later
- - task: PublishPipelineArtifact@0
- inputs:
- artifactName: 'hpc-application'
- targetPath: '$(Build.ArtifactStagingDirectory)/package'
- ```
+1. On the **Select an existing YAML file** screen, select the *hpc-app.build.yml* file from your repository, and then select **Continue**.
-1. Once the build is configured as needed, select **Save & Queue**. If you have continuous integration enabled (in the **Triggers** section), the build will automatically trigger when a new commit to the repository is made, meeting the conditions set in the build.
+1. On the **Review your pipeline YAML** screen, review the build configuration, and then select **Run**, or select the dropdown caret next to **Run** and select **Save**. This template enables continuous integration, so the build automatically triggers when a new commit to the repository meets the conditions set in the build.
- ![Screenshot of an existing Build Pipeline.](media/batch-ci-cd/existing-build-pipeline.jpg)
+ ![Screenshot of an existing Build pipeline.](media/batch-ci-cd/review-pipeline.png)
-1. View live updates on the progress of your build in Azure DevOps by navigating to the **Build** section of Azure Pipelines. Select the appropriate build from your build definition.
+1. You can view live build progress updates. To see build outcomes, select the appropriate run from your build definition in Azure Pipelines.
- ![Screenshot of live outputs from build in Azure DevOps.](media/batch-ci-cd/Build-1.jpg)
+ ![Screenshot of live outputs from build in Azure Pipelines.](media/batch-ci-cd/first-build.png)
> [!NOTE]
-> If you use a client application to execute your HPC solution, you need to create a separate build definition for that application. You can find a number of how-to guides in the [Azure Pipelines](/azure/devops/pipelines/get-started/index) documentation.
+> If you use a client application to run your HPC solution, you need to create a separate build definition for that application. For how-to guides, see the [Azure Pipelines](/azure/devops/pipelines/get-started/index) documentation.
+
+### Create the Release pipeline
+
+You use an Azure Pipelines [Release pipeline](/azure/devops/pipelines/release/releases) to deploy your application and underlying infrastructure. Release pipelines enable CD and automate your release process. There are several steps to deploy your application and underlying infrastructure.
+
+The [linked templates](/azure/azure-resource-manager/templates/linked-templates) for this solution must be accessible from a public HTTP or HTTPS endpoint. This endpoint could be a GitHub repository, an Azure Blob Storage account, or another storage location. To keep the uploaded template artifacts secure, store them privately and grant access by using a shared access signature (SAS) token.
+
+The following example demonstrates how to deploy an infrastructure and application by using templates from an Azure Storage blob.
+
+#### Set up the pipeline
-## Continuous deployment
+1. In your Azure DevOps project, select **Pipelines** > **Releases** in the left navigation.
-Azure Pipelines is also used to deploy your application and underlying infrastructure. [Release pipelines](/azure/devops/pipelines/release) enable continuous deployment and automates your release process.
+1. On the next screen, select **New** > **New release pipeline**.
-### Deploy your application and underlying infrastructure
+1. On the **Select a template** screen, select **Empty job**, and then close the **Stage** screen.
-There are a number of steps involved in deploying the infrastructure. Because this solution uses [linked templates](../azure-resource-manager/templates/linked-templates.md), those templates will need to be accessible from a public endpoint (HTTP or HTTPS). This could be a repository on GitHub, or an Azure Blob Storage Account, or another storage location. The uploaded template artifacts can remain secure, as they can be held in a private mode but accessed using some form of shared access signature (SAS) token.
+1. Select **New release pipeline** at the top of the page and rename the pipeline to something relevant for your pipeline, such as *Deploy Azure Batch + Pool*.
-The following example demonstrates how to deploy an infrastructure with templates from an Azure Storage blob.
+ ![Screenshot of the initial release pipeline.](media/batch-ci-cd/rename.png)
-1. Create a **New Release Definition**, then select an empty definition. Rename the newly created environment to something relevant for your pipeline.
+1. In the **Artifacts** section, select **Add**.
- ![Screenshot of the initial release pipeline.](media/batch-ci-cd/Release-0.jpg)
+1. On the **Add an artifact** screen, select **Build** and then select your Build pipeline to get the output for the HPC application.
-1. Create a dependency on the Build Pipeline to get the output for the HPC application.
+ > [!NOTE]
+ > You can create a **Source alias** or accept the default. Take note of the **Source alias** value, as you need it to create tasks in the release definition.
- > [!NOTE]
- > Take note of the **Source Alias**, as this will be needed when tasks are created inside of the Release Definition.
+ ![Screenshot showing an artifact link to the hpc-application package in the build pipeline.](media/batch-ci-cd/build-artifact.png)
- ![Screenshot showing an artifact link to the HPCApplicationPackage in the appropriate build pipeline.](media/batch-ci-cd/Release-1.jpg)
+1. Select **Add**.
-1. Create a link to another artifact, this time, an Azure Repo. This is required to access the Resource Manager templates stored in your repository. As Resource Manager templates do not require compilation, you don't need to push them through a build pipeline.
+1. On the pipeline page, select **Add** next to **Artifacts** to create a link to another artifact, your Azure Repos repository. This link is required to access the ARM templates in your repository. ARM templates don't need compilation, so you don't need to push them through a build pipeline.
- > [!NOTE]
- > Once again, note the **Source Alias**, as this will be needed later.
+ > [!NOTE]
+ > Again note the **Source alias** value to use later.
- ![Screenshot showing an artifact link to the Azure Repos.](media/batch-ci-cd/Release-2.jpg)
+ ![Screenshot showing an artifact link to the Azure Repos repository.](media/batch-ci-cd/repo-artifact.png)
-1. Navigate to the **variables** section. You'll want to create a number of variables in your pipeline so that you don't have to re-enter the same information into multiple tasks. This example uses the following variables:
+1. Select the **Variables** tab. Create the following variables in your pipeline so you don't have to reenter the same information into multiple tasks.
- - **applicationStorageAccountName**: Name of the storage account that holds the HPC application binaries
- - **batchAccountApplicationName**: Name of the application in the Batch account
- - **batchAccountName**: Name of the Batch account
- - **batchAccountPoolName**: Name of the pool of VMs doing the processing
- - **batchApplicationId**: Unique ID for the Batch application
- - **batchApplicationVersion**: Semantic version of your Batch application (that is, the ffmpeg binaries)
- - **location**: Location for the Azure resources to be deployed
- - **resourceGroupName**: Name of the resource group to be created, and where your resources will be deployed
- - **storageAccountName**: Name of the storage account that holds the linked Resource Manager templates
+ |Name|Value|
+ |-|--|
+ |**applicationStorageAccountName**|Name for the storage account to hold the HPC application binaries.|
+ |**batchAccountApplicationName**|Name for the application in the Batch account.|
+ |**batchAccountName**|Name for the Batch account.|
+ |**batchAccountPoolName**|Name for the pool of virtual machines (VMs) to do the processing.|
+ |**batchApplicationId**|Unique ID for the Batch application, of the form:<br>`/subscriptions/<subscriptionId>/resourceGroups/<resourceGroupName>^`<br>`/providers/Microsoft.Batch/batchAccounts/<batchAccountName>^`<br>`/applications/<batchAccountApplicationName>`.<br><br>Replace the `<subscriptionId>` placeholder with your Azure subscription ID, and the other placeholders with the values you set for the other variables in this list.|
+ |**batchApplicationVersion**|Semantic version of your Batch application, in this case *4.3.1*.|
+ |**location**|Azure region for the resources to be deployed.|
+ |**resourceGroupName**|Name for the resource group to deploy resources in.|
+ |**storageAccountName**|Name for the storage account to hold the linked ARM templates.|
+ |**StorageContainerSasToken**|`$(<referenceName>.StorageContainerSasToken)`. Replace the `<referenceName>` placeholder with the **Reference name** value you configure in the **Output Variables** section of the following **Azure File Copy** step.|
+ |**StorageContainerUri**|`$(<referenceName>.StorageContainerUri)`. Replace the `<referenceName>` placeholder with the **Reference name** value you configure in the **Output Variables** section of the Azure File Copy step.|
- ![Screenshot showing variables set for the Azure Pipelines release.](media/batch-ci-cd/Release-4.jpg)
+ ![Screenshot showing variables set for the Azure Pipelines release.](media/batch-ci-cd/variables.png)
-1. Navigate to the tasks for the Dev environment. In the below snapshot, you can see six tasks. These tasks will: download the zipped ffmpeg files, deploy a storage account to host the nested Resource Manager templates, copy those Resource Manager templates to the storage account, deploy the batch account and required dependencies, create an application in the Azure Batch Account and upload the application package to the Azure Batch Account.
+1. Select the **Tasks** tab, and then select **Agent job**.
- ![Screenshot showing the tasks used to release the HPC Application to Azure Batch.](media/batch-ci-cd/Release-3.jpg)
+1. On the **Agent job** screen, under **Agent pool**, select **Azure Pipelines**.
-1. Add the **Download Pipeline Artifact (Preview)** task and set the following properties:
- - **Display Name:** Download ApplicationPackage to Agent
- - **The name of the artifact to download:** hpc-application
- - **Path to download to**: $(System.DefaultWorkingDirectory)
+1. Under **Agent Specification**, select **windows-latest**.
-1. Create a Storage Account to store your Azure Resource Manager templates. An existing storage account from the solution could be used, but to support this self-contained sample and isolation of content, you'll make a dedicated storage account.
+ ![Screenshot showing the Agent job settings.](media/batch-ci-cd/agent-job.png)
- Add the **Azure Resource Group Deployment** task and set the following properties:
- - **Display Name:** Deploy storage account for Resource Manager templates
- - **Azure Subscription:** Select the appropriate Azure subscription
- - **Action**: Create or update resource group
- - **Resource Group**: $(resourceGroupName)
- - **Location**: $(location)
- - **Template**: $(System.ArtifactsDirectory)/**{YourAzureRepoArtifactSourceAlias}**/arm-templates/storageAccount.json
- - **Override template parameters**: -accountName $(storageAccountName)
+#### Add tasks
-1. Upload the artifacts from source control into the storage account by using Azure Pipelines. As part of this Azure Pipelines task, the storage account container URI and SAS Token can be outputted to a variable in Azure Pipelines, allowing them to be reused throughout this agent phase.
+Create six tasks to:
- Add the **Azure File Copy** task and set the following properties:
- - **Source:** $(System.ArtifactsDirectory)/**{YourAzureRepoArtifactSourceAlias}**/arm-templates/
- - **Azure Connection Type**: Azure Resource Manager
- - **Azure Subscription:** Select the appropriate Azure subscription
- - **Destination Type**: Azure Blob
- - **RM Storage Account**: $(storageAccountName)
- - **Container Name**: templates
- - **Storage Container URI**: templateContainerUri
- - **Storage Container SAS Token**: templateContainerSasToken
+- Download the zipped ffmpeg files.
+- Deploy a storage account to host the nested ARM templates.
+- Copy the ARM templates to the storage account.
+- Deploy the Batch account and required dependencies.
+- Create an application in the Batch account.
+- Upload the application package to the Batch account.
-1. Deploy the orchestrator template. This template includes parameters for the storage account container URI and SAS token. The variables required in the Resource Manager template are either held in the variables section of the release definition, or were set from another Azure Pipelines task (for example, part of the Azure Blob Copy task).
+For each new task that the following steps specify:
- Add the **Azure Resource Group Deployment** task and set the following properties:
- - **Display Name:** Deploy Azure Batch
- - **Azure Subscription:** Select the appropriate Azure subscription
- - **Action**: Create or update resource group
- - **Resource Group**: $(resourceGroupName)
- - **Location**: $(location)
- - **Template**: $(System.ArtifactsDirectory)/**{YourAzureRepoArtifactSourceAlias}**/arm-templates/deployment.json
- - **Override template parameters**: `-templateContainerUri $(templateContainerUri) -templateContainerSasToken $(templateContainerSasToken) -batchAccountName $(batchAccountName) -batchAccountPoolName $(batchAccountPoolName) -applicationStorageAccountName $(applicationStorageAccountName)`
+1. Select the **+** symbol next to **Agent job** in the left pane.
+1. Search for and select the specified task in the right pane.
+1. Add or select the properties to configure the task.
+1. Select **Add**.
- A common practice is to use Azure Key Vault tasks. If the service principal connected to your Azure subscription has an appropriate access policies set, it can download secrets from an Azure Key Vault and be used as variables in your pipeline. The name of the secret will be set with the associated value. For example, a secret of sshPassword could be referenced with $(sshPassword) in the release definition.
+ ![Screenshot showing the tasks used to release the HPC application to Azure Batch.](media/batch-ci-cd/release-pipeline.png)
-1. The next steps call the Azure CLI. The first is used to create an application in Azure Batch and upload associated packages.
+Create the tasks as follows:
- Add the **Azure CLI** task and set the following properties:
- - **Display Name:** Create application in Azure Batch account
- - **Azure Subscription:** Select the appropriate Azure subscription
- - **Script Location**: Inline Script
- - **Inline Script**: `az batch application create --application-id $(batchApplicationId) --name $(batchAccountName) --resource-group $(resourceGroupName)`
+1. Select the **Download Pipeline Artifacts** task, and set the following properties:
+ - **Display name**: Enter *Download ApplicationPackage to Agent*.
+ - **Artifact name**: Enter *hpc-application*.
+ - **Destination directory**: Enter `$(System.DefaultWorkingDirectory)`.
-1. The second step is used to upload associated packages to the application (in this case, the ffmpeg files).
+1. Create an Azure Storage account to store your ARM templates. You could use an existing storage account, but to support this self-contained example and isolation of content, make a dedicated storage account.
- Add the **Azure CLI** task and set the following properties:
- - **Display Name:** Upload package to Azure Batch account
- - **Azure Subscription:** Select the appropriate Azure subscription
- - **Script Location**: Inline Script
- - **Inline Script**: `az batch application package create --application-id $(batchApplicationId) --name $(batchAccountName) --resource-group $(resourceGroupName) --version $(batchApplicationVersion) --package-file=$(System.DefaultWorkingDirectory)/$(Release.Artifacts.{YourBuildArtifactSourceAlias}.BuildId).zip`
+ Select the **ARM Template deployment: Resource Group scope** task, and set the following properties:
+ - **Display name:** Enter *Deploy storage account for ARM templates*.
+ - **Azure Resource Manager connection**: Select the appropriate Azure subscription.
+ - **Subscription:** Select the appropriate Azure subscription.
+ - **Action**: Select **Create or update resource group**.
+ - **Resource group**: Enter `$(resourceGroupName)`.
+ - **Location**: Enter `$(location)`.
+ - **Template**: Enter `$(System.ArtifactsDirectory)/<AzureRepoArtifactSourceAlias>/arm-templates/storageAccount.json`. Replace the `<AzureRepoArtifactSourceAlias>` placeholder with the repository Source alias you noted previously.
+ - **Override template parameters**: Enter `-accountName $(storageAccountName)`.
- > [!NOTE]
- > The version number of the application package is set to a variable. This allows overwriting previous versions of the package and lets you manually control the version number of the package pushed to Azure Batch.
+1. Upload the artifacts from source control into the storage account. Part of this Azure File Copy task outputs the Storage account container URI and SAS token to a variable, so they can be reused in later steps.
-1. Create a new release by selecting **Release > Create a new release**. Once triggered, select the link to your new release to view the status.
+ Select the **Azure File Copy** task, and set the following properties:
+ - **Display name:** Enter *AzureBlob File Copy*.
+ - **Source:** Enter `$(System.ArtifactsDirectory)/<AzureRepoArtifactSourceAlias>/arm-templates/`. Replace the `<AzureRepoArtifactSourceAlias>` placeholder with the repository Source alias you noted previously.
+ - **Azure Subscription:** Select the appropriate Azure subscription.
+ - **Destination Type**: Select **Azure Blob**.
+ - **RM Storage Account**: Enter `$(storageAccountName)`.
+ - **Container Name**: Enter *templates*.
+ - **Reference name**: Expand **Output Variables**, then enter *ffmpeg*.
-1. View the live output from the agent by selecting the **Logs** button underneath your environment.
+ >[!NOTE]
+ >If this step fails, make sure your Azure DevOps organization has the **Storage Blob Data Contributor** role on the storage account.
- ![Screenshot showing status of the release.](media/batch-ci-cd/Release-5.jpg)
+1. Deploy the orchestrator ARM template to create the Batch account and pool. This template includes parameters for the Storage account container URI and SAS token. The variables required in the ARM template are held in the variables section of the release definition and were set from the AzureBlob File Copy task.
+
+ Select the **ARM Template deployment: Resource Group scope** task, and set the following properties:
+ - **Display name:** Enter *Deploy Azure Batch*.
+ - **Azure Resource Manager connection:** Select the appropriate Azure subscription.
+ - **Subscription:** Select the appropriate Azure subscription.
+ - **Action**: Select **Create or update resource group**.
+ - **Resource group**: Enter `$(resourceGroupName)`.
+ - **Location**: Enter `$(location)`.
+ - **Template location**: Select **URL of the file**.
+ - **Template link:** Enter `$(StorageContainerUri)arm-templates/deployment.json$(StorageContainerSasToken)`.
+ - **Override template parameters**: Enter `-StorageContainerUri $(StorageContainerUri) -StorageContainerSasToken $(StorageContainerSasToken) -applicationStorageAccountName $(applicationStorageAccountName) -batchAccountName $(batchAccountName) -batchAccountPoolName $(batchAccountPoolName)`.
+
+ A common practice is to use Azure Key Vault tasks. If the service principal connected to your Azure subscription has an appropriate access policy set, it can download secrets from Key Vault and be used as a variable in your pipeline. The name of the secret is set with the associated value. For example, you could reference a secret of **sshPassword** with *$(sshPassword)* in the release definition.
+
+1. Call Azure CLI to create an application in Azure Batch.
+
+ Select the **Azure CLI** task, and set the following properties:
+ - **Display name:** Enter *Create application in Azure Batch account*.
+ - **Azure Resource Manager connection:** Select the appropriate Azure subscription.
+ - **Script Type**: Select **PowerShell Core**.
+ - **Script Location**: Select **Inline script**.
+ - **Inline Script**: Enter `az batch application create --application-name $(batchAccountApplicationName) --name $(batchAccountName) --resource-group $(resourceGroupName)`.
+
+1. Call Azure CLI to upload associated packages to the application, in this case the ffmpeg files.
+
+ Select the **Azure CLI** task, and set the following properties:
+ - **Display name:** Enter *Upload package to Azure Batch account*.
+ - **Azure Resource Manager connection:** Select the appropriate Azure subscription.
+ - **Script Type**: Select **PowerShell Core**.
+ - **Script Location**: Select **Inline script**.
+ - **Inline Script**: Enter `az batch application package create --application-name $(batchAccountApplicationName) --name $(batchAccountName) --resource-group $(resourceGroupName) --version $(batchApplicationVersion) --package-file=$(System.DefaultWorkingDirectory)/$(Release.Artifacts.<AzureBuildArtifactSourceAlias>.BuildId).zip`. Replace the `<AzureBuildArtifactSourceAlias>` placeholder with the Build Source alias you noted previously.
+
+ > [!NOTE]
+ > The version number of the application package is set to a variable. The variable allows overwriting previous versions of the package and lets you manually control the package version pushed to Azure Batch.
+
+#### Create and run the release
+
+1. When you finish creating all the steps, select **Save** at the top of the pipeline page, and then select **OK**.
+
+1. Select **Create release** at the top of the page.
+
+1. To view live release status, select the link at the top of the page that says the release has been created.
+
+1. To view the log output from the agent, hover over the stage and then select the **Logs** button.
+
+ ![Screenshot showing status of the release.](media/batch-ci-cd/release.png)
## Test the environment
-Once the environment is set up, confirm the following tests can be completed successfully.
+Once the environment is set up, confirm that the following tests run successfully. Replace the placeholders with your resource group and Batch account values.
+
+#### Connect to the Batch account
-Connect to the new Azure Batch Account, using the Azure CLI from a PowerShell command prompt.
+Connect to the new Batch account by using Azure CLI from a command prompt.
-- Sign in to your Azure account with `az login` and follow the instructions to authenticate.
-- Now authenticate the Batch account: `az batch account login -g <resourceGroup> -n <batchAccount>`
+1. Sign in to your Azure account with `az login` and follow the instructions to authenticate.
+1. Authenticate the Batch account with `az batch account login -g <resourceGroup> -n <batchAccount>`.
#### List the available applications

```azurecli
-az batch application list -g <resourcegroup> -n <batchaccountname>
+az batch application list -g <resourceGroup> -n <batchAccount>
```
-#### Check the pool is valid
+#### Check that the pool is valid
```azurecli
az batch pool list
```
-Note the value of `currentDedicatedNodes` from the output of this command. This value is adjusted in the next test.
+In the command output, note the value of `currentDedicatedNodes` to adjust in the next test.
#### Resize the pool
-Resize the pool so there are compute nodes available for job and task testing, check with the pool list command to see the current status until the resizing has completed and there are available nodes
+Run the following command to resize the pool so there are compute nodes available for job and task testing. Replace the `<poolName>` placeholder with your pool name value, and the `<targetNumber>` placeholder with a number that's greater than the `currentDedicatedNodes` from the previous command output. Check status by running the `az batch pool list` command until the resizing completes and shows the target number of nodes.
```azurecli
-az batch pool resize --pool-id <poolname> --target-dedicated-nodes 4
+az batch pool resize --pool-id <poolName> --target-dedicated-nodes <targetNumber>
```

## Next steps

See these tutorials to learn how to interact with a Batch account via a simple application.

-- [Run a parallel workload with Azure Batch using the Python API](tutorial-parallel-python.md)
-- [Run a parallel workload with Azure Batch using the .NET API](tutorial-parallel-dotnet.md)
+- [Run a parallel workload with Azure Batch by using the Python API](tutorial-parallel-python.md)
+- [Run a parallel workload with Azure Batch by using the .NET API](tutorial-parallel-dotnet.md)
batch Batch Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/batch-diagnostics.md
Title: Metrics, alerts, and diagnostic logs
-description: Record and analyze diagnostic log events for Azure Batch account resources like pools and tasks.
+description: Learn how to record and analyze diagnostic log events for Azure Batch account resources like pools and tasks.
Previously updated : 04/13/2021 Last updated : 04/05/2023
Azure Monitor collects [metrics](../azure-monitor/essentials/data-platform-metrics.md) and [diagnostic logs](../azure-monitor/essentials/platform-logs-overview.md) for resources in your Azure Batch account.
-You can collect and consume this data in a variety of ways to monitor your Batch account and diagnose issues. You can also configure [metric alerts](../azure-monitor/alerts/alerts-overview.md) so you receive notifications when a metric reaches a specified value.
+You can collect and consume this data in various ways to monitor your Batch account and diagnose issues. You can also configure [metric alerts](../azure-monitor/alerts/alerts-overview.md) so you receive notifications when a metric reaches a specified value.
## Batch metrics
-[Metrics](../azure-monitor/essentials/data-platform-metrics.md) are Azure telemetry data (also called performance counters) that are emitted by your Azure resources and consumed by the Azure Monitor service. Examples of metrics in a Batch account are Pool Create Events, Low-Priority Node Count, and Task Complete Events. These metrics can help identify trends and can be used for data analysis.
+[Metrics](../azure-monitor/essentials/data-platform-metrics.md) are Azure data (also called performance counters) that your Azure resources emit, and the Azure Monitor service consumes that data. Examples of metrics in a Batch account are Pool Create Events, Low-Priority Node Count, and Task Complete Events. These metrics can help identify trends and can be used for data analysis.
See the [list of supported Batch metrics](../azure-monitor/essentials/metrics-supported.md#microsoftbatchbatchaccounts). Metrics are:

-- Enabled by default in each Batch account without additional configuration
-- Generated every 1 minute
-- Not persisted automatically, but have a 30-day rolling history. You can persist activity metrics as part of diagnostic logging.
+- Enabled by default in each Batch account without extra configuration.
+- Generated every 1 minute.
+- Not persisted automatically, but they have a 30-day rolling history. You can persist activity metrics as part of diagnostic logging.
## View Batch metrics
-In the Azure portal, the **Overview** page for the Batch account will show key node, core, and task metrics by default.
+In the Azure portal, the **Overview** page for the Batch account shows key node, core, and task metrics by default.
-To view additional metrics for a Batch account:
+To view other metrics for a Batch account:
-1. In the Azure portal, select **All services** > **Batch accounts**, and then select the name of your Batch account.
-1. Under **Monitoring**, select **Metrics**.
+1. In the Azure portal, search for and select **Batch accounts**, and then select the name of your Batch account.
+1. Under **Monitoring** in the left side navigation menu, select **Metrics**.
1. Select **Add metric** and then choose a metric from the dropdown list.
-1. Select an **Aggregation** option for the metric. For count-based metrics (like "Dedicated Core Count" or "Low-Priority Node Count"), use the **Avg** aggregation. For event-based metrics (like "Pool Resize Complete Events"), use the **Count**" aggregation. Avoid using the **Sum** aggregation, which adds up the values of all data points received over the period of the chart.
-1. To add additional metrics, repeat steps 3 and 4.
+1. Select an **Aggregation** option for the metric. For count-based metrics (like "Dedicated Core Count" or "Low-Priority Node Count"), use the **Avg** aggregation. For event-based metrics (like "Pool Resize Complete Events"), use the **Count** aggregation. Avoid using the **Sum** aggregation, which adds up the values of all data points received over the period of the chart.
+1. To add other metrics, repeat steps 3 and 4.
+
+ :::image type="content" source="./media/batch-diagnostics/add-metric.png" alt-text="Screenshot of the metrics page of a batch account in the Azure portal. Metrics is highlighted in the left side navigation menu. The Metric and Aggregation options for a metric are highlighted as well.":::
+ You can also retrieve metrics programmatically with the Azure Monitor APIs. For an example, see [Retrieve Azure Monitor metrics with .NET](/samples/azure-samples/monitor-dotnet-metrics-api/monitor-dotnet-metrics-api/). > [!NOTE]
-> Metrics emitted in the last 3 minutes may still be aggregating, so values may be under-reported during this timeframe. Metric delivery is not guaranteed, and may be affected by out-of-order delivery, data loss, or duplication.
+> Metrics emitted in the last 3 minutes might still be aggregating, so values might be under-reported during this time frame. Metric delivery is not guaranteed and might be affected by out-of-order delivery, data loss, or duplication.
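You can also pull a metric from the Azure CLI instead of the portal. The following is a minimal sketch, not taken from this article: `$batchAccountId` is a placeholder for the full resource ID of your Batch account, and the metric name `CoreCount` (dedicated core count) is an assumption to verify against the supported metrics list.

```azurecli
# List the dedicated core count, averaged per minute.
az monitor metrics list \
    --resource $batchAccountId \
    --metric "CoreCount" \
    --interval PT1M \
    --aggregation Average
```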
## Batch metric alerts
-You can configure near real-time metric alerts that trigger when the value of a specified metric crosses a threshold that you assign. The alert generates a notification when the alert is "Activated" (when the threshold is crossed and the alert condition is met) as well as when it is "Resolved" (when the threshold is crossed again and the condition is no longer met).
+You can configure near real-time metric alerts that trigger when the value of a specified metric crosses a threshold that you assign. The alert generates a notification when the alert is *Activated* (when the threshold is crossed and the alert condition is met). The alert also generates a notification when it's *Resolved* (when the threshold is crossed again and the condition is no longer met).
-Because metric delivery can be subject to inconsistencies such as out-of-order delivery, data loss, or duplication, we recommend avoiding alerts that trigger on a single data point. Instead, use thresholds to account for any inconsistencies such as out-of-order delivery, data loss, and duplication over a period of time.
+Because metric delivery can be subject to inconsistencies such as out-of-order delivery, data loss, or duplication, you should avoid alerts that trigger on a single data point. Instead, use thresholds to account for any inconsistencies such as out-of-order delivery, data loss, and duplication over a period of time.
-For example, you might want to configure a metric alert when your low priority core count falls to a certain level, so you can adjust the composition of your pools. For best results, set a period of 10 or more minutes, where the alert will be triggered if the average low priority core count falls below the threshold value for the entire period. This allows time for metrics to aggregate so that you get more accurate results.
+For example, you might want to configure a metric alert when your low priority core count falls to a certain level. You could then use this alert to adjust the composition of your pools. For best results, set a period of 10 or more minutes where the alert triggers if the average low priority core count falls below the threshold value for the entire period. This time period gives the metrics time to aggregate so that you get more accurate results.
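If you script your monitoring setup, a comparable rule can be created from the Azure CLI. Treat this as a sketch under stated assumptions: the metric name `LowPriorityCoreCount`, the threshold of 8 cores, and the `$batchAccountId` and `$actionGroupId` placeholders are illustrative values, not values from this article.

```azurecli
# Alert when the average low-priority core count stays below 8
# over a 10-minute window, evaluated every 5 minutes.
az monitor metrics alert create \
    --name "low-priority-cores-low" \
    --resource-group myResourceGroup \
    --scopes $batchAccountId \
    --condition "avg LowPriorityCoreCount < 8" \
    --window-size 10m \
    --evaluation-frequency 5m \
    --action $actionGroupId
```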
To configure a metric alert in the Azure portal:
-1. Select **All services** > **Batch accounts**, and then select the name of your Batch account.
-1. Under **Monitoring**, select **Alerts**, then select **New alert rule**.
-1. Select **Add condition**, then choose a metric.
-1. Select the desired values for **Chart period**, **Threshold**, **Operator**, and **Aggregation type**.
-1. Enter a **Threshold value** and select the **Unit** for the threshold. Then select **Done**.
-1. Add an [action group](../azure-monitor/alerts/action-groups.md) to the alert either by selecting an existing action group or creating a new action group.
-1. In the **Alert rule details** section, enter an **Alert rule name** and **Description**. If you want the alert to be enabled immediately, ensure that the **Enable alert rule upon creation** box is checked.
-1. Select **Create alert rule**.
+1. In the Azure portal, search for and select **Batch accounts**, and then select the name of your Batch account.
+1. Under **Monitoring** in the left side navigation menu, select **Alerts**, and then select **Create** > **Alert Rule**.
+1. On the **Condition** page, select a **Signal** from the dropdown list.
+1. Enter the logic for your **Alert Rule** in the fields specific to the **Signal** you choose. The following screenshot shows the options for **Task Fail Events**.
+
    :::image type="content" source="./media/batch-diagnostics/create-alert-rule.png" alt-text="Screenshot of the Conditions tab on the Create an alert rule page." lightbox="./media/batch-diagnostics/create-alert-rule-lightbox.png":::
-For more information about creating metric alerts, see [Understand how metric alerts work in Azure Monitor](../azure-monitor/alerts/alerts-metric-overview.md) and [Create, view, and manage metric alerts using Azure Monitor](../azure-monitor/alerts/alerts-metric.md).
+1. On the **Details** page, enter a name for your alert.
+1. Select **Review + create** > **Create**.
-You can also configure a near real-time alert using the [Azure Monitor REST API](/rest/api/monitor/). For more information, see [Overview of alerts in Microsoft Azure](../azure-monitor/alerts/alerts-overview.md). To include job, task, or pool-specific information in your alerts, see [Azure Monitor log Alerts](../azure-monitor/alerts/alerts-log.md).
+For more information about creating metric alerts, see [Types of Azure Monitor alerts](../azure-monitor/alerts/alerts-metric-overview.md) and [Create a new alert rule](../azure-monitor/alerts/alerts-metric.md).
+
+You can also configure a near real-time alert by using the [Azure Monitor REST API](/rest/api/monitor/). For more information, see [Overview of alerts in Microsoft Azure](../azure-monitor/alerts/alerts-overview.md). To include job, task, or pool-specific information in your alerts, see [Create a new alert rule](../azure-monitor/alerts/alerts-log.md).
## Batch diagnostics [Diagnostic logs](../azure-monitor/essentials/platform-logs-overview.md) contain information emitted by Azure resources that describe the operation of each resource. For Batch, you can collect the following logs: - **ServiceLog**: [events emitted by the Batch service](#service-log-events) during the lifetime of an individual resource such as a pool or task.-- **AllMetrics**: Metrics at the Batch account level.
+- **AllMetrics**: metrics at the Batch account level.
You must explicitly enable diagnostic settings for each Batch account you want to monitor.
A common scenario is to select an Azure Storage account as the log destination.
Alternately, you can: -- Stream Batch diagnostic log events to an [Azure Event Hub](../event-hubs/event-hubs-about.md). Event Hubs can ingest millions of events per second, which you can then transform and store using any real-time analytics provider.
+- Stream Batch diagnostic log events to [Azure Event Hubs](../event-hubs/event-hubs-about.md). Event Hubs can ingest millions of events per second, which you can then transform and store by using any real-time analytics provider.
- Send diagnostic logs to [Azure Monitor logs](../azure-monitor/logs/log-query-overview.md), where you can analyze them or export them for analysis in Power BI or Excel. > [!NOTE]
-> You may incur additional costs to store or process diagnostic log data with Azure services.
+> You might incur additional costs to store or process diagnostic log data with Azure services.
### Enable collection of Batch diagnostic logs
-To create a new diagnostic setting in the Azure portal, follow the steps below.
+To create a new diagnostic setting in the Azure portal, use the following steps.
-1. In the Azure portal, select **All services** > **Batch accounts**, and then select the name of your Batch account.
-2. Under **Monitoring**, select **Diagnostic settings**.
+1. In the Azure portal, search for and select **Batch accounts**, and then select the name of your Batch account.
+2. Under **Monitoring** in the left side navigation menu, select **Diagnostic settings**.
3. In **Diagnostic settings**, select **Add diagnostic setting**. 4. Enter a name for the setting.
-5. Select a destination: **Send to Log Analytics**, **Archive to a storage account**, or **Stream to an event hub**. If you select a storage account, you can optionally select the number of days to retain data for each log. If you don't specify a number of days for retention, data is retained during the life of the storage account.
-6. Select **ServiceLog**, **AllMetrics**, or both.
+5. Select a destination: **Send to Log Analytics workspace**, **Archive to a storage account**, **Stream to an event hub**, or **Send to partner solution**. If you select a storage account, you can optionally select the number of days to retain data for each log. If you don't specify the number of days for retention, data is retained during the life of the storage account.
+6. Select any options in either the **Logs** or **Metrics** section.
7. Select **Save** to create the diagnostic setting.
-You can also enable log collection by [creating diagnostic settings in the Azure portal](../azure-monitor/essentials/diagnostic-settings.md), using a [Resource Manager template](../azure-monitor/essentials/resource-manager-diagnostic-settings.md), or using Azure PowerShell or the Azure CLI. For more information, see [Overview of Azure platform logs](../azure-monitor/essentials/platform-logs-overview.md).
+The following screenshot shows an example diagnostic setting called *My diagnostic setting*. It sends **allLogs** and **AllMetrics** to a Log Analytics workspace.
++
+You can also enable log collection by [creating diagnostic settings in the Azure portal](../azure-monitor/essentials/diagnostic-settings.md) or by using a [Resource Manager template](../azure-monitor/essentials/resource-manager-diagnostic-settings.md). You can also use Azure PowerShell or the Azure CLI. For more information, see [Overview of Azure platform logs](../azure-monitor/essentials/platform-logs-overview.md).
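For example, a setting similar to the one in the screenshot might be scripted with the Azure CLI. This is a sketch: the setting name, the `$batchAccountId` and `$workspaceId` placeholders, and the category names are assumptions to adapt to your account.

```azurecli
# Send Batch service logs and all metrics to a Log Analytics workspace.
az monitor diagnostic-settings create \
    --name "My diagnostic setting" \
    --resource $batchAccountId \
    --workspace $workspaceId \
    --logs '[{"category":"ServiceLog","enabled":true}]' \
    --metrics '[{"category":"AllMetrics","enabled":true}]'
```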
### Access diagnostics logs in storage
BATCHACCOUNTS/MYBATCHACCOUNT/y=2018/m=03/d=05/h=22/m=00/PT1H.json
Each `PT1H.json` blob file contains JSON-formatted events that occurred within the hour specified in the blob URL (for example, `h=12`). During the present hour, events are appended to the `PT1H.json` file as they occur. The minute value (`m=00`) is always `00`, since diagnostic log events are broken into individual blobs per hour. (All times are in UTC.)
-Below is an example of a `PoolResizeCompleteEvent` entry in a `PT1H.json` log file. It includes information about the current and target number of dedicated and low-priority nodes, as well as the start and end time of the operation:
+The following example shows a `PoolResizeCompleteEvent` entry in a `PT1H.json` log file. It includes information about the current and target number of dedicated and low-priority nodes, as well as the start and end time of the operation:
```json
{
  "Tenant": "65298bc2729a4c93b11c00ad7e660501",
  "time": "2019-08-22T20:59:13.5698778Z",
  "resourceId": "/SUBSCRIPTIONS/XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX/RESOURCEGROUPS/MYRESOURCEGROUP/PROVIDERS/MICROSOFT.BATCH/BATCHACCOUNTS/MYBATCHACCOUNT/",
  "category": "ServiceLog",
  "operationName": "PoolResizeCompleteEvent",
  "operationVersion": "2017-06-01",
  "properties": {"id":"MYPOOLID","nodeDeallocationOption":"Requeue","currentDedicatedNodes":10,"targetDedicatedNodes":100,"currentLowPriorityNodes":0,"targetLowPriorityNodes":0,"enableAutoScale":false,"isAutoPool":false,"startTime":"2019-08-22 20:50:59.522","endTime":"2019-08-22 20:59:12.489","resultCode":"Success","resultMessage":"The operation succeeded"}
}
```
-To access the logs in your storage account programmatically, use the Storage APIs.
+To access the logs in your storage account programmatically, use the [Storage APIs](/rest/api/storageservices/).
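For a quick look without writing code, the Azure CLI can list and download the hourly blobs. This is a sketch: the storage account name, the `insights-logs-servicelog` container name, and the blob path are placeholders to replace with the values from your own diagnostic setting.

```azurecli
# List the diagnostic log blobs, then download one hour of events.
az storage blob list \
    --account-name mystorageaccount \
    --container-name insights-logs-servicelog \
    --output table

az storage blob download \
    --account-name mystorageaccount \
    --container-name insights-logs-servicelog \
    --name "<blob-path>/PT1H.json" \
    --file PT1H.json
```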
### Service log events
-Azure Batch service logs contain events emitted by the Batch service during the lifetime of an individual Batch resource, such as a pool or task. Each event emitted by Batch is logged in JSON format. For example, this is the body of a sample **pool create event**:
+Azure Batch service logs contain events emitted by the Batch service during the lifetime of an individual Batch resource, such as a pool or task. Each event emitted by Batch is logged in JSON format. The following example shows the body of a sample **pool create event**:
```json {
Azure Batch service logs contain events emitted by the Batch service during the
} ```
-Service log events emitted by the Batch service include the following:
+The Batch service emits the following log events:
- [Pool create](batch-pool-create-event.md) - [Pool delete start](batch-pool-delete-start-event.md)
Service log events emitted by the Batch service include the following:
## Next steps -- Learn about the [Batch APIs and tools](batch-apis-tools.md) available for building Batch solutions.-- Learn more about [monitoring Batch solutions](monitoring-overview.md).
+- [Overview of Batch APIs and tools](batch-apis-tools.md)
+- [Monitor Batch solutions](monitoring-overview.md)
batch Batch Job Prep Release https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/batch-job-prep-release.md
Title: Job preparation and release tasks on Batch compute nodes description: Use job-level preparation tasks to minimize data transfer to Azure Batch compute nodes, and release tasks for node cleanup at job completion. Previously updated : 04/06/2023 Last updated : 04/11/2023 ms.devlang: csharp
An Azure Batch job often requires setup before its tasks are executed, and post-job maintenance when its tasks are completed. For example, you might need to download common task input data to your compute nodes, or upload task output data to Azure Storage after the job completes. You can use *job preparation* and *job release* tasks for these operations. - A job preparation task runs before a job's tasks, on all compute nodes scheduled to run at least one task.-- A job release task runs once the job is completed, on each node in the pool that executed at least one task.
+- A job release task runs once the job is completed, on each node in the pool that ran a job preparation task.
As with other Batch tasks, you can specify a command line to invoke when a job preparation or release task runs. Job preparation and release tasks offer familiar Batch task features such as:
The job preparation task runs only on nodes that are scheduled to run a task. Th
## Job release task
-Once you mark a job as completed, the job release task runs on each node in the pool that ran at least one task. You mark a job as completed by issuing a terminate request. This request sets the job state to *terminating*, terminates any active or running tasks associated with the job, and runs the job release task. The job then moves to the *completed* state.
+Once you mark a job as completed, the job release task runs on each node in the pool that ran a job preparation task. You mark a job as completed by issuing a terminate request. This request sets the job state to *terminating*, terminates any active or running tasks associated with the job, and runs the job release task. The job then moves to the *completed* state.
> [!NOTE] > Deleting a job also executes the job release task. However, if a job is already terminated, the release task doesn't run a second time if the job is later deleted.
batch Batch Job Task Error Checking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/batch-job-task-error-checking.md
Title: Check for job and task errors
-description: Learn about errors to check for and how to troubleshoot jobs and tasks.
+description: Learn how to check for and handle errors that occur after Azure Batch jobs and tasks are submitted.
Previously updated : 09/08/2021 Last updated : 04/11/2023
-# Job and task error checking
+# Azure Batch job and task errors
-There are various errors that can occur when adding jobs and tasks. Detecting failures for these operations is straightforward because any failures are returned immediately by the API, CLI, or UI. However, there are also failures that can happen later, when jobs and tasks are scheduled and run.
+Various errors can happen when you add, schedule, or run Azure Batch jobs and tasks. It's straightforward to detect errors that occur when you add jobs and tasks. The API, command line, or user interface usually returns any failures immediately. This article covers how to check for and handle errors that occur after jobs and tasks are submitted.
-This article covers the errors that can occur after jobs and tasks are submitted and how to check for and handle them.
+## Job failures
-## Jobs
+A job is a group of one or more tasks, which specify command lines to run. You can specify the following optional parameters when you add a job. These parameters influence how the job can fail.
-A job is a grouping of one or more tasks, with the tasks actually specifying the command lines to be run.
+- [JobConstraints](/rest/api/batchservice/job/add#jobconstraints). You can optionally use the `maxWallClockTime` property to set the maximum amount of time a job can be active or running. If the job exceeds the `maxWallClockTime`, the job terminates with the `terminateReason` property set to `MaxWallClockTimeExpiry` in the [JobExecutionInformation](/rest/api/batchservice/job/get#jobexecutioninformation).
-When adding a job, the following parameters can be specified which can influence how the job can fail:
+- [JobPreparationTask](/rest/api/batchservice/job/add#jobpreparationtask). You can optionally specify a job preparation task to run on each compute node scheduled to run a job task. The node runs the job preparation task before the first time it runs a task for the job. If the job preparation task fails, the task doesn't run and the job doesn't complete.
-- [Job Constraints](/rest/api/batchservice/job/add#jobconstraints)
- - The `maxWallClockTime` property can optionally be specified to set the maximum amount of time a job can be active or running. If exceeded, the job will be terminated with the `terminateReason` property set in the [executionInfo](/rest/api/batchservice/job/get#jobexecutioninformation) for the job.
-- [Job Preparation Task](/rest/api/batchservice/job/add#jobpreparationtask)
- - If specified, a job preparation task is run the first time a task is run for a job on a node. The job preparation task can fail, which will lead to the task not being run and the job not completing.
-- [Job Release Task](/rest/api/batchservice/job/add#jobreleasetask)
- - A job release task can only be specified if a job preparation task is configured. When a job is being terminated, the job release task is run on the each of pool nodes where a job preparation task was run. A job release task can fail, but the job will still move to a `completed` state.
+- [JobReleaseTask](/rest/api/batchservice/job/add#jobreleasetask). You can optionally specify a job release task for jobs that have a job preparation task. When a job is being terminated, the job release task runs on each pool node that ran a job preparation task. If a job release task fails, the job still moves to a `completed` state.
+
+In the Azure portal, you can set these parameters in the **Job manager, preparation and release tasks** and **Advanced** sections of the Batch **Add job** screen.
### Job properties
-The following job properties should be checked for errors:
+Check the following job properties in the [JobExecutionInformation](/rest/api/batchservice/job/get#jobexecutioninformation) for errors:
+
+- The `terminateReason` property indicates `MaxWallClockTimeExpiry` if the job exceeded the `maxWallClockTime` specified in the job constraints and therefore the job terminated. This property can also be set to `taskFailed` if the job's `onTaskFailure` attribute is set to `performExitOptionsJobAction`, and a task fails with an exit condition that specifies a `jobAction` of `terminatejob`.
-- '[executionInfo](/rest/api/batchservice/job/get#jobexecutioninformation)':
- - The `terminateReason` property can have values to indicate that the `maxWallClockTime`, specified in the job constraints, was exceeded and therefore the job was terminated. It can also be set to indicate a task failed if the job `onTaskFailure` property was set appropriately.
- - The [schedulingError](/rest/api/batchservice/job/get#jobschedulingerror) property is set if there has been a scheduling error.
+- The [JobSchedulingError](/rest/api/batchservice/job/get#jobschedulingerror) property is set if there has been a scheduling error.
### Job preparation tasks
-If a [job preparation task](batch-job-prep-release.md#job-preparation-task) is specified for a job, then an instance of that task will be run the first time a task for the job is run on a node. The job preparation task configured on the job can be thought of as a task template, with multiple job preparation task instances being run, up to the number of nodes in a pool.
+An instance of a [job preparation task](batch-job-prep-release.md#job-preparation-task) runs on each compute node the first time the node runs a task for the job. You can think of the job preparation task as a task template, with multiple instances being run, up to the number of nodes in a pool. Check the job preparation task instances to determine if there were errors.
+
+You can use the [Job - List Preparation and Release Task Status](/rest/api/batchservice/job/listpreparationandreleasetaskstatus) API to list the execution status of all instances of job preparation and release tasks for a specified job. As with other tasks, [JobPreparationTaskExecutionInformation](/rest/api/batchservice/job/listpreparationandreleasetaskstatus#jobpreparationtaskexecutioninformation) is available with properties such as `failureInfo`, `exitCode`, and `result`.
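The same status is available from the Azure CLI, assuming your CLI version includes the `az batch job prep-release-status list` command; the job ID below is a placeholder.

```azurecli
# List job preparation and release task status for every node that ran them.
az batch job prep-release-status list --job-id my-job --output table
```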
-The job preparation task instances should be checked to determine if there were errors:
+When a job preparation task runs, the task that triggered the job preparation task moves to a [taskState](/rest/api/batchservice/task/get#taskstate) of `preparing`. If the job preparation task fails, the triggering task reverts to the `active` state and doesn't run.
-- When a job preparation task is run, then the task that triggered the job preparation task will move to a [state](/rest/api/batchservice/task/get#taskstate) of `preparing`; if the job preparation task then fails, the triggering task will revert to the `active` state and will not be run.-- All the instances of the job preparation task that have been run can be obtained from the job using the [List Preparation and Release Task Status](/rest/api/batchservice/job/listpreparationandreleasetaskstatus) API. As with any task, there is [execution information](/rest/api/batchservice/job/listpreparationandreleasetaskstatus#jobpreparationandreleasetaskexecutioninformation) available with properties such as `failureInfo`, `exitCode`, and `result`.-- If job preparation tasks fail, then the triggering job tasks will not be run, the job will not complete and will be stuck. The pool may go unutilized if there are no other jobs with tasks that can be scheduled.
+If a job preparation task fails, the triggering job task doesn't run. The job doesn't complete and is stuck. If there are no other jobs with tasks that can be scheduled, the pool might not be used.
### Job release tasks
-If a [job release task](batch-job-prep-release.md#job-release-task) is specified for a job, then when a job is being terminated, an instance of the job release task is run on each pool node where a job preparation task was run. The job release task instances should be checked to determine if there were errors:
+When a job is being terminated, an instance of a [job release task](batch-job-prep-release.md#job-release-task) runs on each node that ran a job preparation task. Check the job release task instances to determine if there were errors.
+
+You can use the [Job - List Preparation and Release Task Status](/rest/api/batchservice/job/listpreparationandreleasetaskstatus) API to list the execution status of all instances of job preparation and release tasks for a specified job. As with other tasks, [JobReleaseTaskExecutionInformation](/rest/api/batchservice/job/listpreparationandreleasetaskstatus#jobreleasetaskexecutioninformation) is available with properties such as `failureInfo`, `exitCode`, and `result`.
+
+If one or more job release tasks fail, the job is still terminated and moves to a `completed` state.
+
+## Task failures
+
+Job tasks can fail for the following reasons:
+
+- The task command line fails and returns with a nonzero exit code.
+- One or more `resourceFiles` specified for a task don't download.
+- One or more `outputFiles` specified for a task don't upload.
+- The elapsed time for the task exceeds the `maxWallClockTime` property specified in the [TaskConstraints](/rest/api/batchservice/task/add#taskconstraints).
+
+In all cases, check the following properties for errors and information about the errors:
+
+- The [TaskExecutionInformation](/rest/api/batchservice/task/get#taskexecutioninformation) property has multiple properties that provide information about an error. The [taskExecutionResult](/rest/api/batchservice/task/get#taskexecutionresult) indicates if the task failed for any reason, and `exitCode` and `failureInfo` provide more information about the failure.
+
+- The task always moves to the `completed` [TaskState](/rest/api/batchservice/task/get#taskstate), whether it succeeded or failed.
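To check these properties from the command line, a query like the following might work. It's a sketch: `my-job` and `my-task` are placeholders, and the property names assume the CLI's camel-cased rendering of the REST model.

```azurecli
# Show the execution result, exit code, and failure information for a task.
az batch task show \
    --job-id my-job \
    --task-id my-task \
    --query "executionInfo.{result:result, exitCode:exitCode, failureInfo:failureInfo}"
```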
+
+Consider the impact of task failures on the job and on any task dependencies. You can specify [ExitConditions](/rest/api/batchservice/task/add#exitconditions) to configure actions for dependencies and for the job.
-- All the instances of the job release task being run can be obtained from the job using the API [List Preparation and Release Task Status](/rest/api/batchservice/job/listpreparationandreleasetaskstatus). As with any task, there is [execution information](/rest/api/batchservice/job/listpreparationandreleasetaskstatus#jobpreparationandreleasetaskexecutioninformation) available with properties such as `failureInfo`, `exitCode`, and `result`.-- If one or more job release tasks fail, then the job will still be terminated and move to a `completed` state.
+- [DependencyAction](/rest/api/batchservice/task/add#dependencyaction) controls whether to block or run tasks that depend on the failed task.
+- [JobAction](/rest/api/batchservice/task/add#jobaction) controls whether the failed task causes the job to be disabled, terminated, or unchanged.
-## Tasks
+### Task command lines
-Job tasks can fail for multiple reasons:
+Task command lines don't run under a shell on compute nodes, so they can't natively use shell features such as environment variable expansion. To take advantage of such features, you must invoke the shell in the command line. For more information, see [Command-line expansion of environment variables](batch-compute-node-environment-variables.md#command-line-expansion-of-environment-variables).
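For example, on a Linux node you can wrap the command in a shell when you add the task so that environment variables expand. This is a sketch, not from the article: the job ID, task ID, and command are placeholders.

```azurecli
# Wrap the task command in a shell so $AZ_BATCH_TASK_WORKING_DIR expands on the node.
az batch task create \
    --job-id my-job \
    --task-id my-task \
    --command-line "/bin/bash -c 'echo \$AZ_BATCH_TASK_WORKING_DIR'"
```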
-- The task command line fails, returning with a non-zero exit code.-- There are `resourceFiles` specified for a task, but there was a failure that meant one or more files didn't download.-- There are `outputFiles` specified for a task, but there was a failure that meant one or more files didn't upload.-- The elapsed time for the task, specified by the `maxWallClockTime` property in the task [constraints](/rest/api/batchservice/task/add#taskconstraints), was exceeded.
+Task command line output writes to *stderr.txt* and *stdout.txt* files. Your application might also write to application-specific log files. Make sure to implement comprehensive error checking for your application to promptly detect and diagnose issues.
-In all cases the following properties must be checked for errors and information about the errors:
+### Task logs
-- The tasks [executionInfo](/rest/api/batchservice/task/get#taskexecutioninformation) property contains multiple properties that provide information about an error. [result](/rest/api/batchservice/task/get#taskexecutionresult) indicates if the task failed for any reason, with `exitCode` and `failureInfo` providing more information about the failure.-- The task will always move to the `completed` [state](/rest/api/batchservice/task/get#taskstate), independent of whether it succeeded or failed.
+If the pool node that ran a task still exists, you can get and view the task log files. Several APIs allow listing and getting task files, such as [File - Get From Task](/rest/api/batchservice/file/getfromtask). You can also list and view log files for a task or node by using the [Azure portal](https://portal.azure.com).
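For example, the Azure CLI can list and download a task's files; `my-job` and `my-task` are placeholders, and the exact flags can vary by CLI version.

```azurecli
# List the files produced by a task, then download its stderr log.
az batch task file list --job-id my-job --task-id my-task --output table

az batch task file download \
    --job-id my-job \
    --task-id my-task \
    --file-path stderr.txt \
    --destination ./stderr.txt
```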
-The impact of task failures on the job and any task dependencies must be considered. The [exitConditions](/rest/api/batchservice/task/add#exitconditions) property can be specified for a task to configure an action for dependencies and for the job.
+1. At the top of the **Overview** page for a node, select **Upload batch logs**.
-- For dependencies, [DependencyAction](/rest/api/batchservice/task/add#dependencyaction) controls whether the tasks dependent on the failed task are blocked or are run.-- For the job, [JobAction](/rest/api/batchservice/task/add#jobaction) controls whether the failed task leads to the job being disabled, terminated, or left unchanged.
+ ![Screenshot of a node overview page with Upload batch logs highlighted.](media/batch-job-task-error-checking/node-page.png)
-### Task command line failures
+1. On the **Upload Batch logs** page, select **Pick storage container**, select an Azure Storage container to upload to, and then select **Start upload**.
-When the task command line is run, output is written to `stderr.txt` and `stdout.txt`. In addition, the application may write to application-specific log files.
+ ![Screenshot of the Upload batch logs page.](media/batch-job-task-error-checking/upload-batch-logs.png)
-If the pool node on which a task has run still exists, then the log files can be obtained and viewed. For example, the Azure portal lists and can view log files for a task or a pool node. Multiple APIs also allow task files to be listed and obtained, such as [Get From Task](/rest/api/batchservice/file/getfromtask).
+1. You can view, open, or download the logs from the storage container page.
-Since pools and pool nodes are frequently ephemeral, with nodes being continuously added and deleted, we recommend saving log files. [Task output files](./batch-task-output-files.md) are a convenient way to save log files to Azure Storage.
+ ![Screenshot of task logs in a storage container.](media/batch-job-task-error-checking/task-logs.png)
-The command lines executed by tasks on compute nodes do not run under a shell, so they can't natively take advantage of shell features such as environment variable expansion. To take advantage of such features, you must [invoke the shell in the command line](batch-compute-node-environment-variables.md#command-line-expansion-of-environment-variables).
+### Output files
-### Output file failures
+Because Batch pools and pool nodes are often ephemeral, with nodes being continuously added and deleted, it's best to save the log files when the job runs. Task output files are a convenient way to save log files to Azure Storage. For more information, see [Persist task data to Azure Storage with the Batch service API](batch-task-output-files.md).
-On every file upload, Batch writes two log files to the compute node, `fileuploadout.txt` and `fileuploaderr.txt`. You can examine these log files to learn more about a specific failure. In cases where the file upload was never attempted, for example because the task itself couldn't run, then these log files will not exist.
+On every file upload, Batch writes two log files to the compute node, *fileuploadout.txt* and *fileuploaderr.txt*. You can examine these log files to learn more about a specific failure. If the file upload wasn't attempted, for example because the task itself couldn't run, these log files don't exist.
## Next steps -- Check that your application implements comprehensive error checking; it can be critical to promptly detect and diagnose issues.-- Learn more about [jobs and tasks](jobs-and-tasks.md) and [job preparation and release tasks](batch-job-prep-release.md).
+- Learn more about [Batch jobs and tasks](jobs-and-tasks.md) and [job preparation and release tasks](batch-job-prep-release.md).
+- Learn about [Batch pool and node errors](batch-pool-node-error-checking.md).
batch Batch Pool Node Error Checking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/batch-pool-node-error-checking.md
Title: Check for pool and node errors
-description: This article covers the background operations that can occur, along with errors to check for and how to avoid them when creating pools and nodes.
Previously updated : 03/15/2021
+ Title: Pool and node errors
+description: Learn about background operations, errors to check for, and how to avoid errors when you create Azure Batch pools and nodes.
Last updated : 04/11/2023
-# Check for pool and node errors
+# Azure Batch pool and node errors
-When you're creating and managing Azure Batch pools, some operations happen immediately. Detecting failures for these operations is usually straightforward, because they are returned immediately by the API, CLI, or UI. However, some operations are asynchronous and run in the background, taking several minutes to complete.
+Some Azure Batch pool creation and management operations happen immediately. Detecting failures for these operations is straightforward, because errors usually return immediately from the API, command line, or user interface. However, some operations are asynchronous, run in the background, and take several minutes to complete. This article describes ways to detect and avoid failures that can occur in the background operations for pools and nodes.
-Check that you've set your applications to implement comprehensive error checking, especially for asynchronous operations. This can help you promptly identify and diagnose issues.
-
-This article describes ways to detect and avoid failures in the background operations that can occur for pools and pool nodes.
+Make sure to set your applications to implement comprehensive error checking, especially for asynchronous operations. Comprehensive error checking can help you promptly identify and diagnose issues.
## Pool errors
+Pool errors might be related to resize timeout or failure, automatic scaling failure, or pool deletion failure.
+ ### Resize timeout or failure
-When creating a new pool or resizing an existing pool, you specify the target number of nodes. The create or resize operation completes immediately, but the actual allocation of new nodes or the removal of existing nodes might take several minutes. You can specify the resize timeout in the [create](/rest/api/batchservice/pool/add) or [resize](/rest/api/batchservice/pool/resize) API. If Batch can't obtain the target number of nodes during the resize timeout period, the pool goes into a steady state and reports resize errors.
+When you create a new pool or resize an existing pool, you specify the target number of nodes. The create or resize operation completes immediately, but the actual allocation of new nodes or removal of existing nodes might take several minutes. You can specify the resize timeout in the [Pool - Add](/rest/api/batchservice/pool/add) or [Pool - Resize](/rest/api/batchservice/pool/resize) APIs. If Batch can't allocate the target number of nodes during the resize timeout period, the pool goes into a steady state, and reports resize errors.
-The [ResizeError](/rest/api/batchservice/pool/get#resizeerror) property for the most recent evaluation lists the errors that occurred.
+The [resizeError](/rest/api/batchservice/pool/get#resizeerror) property lists the errors that occurred for the most recent evaluation.
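For example, you might inspect the allocation state and any resize errors from the Azure CLI. This is a sketch: the pool ID is a placeholder, and the property names assume the CLI's camel-cased rendering of the REST model.

```azurecli
# Show the allocation state and any resize errors for a pool.
az batch pool show \
    --pool-id mypool \
    --query "{allocationState:allocationState, resizeErrors:resizeErrors}"
```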
Common causes for resize errors include: -- Resize timeout is too short
- - Under most circumstances, the default timeout of 15 minutes is long enough for pool nodes to be allocated or removed.
- - If you're allocating a large number of nodes, we recommend setting the resize timeout to 30 minutes. For example, when you're resizing to more than 1,000 nodes from an Azure Marketplace image, or to more than 300 nodes from a custom VM image.
-- Insufficient core quota
- - A Batch account is limited in the number of cores that it can allocate across all pools. Batch stops allocating nodes once that quota has been reached. You [can increase](./batch-quota-limit.md) the core quota so that Batch can allocate more nodes.
-- Insufficient subnet IPs when a [pool is in a virtual network](./batch-virtual-network.md)
- - A virtual network subnet must have enough unassigned IP addresses to allocate to every requested pool node. Otherwise, the nodes can't be created.
-- Insufficient resources when a [pool is in a virtual network](./batch-virtual-network.md)
- - You might create resources such as load-balancers, public IPs, and network security groups in the same subscription as the Batch account. Check that the subscription quotas are sufficient for these resources.
-- Large pools with custom VM images
- - Large pools that use custom VM images can take longer to allocate and resize timeouts can occur. See [Create a pool with the Azure Compute Gallery](batch-sig-images.md) for recommendations on limits and configuration.
+- **Resize timeout too short.** Usually, the default timeout of 15 minutes is long enough to allocate or remove pool nodes. If you're allocating a large number of nodes, such as more than 1,000 nodes from an Azure Marketplace image, or more than 300 nodes from a custom virtual machine (VM) image, you can set the resize timeout to 30 minutes.
+
+- **Insufficient core quota.** A Batch account is limited in the number of cores it can allocate across all pools, and stops allocating nodes once it reaches that quota. You can increase the core quota so Batch can allocate more nodes. For more information, see [Batch service quotas and limits](batch-quota-limit.md).
+
+- **Insufficient subnet IPs when a pool is in a virtual network**. A virtual network subnet must have enough IP addresses to allocate to every requested pool node. Otherwise, the nodes can't be created. For more information, see [Create an Azure Batch pool in a virtual network](batch-virtual-network.md).
+
+- **Insufficient resources when a pool is in a virtual network.** When you create a pool in a virtual network, you might create resources such as load balancers, public IPs, and network security groups (NSGs) in the same subscription as the Batch account. Make sure the subscription quotas are sufficient for these resources.
+
+- **Large pools with custom VM images.** Large pools that use custom VM images can take longer to allocate, and resize timeouts can occur. For recommendations on limits and configuration, see [Create a pool with the Azure Compute Gallery](batch-sig-images.md).
### Automatic scaling failures
-You can set Azure Batch to automatically scale the number of nodes in a pool. You define the parameters for the [automatic scaling formula for a pool](./batch-automatic-scaling.md). The Batch service will then use the formula to periodically evaluate the number of nodes in the pool and set a new target number.
+You can set Azure Batch to automatically scale the number of nodes in a pool, and you define the parameters for the automatic scaling formula for the pool. The Batch service then uses the formula to periodically evaluate the number of nodes in the pool and set new target numbers. For more information, see [Create an automatic formula for scaling compute nodes in a Batch pool](batch-automatic-scaling.md).
-The following types of issues can occur when using automatic scaling:
+The following issues can occur when you use automatic scaling:
- The automatic scaling evaluation fails. - The resulting resize operation fails and times out.-- A problem with the automatic scaling formula leads to incorrect node target values. The resize either works or times out.
+- A problem with the automatic scaling formula leads to incorrect node target values. The resize might either work or time out.
To get information about the last automatic scaling evaluation, use the [autoScaleRun](/rest/api/batchservice/pool/get#autoscalerun) property. This property reports the evaluation time, the values and result, and any performance errors.
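As a sketch, you can read the last evaluation, or test a formula without applying it, from the Azure CLI. The pool ID and formula are placeholders, and the evaluate command assumes the pool already has automatic scaling enabled.

```azurecli
# Show the result of the most recent automatic scaling evaluation.
az batch pool show --pool-id mypool --query "autoScaleRun"

# Evaluate a formula against the pool without applying it.
az batch pool autoscale evaluate \
    --pool-id mypool \
    --auto-scale-formula '$TargetDedicatedNodes = 2;'
```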
The [pool resize complete event](./batch-pool-resize-complete-event.md) captures
### Pool deletion failures
-When you delete a pool that contains nodes, first Batch deletes the nodes. This can take several minutes to complete. After that, Batch deletes the pool object itself.
+When you delete a pool that contains nodes, Batch first deletes the nodes, which can take several minutes to complete. Batch then deletes the pool object itself.
-Batch sets the [pool state](/rest/api/batchservice/pool/get#poolstate) to **deleting** during the deletion process. The calling application can detect if the pool deletion is taking too long by using the **state** and **stateTransitionTime** properties.
+Batch sets the [poolState](/rest/api/batchservice/pool/get#poolstate) to `deleting` during the deletion process. The calling application can detect if the pool deletion is taking too long by using the `state` and `stateTransitionTime` properties.
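A quick way to watch this from the command line (a sketch; the pool ID is a placeholder):

```azurecli
# Check whether a pool is still deleting and when it entered that state.
az batch pool show \
    --pool-id mypool \
    --query "{state:state, stateTransitionTime:stateTransitionTime}"
```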
-If the pool is taking longer than expected, Batch will retry periodically until the pool can be successfully deleted. In some cases, the delay is due to an Azure service outage or other temporary issues. Other factors that can prevent a pool from successfully being deleted may require you to take actions to correct the issue. These factors include the following:
+If the pool deletion is taking longer than expected, Batch retries periodically until the pool is successfully deleted. In some cases, the delay is due to an Azure service outage or other temporary issues. Other factors that prevent successful pool deletion might require you to take action to correct the issue. These factors can include the following issues:
-- Resource locks have been placed on Batch-created resources, or on network resources used by Batch.-- Resources that you created have a dependency on a Batch-created resource. For instance, if you [create a pool in a virtual network](batch-virtual-network.md), Batch creates a network security group (NSG), a public IP address, and a load balancer. If you use these resources outside of the pool, the pool can't be deleted until that dependency is removed.-- The Microsoft.Batch resource provider was unregistered from the subscription that contains your pool.-- "Microsoft Azure Batch" no longer has the [Contributor or Owner role](batch-account-create-portal.md#allow-azure-batch-to-access-the-subscription-one-time-operation) to the subscription that contains your pool (for user subscription mode Batch accounts).
+- Resource locks might be placed on Batch-created resources, or on network resources that Batch uses.
+
+- Resources that you created might depend on a Batch-created resource. For instance, if you [create a pool in a virtual network](batch-virtual-network.md), Batch creates an NSG, a public IP address, and a load balancer. If you're using these resources outside the pool, you can't delete the pool.
+
+- The `Microsoft.Batch` resource provider might be unregistered from the subscription that contains your pool.
+
+- For user subscription mode Batch accounts, `Microsoft Azure Batch` might no longer have the **Contributor** or **Owner** role to the subscription that contains your pool. For more information, see [Allow Batch to access the subscription](batch-account-create-portal.md#allow-azure-batch-to-access-the-subscription-one-time-operation).
## Node errors
-Even when Batch successfully allocates nodes in a pool, various issues can cause some of the nodes to be unhealthy and unable to run tasks. These nodes still incur charges, so it's important to detect problems to avoid paying for nodes that can't be used. In addition to common node errors, knowing the current [job state](/rest/api/batchservice/job/get#jobstate) is useful for troubleshooting.
+Even when Batch successfully allocates nodes in a pool, various issues can cause some nodes to be unhealthy and unable to run tasks. These nodes still incur charges, so it's important to detect problems to avoid paying for nodes you can't use. Knowing about common node errors and knowing the current [jobState](/rest/api/batchservice/job/get#jobstate) is useful for troubleshooting.
### Start task failures
-You might want to specify an optional [start task](/rest/api/batchservice/pool/add#starttask) for a pool. As with any task, you can use a command line and resource files to download from storage. The start task is run for each node after it's been started. The **waitForSuccess** property specifies whether Batch waits until the start task completes successfully before it schedules any tasks to a node.
+You can specify an optional [startTask](/rest/api/batchservice/pool/add#starttask) for a pool. As with any task, the start task uses a command line and can download resource files from storage. The start task runs for each node when the node starts. The `waitForSuccess` property specifies whether Batch waits until the start task completes successfully before it schedules any tasks to a node. If you configure the node to wait for successful start task completion, but the start task fails, the node isn't usable but still incurs charges.
-What if you've configured the node to wait for successful start task completion, but the start task fails? In that case, the node will not be usable, but will still incur charges.
+You can detect start task failures by using the [taskExecutionResult](/rest/api/batchservice/computenode/get#taskexecutionresult) and [taskFailureInformation](/rest/api/batchservice/computenode/get#taskfailureinformation) properties of the top-level [startTaskInformation](/rest/api/batchservice/computenode/get#starttaskinformation) node property.
-You can detect start task failures by using the [result](/rest/api/batchservice/computenode/get#taskexecutionresult) and [failureInfo](/rest/api/batchservice/computenode/get#taskfailureinformation) properties of the top-level [startTaskInfo](/rest/api/batchservice/computenode/get#starttaskinformation) node property.
+A failed start task also causes Batch to set the [computeNodeState](/rest/api/batchservice/computenode/get#computenodestate) to `starttaskfailed`, if `waitForSuccess` was set to `true`.
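The following sketch lists nodes whose start task failed. It assumes the CLI reports the node state as `starttaskfailed` and surfaces the start task details under `startTaskInfo`; adjust the names if your output differs.

```azurecli
# List nodes whose start task failed, with the recorded failure information.
az batch node list \
    --pool-id mypool \
    --query "[?state=='starttaskfailed'].{node:id, result:startTaskInfo.result, failure:startTaskInfo.failureInfo}"
```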
-A failed start task also causes Batch to set the node [state](/rest/api/batchservice/computenode/get#computenodestate) to **starttaskfailed** if **waitForSuccess** was set to **true**.
+As with any task, there can be many causes for a start task failure. To troubleshoot, check the *stdout*, *stderr*, and any other task-specific log files.
-As with any task, there can be many causes for a start task failure. To troubleshoot, check the stdout, stderr, and any further task-specific log files.
-
-Start tasks must be re-entrant, as it is possible the start task is run multiple times on the same node; the start task is run when a node is reimaged or rebooted. In rare cases, a start task will be run after an event caused a node reboot, where one of the operating system or ephemeral disks was reimaged while the other wasn't. Since Batch start tasks (like all Batch tasks) run from the ephemeral disk, this is not normally a problem, but in some instances where the start task is installing an application to the operating system disk and keeping other data on the ephemeral disk, this can cause problems because things are out of sync. Protect your application accordingly if you are using both disks.
+Start tasks must be re-entrant, because the start task can run multiple times on the same node, for example when the node is reimaged or rebooted. In rare cases, when a start task runs after an event causes a node reboot, one operating system (OS) or ephemeral disk reimages while the other doesn't. Since Batch start tasks and all Batch tasks run from the ephemeral disk, this situation isn't usually a problem. However, in cases where the start task installs an application to the OS disk and keeps other data on the ephemeral disk, there can be sync problems. Protect your application accordingly if you use both disks.
### Application package download failure
-You can specify one or more application packages for a pool. Batch downloads the specified package files to each node and uncompresses the files after the node has started, but before tasks are scheduled. It's common to use a start task command line in conjunction with application packages. For example, to copy files to a different location or to run setup.
+You can specify one or more application packages for a pool. Batch downloads the specified package files to each node and uncompresses the files after the node starts, but before it schedules tasks. It's common to use a start task command with application packages, for example to copy files to a different location or to run setup.
-The node [errors](/rest/api/batchservice/computenode/get#computenodeerror) property reports a failure to download and un-compress an application package; the node state is set to **unusable**.
+If an application package fails to download and uncompress, the [computeNodeError](/rest/api/batchservice/computenode/get#computenodeerror) property reports the failure, and sets the node state to `unusable`.
### Container download failure
-You can specify one or more container references on a pool. Batch downloads the specified containers to each node. The node [errors](/rest/api/batchservice/computenode/get#computenodeerror) property reports a failure to download a container and sets the node state to **unusable**.
+You can specify one or more container references on a pool. Batch downloads the specified containers to each node. If the container fails to download, the [computeNodeError](/rest/api/batchservice/computenode/get#computenodeerror) property reports the failure, and sets the node state to `unusable`.
### Node OS updates
-For Windows pools, `enableAutomaticUpdates` is set to `true` by default. Allowing automatic updates is recommended, but they can can interrupt task progress, especially if the tasks are long-running. You can set this value to `false` if you need to ensure that an OS update doesn't happen unexpectedly.
+For Windows pools, `enableAutomaticUpdates` is set to `true` by default. Although allowing automatic updates is recommended, updates can interrupt task progress, especially if the tasks are long-running. You can set this value to `false` if you need to ensure that an OS update doesn't happen unexpectedly.
### Node in unusable state
-Azure Batch might set the [node state](/rest/api/batchservice/computenode/get#computenodestate) to **unusable** for many reasons. With the node state set to **unusable**, tasks can't be scheduled to the node, but it still incurs charges.
-
-Nodes in an **unusable** state, but without [errors](/rest/api/batchservice/computenode/get#computenodeerror) means that Batch is unable to communicate with the VM. In this case, Batch always tries to recover the VM. Batch will not automatically attempt to recover VMs that failed to install application packages or containers even though their state is **unusable**.
+Batch might set the [computeNodeState](/rest/api/batchservice/computenode/get#computenodestate) to `unusable` for many reasons. You can't schedule tasks to an `unusable` node, but the node still incurs charges.
-If Batch can determine the cause, the node [errors](/rest/api/batchservice/computenode/get#computenodeerror) property reports it.
+If Batch can determine the cause, the [computeNodeError](/rest/api/batchservice/computenode/get#computenodeerror) property reports it. If a node is in an `unusable` state, but has no [computeNodeError](/rest/api/batchservice/computenode/get#computenodeerror), it means Batch is unable to communicate with the VM. In this case, Batch always tries to recover the VM. However, Batch doesn't automatically attempt to recover VMs that failed to install application packages or containers, even if their state is `unusable`.
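A sketch for finding unusable nodes and their reported errors from the Azure CLI; the pool ID is a placeholder, and the `errors` property name assumes the CLI's camel-cased rendering of the REST model.

```azurecli
# List unusable nodes in a pool along with any reported node errors.
az batch node list \
    --pool-id mypool \
    --query "[?state=='unusable'].{node:id, errors:errors}"
```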
-Additional examples of causes for **unusable** nodes include:
+Other reasons for `unusable` nodes might include the following causes:
-- A custom VM image is invalid. For example, an image that's not properly prepared.
+- A custom VM image is invalid. For example, the image isn't properly prepared.
- A VM is moved because of an infrastructure failure or a low-level upgrade. Batch recovers the node.-- A VM image has been deployed on hardware that doesn't support it. For example, trying to run a CentOS HPC image on a [Standard_D1_v2](../virtual-machines/dv2-dsv2-series.md) VM.
+- A VM image has been deployed on hardware that doesn't support it. For example, a CentOS HPC image is deployed on a [Standard_D1_v2](/azure/virtual-machines/dv2-dsv2-series) VM.
- The VMs are in an [Azure virtual network](batch-virtual-network.md), and traffic has been blocked to key ports.-- The VMs are in a virtual network, but outbound traffic to Azure storage is blocked.-- The VMs are in a virtual network with a customer DNS configuration and the DNS server cannot resolve Azure storage.
+- The VMs are in a virtual network, but outbound traffic to Azure Storage is blocked.
+- The VMs are in a virtual network with a custom DNS configuration, and the DNS server can't resolve Azure storage.
### Node agent log files
-The Batch agent process that runs on each pool node can provide log files that might be helpful if you need to contact support about a pool node issue. Log files for a node can be uploaded via the Azure portal, Batch Explorer, or an [API](/rest/api/batchservice/computenode/uploadbatchservicelogs). It's useful to upload and save the log files. Afterward, you can delete the node or pool to save the cost of the running nodes.
+The Batch agent process that runs on each pool node provides log files that might help if you need to contact support about a pool node issue. You can upload log files for a node via the Azure portal, Batch Explorer, or the [Compute Node - Upload Batch Service Logs](/rest/api/batchservice/computenode/uploadbatchservicelogs) API. After you upload and save the log files, you can delete the node or pool to save the cost of running the nodes.
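If your Azure CLI version includes the `az batch node service-logs upload` command, a sketch like the following might upload the agent logs to a storage container; the pool ID, node ID, container SAS URL, and start time are placeholders.

```azurecli
# Upload Batch agent log files from a node to a storage container (SAS URL).
az batch node service-logs upload \
    --pool-id mypool \
    --node-id tvm-1234567890_1-20230411t000000z \
    --container-url "https://mystorage.blob.core.windows.net/nodelogs?<sas-token>" \
    --start-time 2023-04-11T00:00:00Z
```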
### Node disk full
-The temporary drive for a pool node VM is used by Batch for job files, task files, and shared files, such as the following:
+Batch uses the temporary drive on a pool node VM to store files such as the following job files, task files, and shared files:
-- Application packages files
+- Application package files
- Task resource files - Application-specific files downloaded to one of the Batch folders-- Stdout and stderr files for each task application execution
+- *Stdout* and *stderr* files for each task application execution
- Application-specific output files
-Some of these files are only written once when pool nodes are created, such as pool application packages or pool start task resource files. Even if only written once when the node is created, if these files are too large they could fill the temporary drive.
+Files like pool application packages or pool start task resource files are written only once, when Batch creates the pool node. Even though they're written only once, if these files are too large, they could fill the temporary drive.
-Other files are written out for each task that is run on a node, such as stdout and stderr. If a large number of tasks run on the same node and/or the task files are too large, they could fill the temporary drive.
+Other files, such as *stdout* and *stderr*, are written for each task that a node runs. If a large number of tasks run on the same node, or the task files are too large, they could fill the temporary drive.
-Additionally, after the node starts, a small amount of space is needed on the operating system disk to create users.
+The node also needs a small amount of space on the OS disk to create users after it starts.
-The size of the temporary drive depends on the VM size. One consideration when picking a VM size is to ensure the temporary drive has enough space for the planned workload.
+The size of the temporary drive depends on the VM size. One consideration when picking a VM size is to ensure that the temporary drive has enough space for the planned workload.
-- In the Azure portal when adding a pool, the full list of VM sizes can be displayed and there is a 'Resource Disk Size' column.-- The articles describing all VM sizes have tables with a 'Temp Storage' column; for example [Compute Optimized VM sizes](../virtual-machines/sizes-compute.md)
+When you add a pool in the Azure portal, you can display the full list of VM sizes, including a **Resource disk size** column. The articles that describe VM sizes have tables with a **Temp Storage** column. For more information, see [Compute optimized virtual machine sizes](/azure/virtual-machines/sizes-compute). For an example size table, see [Fsv2-series](/azure/virtual-machines/fsv2-series).
-For files written out by each task, a retention time can be specified for each task that determines how long the task files are kept before being automatically cleaned up. The retention time can be reduced to lower the storage requirements.
+You can specify a retention time for files written by each task. The retention time determines how long to keep the task files before automatically cleaning them up. You can reduce the retention time to lower storage requirements.
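For example, here's a Batch .NET sketch that sets an arbitrary two-hour retention time through the task constraints; the task ID and command line are placeholders:

```csharp
using System;
using Microsoft.Azure.Batch;

// Keep this task's files on the node for two hours after it completes,
// instead of the service default, to reduce temporary drive usage.
CloudTask task = new CloudTask("task-with-short-retention", "cmd /c echo hello");

// TaskConstraints(maxWallClockTime, retentionTime, maxTaskRetryCount)
task.Constraints = new TaskConstraints(null, TimeSpan.FromHours(2), 0);
```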
-If the temporary or operating system disk runs out of space (or is very close to running out of space), the node will move to [Unusable](/rest/api/batchservice/computenode/get#computenodestate) state and a node error will be reported saying that the disk is full.
+If the temporary or OS disk runs out of space, or is close to running out of space, the node moves to the `unusable` [computeNodeState](/rest/api/batchservice/computenode/get#computenodestate), and the node error says that the disk is full.
-If you're not sure what is taking up space on the node, try remoting to the node and investigating manually where the space has gone. You can also make use of the [Batch List Files API](/rest/api/batchservice/file/listfromcomputenode) to examine files in Batch managed folders (for example, task outputs). Note that this API only lists files in the Batch managed directories. If your tasks created files elsewhere, you won't see them.
+If you're not sure what's taking up space on the node, try remote connecting to the node and investigating manually. You can also use the [File - List From Compute Node](/rest/api/batchservice/file/listfromcomputenode) API to examine files, for example task outputs, in Batch managed folders. This API only lists files in the Batch managed directories. If your tasks created files elsewhere, this API doesn't show them.
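For example, here's a Batch .NET fragment, assuming an existing `node` object, that recursively lists files in the Batch managed directories:

```csharp
using System;
using Microsoft.Azure.Batch;

// 'node' is an existing ComputeNode. Recursively list files in the Batch managed
// directories and print their sizes; files your tasks wrote elsewhere aren't shown.
foreach (NodeFile file in node.ListNodeFiles(recursive: true))
{
    if (file.IsDirectory != true)
    {
        Console.WriteLine($"{file.Name} ({file.Properties?.ContentLength} bytes)");
    }
}
```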
-Make sure that any data you need has been retrieved from the node or uploaded to a durable store, then delete data as needed to free up space.
+After you make sure to retrieve any data you need from the node or upload it to a durable store, you can delete data as needed to free up space.
-You can delete old completed jobs or old completed tasks whose task data is still on the nodes. Look in the [RecentTasks collection](/rest/api/batchservice/computenode/get#taskinformation) on the node, or at the [files on the node](/rest/api/batchservice/file/listfromcomputenode). Deleting a job will delete all the tasks in the job; deleting the tasks in the job will trigger data in the task directories on the node to be deleted, thus freeing up space. Once you've freed up enough space, reboot the node and it should move out of "Unusable" state and into "Idle" again.
+You can delete old completed jobs or tasks whose task data is still on the nodes. Look in the `recentTasks` collection in the [taskInformation](/rest/api/batchservice/computenode/get#taskinformation) on the node, or use the [File - List From Compute Node](/rest/api/batchservice/file/listfromcomputenode) API. Deleting a job deletes all the tasks in the job. Deleting the tasks in the job triggers deletion of data in the task directories on the nodes, and frees up space. Once you've freed up enough space, reboot the node. The node should move out of `unusable` state and into `idle` again.
-To recover an unusable node in [VirtualMachineConfiguration](/rest/api/batchservice/pool/add#virtualmachineconfiguration) pools, you can remove a node from the pool using the [remove nodes API](/rest/api/batchservice/pool/removenodes). Then, you can grow the pool again to replace the bad node with a fresh one. For [CloudServiceConfiguration](/rest/api/batchservice/pool/add#cloudserviceconfiguration) pools, you can re-image the node via the [Batch re-image API](/rest/api/batchservice/computenode/reimage). This will clean the entire disk. Re-image is not currently supported for [VirtualMachineConfiguration](/rest/api/batchservice/pool/add#virtualmachineconfiguration) pools.
+To recover an unusable node in [VirtualMachineConfiguration](/rest/api/batchservice/pool/add#virtualmachineconfiguration) pools, you can remove the node from the pool by using the [Pool - Remove Nodes](/rest/api/batchservice/pool/removenodes) API. Then you can grow the pool again to replace the bad node with a fresh one. For [CloudServiceConfiguration](/rest/api/batchservice/pool/add#cloudserviceconfiguration) pools, you can reimage the node by using the [Compute Node - Reimage](/rest/api/batchservice/computenode/reimage) API to clean the entire disk. Reimage isn't currently supported for [VirtualMachineConfiguration](/rest/api/batchservice/pool/add#virtualmachineconfiguration) pools.
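For example, here's a Batch .NET sketch, with a hypothetical pool and node ID, that removes a bad node and then grows the pool back to its previous size:

```csharp
using Microsoft.Azure.Batch;

// 'batchClient' is an open BatchClient; the pool ID, node ID, and target sizes are placeholders.
CloudPool pool = batchClient.PoolOperations.GetPool("mypool");
ComputeNode badNode = pool.GetComputeNode("<node-id>");

// Remove the unusable node from the pool.
pool.RemoveFromPool(badNode);

// After the remove operation completes, grow the pool again to replace the node.
pool.Resize(targetDedicatedComputeNodes: 5, targetLowPriorityComputeNodes: 0);
```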
## Next steps
batch Batch Spot Vms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/batch-spot-vms.md
Title: Run workloads on cost-effective Spot VMs
+ Title: Run Batch workloads on cost-effective Spot VMs
description: Learn how to provision Spot VMs to reduce the cost of Azure Batch workloads. Previously updated : 04/06/2023 Last updated : 04/11/2023
-# Use Spot VMs with Batch
+# Use Spot VMs with Batch workloads
Azure Batch offers Spot virtual machines (VMs) to reduce the cost of Batch workloads. Spot VMs make new types of Batch workloads possible by enabling a large amount of compute power to be used for a low cost. Spot VMs take advantage of surplus capacity in Azure. When you specify Spot VMs in your pools, Azure Batch can use this surplus, when available.
-The tradeoff for using Spot VMs is that those VMs may not always be available to be allocated, or may be preempted at any time, depending on available capacity. For this reason, Spot VMs are most suitable for batch and asynchronous processing workloads where the job completion time is flexible and the work is distributed across many VMs.
+The tradeoff for using Spot VMs is that those VMs might not always be available, or they might get preempted at any time, depending on available capacity. For this reason, Spot VMs are most suitable for batch and asynchronous processing workloads where the job completion time is flexible and the work is distributed across many VMs.
-Spot VMs are offered at a reduced price compared with dedicated VMs. For pricing details, see [Batch Pricing](https://azure.microsoft.com/pricing/details/batch/).
+Spot VMs are offered at a reduced price compared with dedicated VMs. To learn more about pricing, see [Batch pricing](https://azure.microsoft.com/pricing/details/batch/).
## Differences between Spot and low-priority VMs
-Batch offers two types of low-cost pre-emptible VMs:
+Batch offers two types of low-cost preemptible VMs:
- [Spot VMs](../virtual-machines/spot-vms.md), a modern Azure-wide offering also available as single-instance VMs or Virtual Machine Scale Sets. - Low-priority VMs, a legacy offering only available through Azure Batch.
-The type of node you get depends on your Batch account's pool allocation mode, which is settable during account creation. Batch accounts that use the **user subscription** pool allocation mode always get Spot VMs. Batch accounts that use the **Batch managed** pool allocation mode always get low-priority VMs.
+The type of node you get depends on your Batch account's pool allocation mode, which can be set during account creation. Batch accounts that use the **user subscription** pool allocation mode always get Spot VMs. Batch accounts that use the **Batch managed** pool allocation mode always get low-priority VMs.
> [!WARNING]
-> Support for low-priority VMs will be retired after **30 September 2025**. Please
-> [migrate to using Spot VMs in Batch](low-priority-vms-retirement-migration-guide.md) before then.
+> Low-priority VMs will be retired after **30 September 2025**. Please [migrate to Spot VMs in Batch](low-priority-vms-retirement-migration-guide.md) before then.
Azure Spot VMs and Batch low-priority VMs are similar but have a few differences in behavior.
Azure Spot VMs and Batch low-priority VMs are similar but have a few differences
|-|-|-| | **Supported Batch accounts** | User-subscription Batch accounts | Batch-managed Batch accounts | | **Supported Batch pool configurations** | Virtual Machine Configuration | Virtual Machine Configuration and Cloud Service Configuration (deprecated) |
-| **Available regions** | All regions supporting [Spot VMs](../virtual-machines/spot-vms.md) | All regions except Microsoft Azure China 21Vianet |
-| **Customer eligibility** | Not available to some subscription offer types. See more about [Spot limitations](../virtual-machines/spot-vms.md#limitations) | Available for all Batch customers |
+| **Available regions** | All regions that support [Spot VMs](../virtual-machines/spot-vms.md) | All regions except Microsoft Azure China 21Vianet |
+| **Customer eligibility** | Not available for some subscription offer types. See more about [Spot limitations](../virtual-machines/spot-vms.md#limitations). | Available for all Batch customers |
| **Possible reasons for eviction** | Capacity | Capacity | | **Pricing Model** | Variable discounts relative to standard VM prices | Fixed discounts relative to standard VM prices | | **Quota model** | Subject to core quotas on your subscription | Subject to core quotas on your Batch account |
Azure Spot VMs and Batch low-priority VMs are similar but have a few differences
Azure Batch provides several capabilities that make it easy to consume and benefit from Spot VMs: -- Batch pools can contain both dedicated VMs and Spot VMs. The number of each type of VM can be specified when a pool is created, or changed at any time for an existing pool, using the explicit resize operation or using autoscale. Job and task submission can remain unchanged, regardless of the VM types in the pool. You can also configure a pool to completely use Spot VMs to run jobs as cheaply as possible, but spin up dedicated VMs if the capacity drops below a minimum threshold, to keep jobs running.
+- Batch pools can contain both dedicated VMs and Spot VMs. The number of each type of VM can be specified when a pool is created, or changed at any time for an existing pool, by using the explicit resize operation or by using autoscale. Job and task submission can remain unchanged, regardless of the VM types in the pool. You can also configure a pool to completely use Spot VMs to run jobs as cheaply as possible, but spin up dedicated VMs if the capacity drops below a minimum threshold, to keep jobs running.
- Batch pools automatically seek the target number of Spot VMs. If VMs are preempted or unavailable, Batch attempts to replace the lost capacity and return to the target. - When tasks are interrupted, Batch detects and automatically requeues tasks to run again. - Spot VMs have a separate vCPU quota that differs from the one for dedicated VMs. The quota for Spot VMs is higher than the quota for dedicated VMs, because Spot VMs cost less. For more information, see [Batch service quotas and limits](batch-quota-limit.md#resource-quotas). ## Considerations and use cases
-Many Batch workloads are a good fit for Spot VMs. Consider using them when jobs are broken into many parallel tasks, or when you have many jobs that are scaled out and distributed across many VMs.
+Many Batch workloads are a good fit for Spot VMs. Consider using Spot VMs when jobs are broken into many parallel tasks, or when you have many jobs that are scaled out and distributed across many VMs.
-Some examples of batch processing use cases well suited to use Spot VMs are:
+Some examples of batch processing use cases that are well suited for Spot VMs are:
- **Development and testing**: In particular, if large-scale solutions are being developed, significant savings can be realized. All types of testing can benefit, but large-scale load testing and regression testing are great uses. - **Supplementing on-demand capacity**: Spot VMs can be used to supplement regular dedicated VMs. When available, jobs can scale and therefore complete quicker for lower cost; when not available, the baseline of dedicated VMs remains available.-- **Flexible job execution time**: If there's flexibility in the time jobs have to complete, then potential drops in capacity can be tolerated; however, with the addition of Spot VMs jobs frequently run faster and for a lower cost.
+- **Flexible job execution time**: If there's flexibility in the time jobs have to complete, then potential drops in capacity can be tolerated. However, with the addition of Spot VMs, jobs frequently run faster and for a lower cost.
Batch pools can be configured to use Spot VMs in a few ways: - A pool can use only Spot VMs. In this case, Batch recovers any preempted capacity when available. This configuration is the cheapest way to execute jobs. - Spot VMs can be used with a fixed baseline of dedicated VMs. The fixed number of dedicated VMs ensures there's always some capacity to keep a job progressing.-- A pool can use a dynamic mix of dedicated and Spot VMs, so that the cheaper Spot VMs are solely used when available, but the full-priced dedicated VMs are scaled up when required. This configuration keeps a minimum amount of capacity available to keep the jobs progressing.
+- A pool can use a dynamic mix of dedicated and Spot VMs, so that the cheaper Spot VMs are solely used when available, but the full-priced dedicated VMs scale up when required. This configuration keeps a minimum amount of capacity available to keep jobs progressing.
Keep in mind the following practices when planning your use of Spot VMs: -- To maximize use of surplus capacity in Azure, suitable jobs can scale out.-- Occasionally VMs may not be available or are preempted, which results in reduced capacity for jobs and may lead to task interruption and reruns.-- Tasks with shorter execution times tend to work best with Spot VMs. Jobs with longer tasks may be impacted more if interrupted. If long-running tasks implement checkpointing to save progress as they execute, this impact may be reduced.-- Long-running MPI jobs that utilize multiple VMs aren't well suited to use Spot VMs, because one preempted VM can lead to the whole job having to run again.
+- To maximize the use of surplus capacity in Azure, suitable jobs can scale out.
+- Occasionally, VMs might not be available or are preempted, which results in reduced capacity for jobs and could lead to task interruption and reruns.
+- Tasks with shorter execution times tend to work best with Spot VMs. Jobs with longer tasks might be impacted more if interrupted. If long-running tasks implement checkpointing to save progress as they execute, this impact might be reduced.
+- Long-running MPI jobs that utilize multiple VMs aren't well suited for Spot VMs, because one preempted VM can lead to the whole job having to run again.
- Spot nodes may be marked as unusable if [network security group (NSG) rules](batch-virtual-network.md#general-virtual-network-requirements) are configured incorrectly. ## Create and manage pools with Spot VMs A Batch pool can contain both dedicated and Spot VMs (also referred to as compute nodes). You can set the target number of compute nodes for both dedicated and Spot VMs. The target number of nodes specifies the number of VMs you want to have in the pool.
-For example, to create a pool using Azure virtual machines (in this case Linux VMs) with a target of 5 dedicated VMs and 20 Spot VMs:
+The following example creates a pool using Azure virtual machines, in this case Linux VMs, with a target of 5 dedicated VMs and 20 Spot VMs:
```csharp ImageReference imageRef = new ImageReference(
int? numDedicated = pool1.CurrentDedicatedComputeNodes;
int? numLowPri = pool1.CurrentLowPriorityComputeNodes; ```
-Pool nodes have a property to indicate if the node is a dedicated or
-Spot VM:
+Pool nodes have a property to indicate if the node is a dedicated or Spot VM:
```csharp bool? isNodeDedicated = poolNode.IsDedicated; ```
-VMs may occasionally be preempted. When preemption happens, tasks that were running on the preempted node VMs are requeued and run again.
+Spot VMs might occasionally be preempted. When preemption happens, tasks that were running on the preempted node VMs are requeued and run again when capacity returns.
For Virtual Machine Configuration pools, Batch also performs the following behaviors: -- The preempted VMs have their state updated to **Preempted**.
+- The preempted VMs have their state updated to *Preempted*.
- The VM is effectively deleted, leading to loss of any data stored locally on the VM.-- A list nodes operation on the pool will still return the preempted nodes.-- The pool continually attempts to reach the target number of Spot nodes available. When replacement capacity is found, the nodes keep their IDs, but are reinitialized, going through **Creating** and **Starting** states before they're available for task scheduling.
+- A list nodes operation on the pool still returns the preempted nodes.
+- The pool continually attempts to reach the target number of Spot nodes available. When replacement capacity is found, the nodes keep their IDs, but are reinitialized, going through *Creating* and *Starting* states before they're available for task scheduling.
- Preemption counts are available as a metric in the Azure portal. ## Scale pools containing Spot VMs
-As with pools solely consisting of dedicated VMs, it's possible to scale a pool containing Spot VMs by calling the Resize method or by using autoscale.
+As with pools solely consisting of dedicated VMs, it's possible to scale a pool containing Spot VMs by calling the `Resize` method or by using autoscale.
-The pool resize operation takes a second optional parameter that updates the value of **targetLowPriorityNodes**:
+The pool resize operation takes a second optional parameter that updates the value of `targetLowPriorityNodes`:
```csharp pool.Resize(targetDedicatedComputeNodes: 0, targetLowPriorityComputeNodes: 25);
pool.Resize(targetDedicatedComputeNodes: 0, targetLowPriorityComputeNodes: 25);
The pool autoscale formula supports Spot VMs as follows: -- You can get or set the value of the service-defined variable **$TargetLowPriorityNodes**.-- You can get the value of the service-defined variable **$CurrentLowPriorityNodes**.-- You can get the value of the service-defined variable **$PreemptedNodeCount**. This variable returns the number of nodes in the preempted state and allows you to scale up or down the number of dedicated nodes, depending on the number of preempted nodes that are unavailable.
+- You can get or set the value of the service-defined variable `$TargetLowPriorityNodes`.
+- You can get the value of the service-defined variable `$CurrentLowPriorityNodes`.
+- You can get the value of the service-defined variable `$PreemptedNodeCount`. This variable returns the number of nodes in the preempted state and allows you to scale up or down the number of dedicated nodes, depending on the number of preempted nodes that are unavailable.
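For example, the following sketch applies an autoscale formula, with arbitrary target numbers and a placeholder pool ID, that keeps 25 Spot nodes as the target and backs preempted capacity with up to 10 dedicated nodes:

```csharp
using System;

// 'batchClient' is an open BatchClient; the pool ID and target numbers are arbitrary.
// Target 25 Spot nodes, and temporarily add up to 10 dedicated nodes while Spot
// nodes are preempted and unavailable.
string formula = @"
    $TargetLowPriorityNodes = 25;
    $TargetDedicatedNodes = min($PreemptedNodeCount, 10);
";

// Apply the formula; the autoscale evaluation interval can't be shorter than 5 minutes.
batchClient.PoolOperations.EnableAutoScale("mypool", formula, TimeSpan.FromMinutes(5));
```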
## Configure jobs and tasks Jobs and tasks may require some extra configuration for Spot nodes: -- The JobManagerTask property of a job has an **AllowLowPriorityNode** property. When this property is true, the job manager task can be scheduled on either a dedicated or Spot node. If it's false, the job manager task is scheduled to a dedicated node only.
+- The `JobManagerTask` property of a job has an `AllowLowPriorityNode` property. When this property is true, the job manager task can be scheduled on either a dedicated or Spot node. If it's false, the job manager task is scheduled to a dedicated node only.
- The `AZ_BATCH_NODE_IS_DEDICATED` [environment variable](batch-compute-node-environment-variables.md) is available to a task application so that it can determine whether it's running on a Spot or on a dedicated node. ## View metrics for Spot VMs
New metrics are available in the [Azure portal](https://portal.azure.com) for Sp
- Low-Priority Core Count - Preempted Node Count
-To view these metrics in the Azure portal
+To view these metrics in the Azure portal:
1. Navigate to your Batch account in the Azure portal. 2. Select **Metrics** from the **Monitoring** section.
To view these metrics in the Azure portal
- Spot VMs in Batch don't support setting a max price and don't support price-based evictions. They can only be evicted for capacity reasons. - Spot VMs are only available for Virtual Machine Configuration pools and not for Cloud Service Configuration pools, which are [deprecated](https://azure.microsoft.com/updates/azure-batch-cloudserviceconfiguration-pools-will-be-retired-on-29-february-2024/).-- Spot VMs aren't available for some clouds, VM sizes, and subscription offer types. See more about [Spot limitations](../virtual-machines/spot-vms.md#limitations).-- Currently, [Ephemeral OS disks](create-pool-ephemeral-os-disk.md) aren't supported with Spot VMs due to the service managed
-eviction policy of Stop-Deallocate.
+- Spot VMs aren't available for some clouds, VM sizes, and subscription offer types. See more about [Spot VM limitations](../virtual-machines/spot-vms.md#limitations).
+- Currently, [ephemeral OS disks](create-pool-ephemeral-os-disk.md) aren't supported with Spot VMs due to the service-managed eviction policy of *Stop-Deallocate*.
## Next steps - Learn about the [Batch service workflow and primary resources](batch-service-workflow-features.md) such as pools, nodes, jobs, and tasks. - Learn about the [Batch APIs and tools](batch-apis-tools.md) available for building Batch solutions.-- Start to plan the move from low-priority VMs to Spot VMs. If you use low-priority VMs with **Cloud Services Configuration** pools (which are [deprecated](https://azure.microsoft.com/updates/azure-batch-cloudserviceconfiguration-pools-will-be-retired-on-29-february-2024/)), plan to migrate to [**Virtual Machine configuration** pools](nodes-and-pools.md#configurations) instead.
+- Start to plan the move from low-priority VMs to Spot VMs. If you use low-priority VMs with *Cloud Services Configuration* pools (which are [deprecated](https://azure.microsoft.com/updates/azure-batch-cloudserviceconfiguration-pools-will-be-retired-on-29-february-2024)), plan to migrate to [Virtual Machine Configuration pools](nodes-and-pools.md#configurations) instead.
batch Nodes And Pools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/nodes-and-pools.md
Title: Nodes and pools in Azure Batch description: Learn about compute nodes and pools and how they are used in an Azure Batch workflow from a development standpoint. Previously updated : 12/13/2021 Last updated : 04/11/2023 # Nodes and pools in Azure Batch
There are two types of pool configurations available in Batch.
The **Virtual Machine Configuration** specifies that the pool is composed of Azure virtual machines. These VMs may be created from either Linux or Windows images.
+> [!IMPORTANT]
+> Currently, Batch does not support [Trusted Launch VMs](../virtual-machines/trusted-launch.md).
+ The [Batch node agent](https://github.com/Azure/Batch/blob/master/changelogs/nodeagent/CHANGELOG.md) is a program that runs on each node in the pool and provides the command-and-control interface between the node and the Batch service. There are different implementations of the node agent, known as SKUs, for different operating systems. When you create a pool based on the Virtual Machine Configuration, you must specify not only the size of the nodes and the source of the images used to create them, but also the **virtual machine image reference** and the Batch **node agent SKU** to be installed on the nodes. For more information about specifying these pool properties, see [Provision Linux compute nodes in Azure Batch pools](batch-linux-nodes.md). You can optionally attach one or more empty data disks to pool VMs created from Marketplace images, or include data disks in custom images used to create the VMs. When including data disks, you need to mount and format the disks from within a VM to use them. ### Cloud Services Configuration
When you create a pool, you need to select the appropriate **nodeAgentSkuId**, d
To learn how to create a pool with custom images, see [Use the Azure Compute Gallery to create a custom pool](batch-sig-images.md).
-Alternatively, you can create a custom pool of virtual machines using a [managed image](batch-custom-images.md) resource. For information about preparing custom Linux images from Azure VMs, see [How to create an image of a virtual machine or VHD](../virtual-machines/linux/capture-image.md). For information about preparing custom Windows images from Azure VMs, see [Create a managed image of a generalized VM in Azure](../virtual-machines/windows/capture-image-resource.md).
- ### Container support in Virtual Machine pools When creating a Virtual Machine Configuration pool using the Batch APIs, you can set up the pool to run tasks in Docker containers. Currently, you must create the pool using an image that supports Docker containers. Use the Windows Server 2016 Datacenter with Containers image from the Azure Marketplace, or supply a custom VM image that includes Docker Community Edition or Enterprise Edition and any required drivers. The pool settings must include a [container configuration](/rest/api/batchservice/pool/add) that copies container images to the VMs when the pool is created. Tasks that run on the pool can then reference the container images and container run options.
batch Quick Create Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/quick-create-bicep.md
Title: Azure Quickstart - Create a Batch account - Bicep file description: This quickstart shows how to create a Batch account by using a Bicep file. Previously updated : 03/22/2022 Last updated : 04/11/2023 -+ tags: azure-resource-manager, bicep
cdn Create Profile Endpoint Terraform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/create-profile-endpoint-terraform.md
+
+ Title: 'Quickstart: Create an Azure CDN profile and endpoint using Terraform'
+
+description: 'In this article, you create an Azure CDN profile and endpoint using Terraform'
+++ Last updated : 4/12/2023+++++
+# Quickstart: Create an Azure CDN profile and endpoint using Terraform
+
+This article shows how to use Terraform to create an [Azure CDN profile and endpoint](/azure/cdn/cdn-overview) using [Terraform](/azure/developer/terraform/quickstart-configure).
++
+In this article, you learn how to:
+
+> [!div class="checklist"]
+> * Create a random pet name for the Azure resource group name using [random_pet](https://registry.terraform.io/providers/hashicorp/random/latest/docs/resources/pet)
+> * Create an Azure resource group using [azurerm_resource_group](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/resource_group)
+> * Create a random string for the CDN endpoint name using [random_string](https://registry.terraform.io/providers/hashicorp/random/latest/docs/resources/string)
+> * Create an Azure CDN profile using [azurerm_cdn_profile](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/cdn_profile)
+> * Create an Azure CDN endpoint using [azurerm_cdn_endpoint](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/cdn_endpoint)
++
+## Prerequisites
+
+- [Install and configure Terraform](/azure/developer/terraform/quickstart-configure)
+
+## Implement the Terraform code
+
+> [!NOTE]
+> The sample code for this article is located in the [Azure Terraform GitHub repo](https://github.com/Azure/terraform/tree/master/quickstart/101-cdn-with-custom-origin). You can view the log file containing the [test results from current and previous versions of Terraform](https://github.com/Azure/terraform/blob/master/quickstart/101-cdn-with-custom-origin/TestRecord.md).
+>
+> See more [articles and sample code showing how to use Terraform to manage Azure resources](/azure/terraform)
+
+1. Create a directory in which to test and run the sample Terraform code and make it the current directory.
+
+1. Create a file named `main.tf` and insert the following code:
+
+ [!code-terraform[master](~/terraform_samples/quickstart/101-cdn-with-custom-origin/main.tf)]
+
+1. Create a file named `outputs.tf` and insert the following code:
+
+ [!code-terraform[master](~/terraform_samples/quickstart/101-cdn-with-custom-origin/outputs.tf)]
+
+1. Create a file named `providers.tf` and insert the following code:
+
+ [!code-terraform[master](~/terraform_samples/quickstart/101-cdn-with-custom-origin/providers.tf)]
+
+1. Create a file named `variables.tf` and insert the following code:
+
+ [!code-terraform[master](~/terraform_samples/quickstart/101-cdn-with-custom-origin/variables.tf)]
+
+## Initialize Terraform
++
+## Create a Terraform execution plan
++
+## Apply a Terraform execution plan
++
+## Verify the results
+
+#### [Azure CLI](#tab/azure-cli)
+
+1. Get the Azure resource group name in which the Azure CDN profile and endpoint were created.
+
+ ```console
+ resource_group_name=$(terraform output -raw resource_group_name)
+ ```
+
+1. Get the CDN profile name.
+
+ ```console
+ cdn_profile_name=$(terraform output -raw cdn_profile_name)
+ ```
+
+1. Get the CDN endpoint name.
+
+ ```console
+ cdn_endpoint_endpoint_name=$(terraform output -raw cdn_endpoint_endpoint_name)
+ ```
+
+1. Run [az cdn custom-domain show](/cli/azure/cdn/custom-domain#az-cdn-custom-domain-show) to show details of the custom domain you created in this article.
+
+ ```azurecli
+ az cdn endpoint show --resource-group $resource_group_name \
+ --profile-name $cdn_profile_name \
+ --name $cdn_endpoint_endpoint_name
+ ```
+
+#### [Azure PowerShell](#tab/azure-powershell)
+
+1. Get the Azure resource group name in which the Azure CDN profile and endpoint were created.
+
+ ```console
+ $resource_group_name=$(terraform output -raw resource_group_name)
+ ```
+
+1. Get the CDN profile name.
+
+ ```console
+ $cdn_profile_name=$(terraform output -raw cdn_profile_name)
+ ```
+
+1. Get the CDN endpoint name.
+
+ ```console
+ $cdn_endpoint_endpoint_name=$(terraform output -raw cdn_endpoint_endpoint_name)
+ ```
+
+1. Run [Get-AzCdnEndpoint](/powershell/module/az.cdn/get-azcdnendpoint) to show details of the custom domain you created in this article.
+
+ ```console
+ Get-AzCdnEndpoint -ResourceGroupName $resource_group_name `
+ -ProfileName $cdn_profile_name `
+ -Name $cdn_endpoint_endpoint_name
+ ```
+++
+## Clean up resources
++
+## Troubleshoot Terraform on Azure
+
+[Troubleshoot common problems when using Terraform on Azure](/azure/developer/terraform/troubleshoot)
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Tutorial: Use CDN to serve static content from a web app](cdn-add-to-web-app.md)
cloud-services Cloud Services Guestos Msrc Releases https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-guestos-msrc-releases.md
na Previously updated : 3/28/2023 Last updated : 4/12/2023
# Azure Guest OS The following tables show the Microsoft Security Response Center (MSRC) updates applied to the Azure Guest OS. Search this article to determine if a particular update applies to the Guest OS you are using. Updates always carry forward for the particular [family][family-explain] they were introduced in.
+## April 2023 Guest OS
+
+> [!NOTE]
+> The April Guest OS is currently being rolled out to Cloud Service VMs that are configured for automatic updates. When the rollout is complete, this version will be made available for manual updates through the Azure portal and configuration files. The following patches are included in the April Guest OS. This list is subject to change.
+
+| Product Category | Parent KB Article | Vulnerability Description | Guest OS | Date First Introduced |
+| | | | | |
+| Rel 23-04 | [5025228] | Latest Cumulative Update (LCU) | 5.80 | Apr 11, 2023 |
+| Rel 23-04 | [5022835] | IE Cumulative Updates | 2.136, 3.123, 4.116 | Feb 14, 2023 |
+| Rel 23-04 | [5025230] | Latest Cumulative Update (LCU) | 7.24 | Apr 11, 2023 |
+| Rel 23-04 | [5025229] | Latest Cumulative Update (LCU) | 6.56 | Apr 11, 2023 |
+| Rel 23-04 | [5022523] | .NET Framework 3.5 Security and Quality Rollup LKG  | 2.136 | Feb 14, 2023 |
+| Rel 23-04 | [5022515] | .NET Framework 4.6.2 Security and Quality Rollup LKG  | 2.136 | Feb 14, 2023 |
+| Rel 23-04 | [5022525] | .NET Framework 3.5 Security and Quality Rollup LKG  | 4.116 | Feb 14, 2023 |
+| Rel 23-04 | [5022513] | .NET Framework 4.6.2 Security and Quality Rollup LKG  | 4.116 | Feb 14, 2023 |
+| Rel 23-04 | [5022574] | .NET Framework 3.5 Security and Quality Rollup LKG  | 3.123 | Feb 14, 2023 |
+| Rel 23-04 | [5022512] | .NET Framework 4.6.2 Security and Quality Rollup LKG  | 3.123 | Feb 14, 2023 |
+| Rel 23-04 | [5022511] | .NET Framework 4.7.2 Cumulative Update LKG  | 6.56 | Feb 14, 2023 |
+| Rel 23-04 | [5022507] | .NET Framework 4.8 Security and Quality Rollup LKG  | 7.24 | Feb 14, 2023 |
+| Rel 23-04 | [5025279] | Monthly Rollup  | 2.136 | Apr 11, 2023 |
+| Rel 23-04 | [5025287] | Monthly Rollup  | 3.123 | Apr 11, 2023 |
+| Rel 23-04 | [5025285] | Monthly Rollup  | 4.116 | Apr 11, 2023 |
+| Rel 23-04 | [5023791] | Servicing Stack Update LKG  | 3.123 | Mar 14, 2023 |
+| Rel 23-04 | [5023790] | Servicing Stack Update LKG  | 4.116 | Mar 14, 2023 |
+| Rel 23-04 | [4578013] | OOB Standalone Security Update  | 4.116 | Aug 19, 2020 |
+| Rel 23-04 | [5023788] | Servicing Stack Update LKG  | 5.80 | Mar 14, 2023 |
+| Rel 23-04 | [5017397] | Servicing Stack Update LKG  | 2.136 | Sep 13, 2022 |
+| Rel 23-04 | [4494175] | Microcode  | 5.80 | Sep 1, 2020 |
+| Rel 23-04 | [4494174] | Microcode  | 6.56 | Sep 1, 2020 |
+| Rel 23-04 | 5025314 | Servicing Stack Update  | 7.24 | |
+
+[5025228]: https://support.microsoft.com/kb/5025228
+[5022835]: https://support.microsoft.com/kb/5022835
+[5025230]: https://support.microsoft.com/kb/5025230
+[5025229]: https://support.microsoft.com/kb/5025229
+[5022523]: https://support.microsoft.com/kb/5022523
+[5022515]: https://support.microsoft.com/kb/5022515
+[5022525]: https://support.microsoft.com/kb/5022525
+[5022513]: https://support.microsoft.com/kb/5022513
+[5022574]: https://support.microsoft.com/kb/5022574
+[5022512]: https://support.microsoft.com/kb/5022512
+[5022511]: https://support.microsoft.com/kb/5022511
+[5022507]: https://support.microsoft.com/kb/5022507
+[5025279]: https://support.microsoft.com/kb/5025279
+[5025287]: https://support.microsoft.com/kb/5025287
+[5025285]: https://support.microsoft.com/kb/5025285
+[5023791]: https://support.microsoft.com/kb/5023791
+[5023790]: https://support.microsoft.com/kb/5023790
+[4578013]: https://support.microsoft.com/kb/4578013
+[5023788]: https://support.microsoft.com/kb/5023788
+[5017397]: https://support.microsoft.com/kb/5017397
+[4494175]: https://support.microsoft.com/kb/4494175
+[4494174]: https://support.microsoft.com/kb/4494174
+ ## March 2023 Guest OS
The following tables show the Microsoft Security Response Center (MSRC) updates
| Rel 23-03 | [5017397] | Servicing Stack Update LKG  | [2.135] | Sep 13, 2022 | | Rel 23-03 | [4494175] | Microcode  | [5.79] | Sep 1, 2020 | | Rel 23-03 | [4494174] | Microcode  | [6.55] | Sep 1, 2020 |
-| Rel 23-03 | [5023793] | Servicing Stack Update  | [7.23] | |
+| Rel 23-03 | 5023793 | Servicing Stack Update  | [7.23] | |
[5023697]: https://support.microsoft.com/kb/5023697 [5022835]: https://support.microsoft.com/kb/5022835
cloud-shell Persisting Shell Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-shell/persisting-shell-storage.md
your behalf in the supported region that's nearest to you:
- Storage account: `cs<uniqueGuid>` - fileshare: `cs-<user>-<domain>-com-<uniqueGuid>`
-![Screenshot of choosing the subscription for your storage account][09]
+![Screenshot of choosing the subscription for your storage account.][06]
The fileshare mounts as `clouddrive` in your `$HOME` directory. This is a one-time action, and the fileshare mounts automatically in subsequent sessions.
and zone-redundant storage (ZRS) accounts.
> [!NOTE] > Using GRS or ZRS storage accounts are recommended for additional resiliency for your backing file > share. Which type of redundancy depends on your goals and price preference.
-> [Learn more about replication options for Azure Storage accounts][04].
+> [Learn more about replication options for Azure Storage accounts][03].
-![Screenshot of configuring your storage account][08]
+![Screenshot of configuring your storage account.][05]
## Securing storage access
of their fileshare.
Storage accounts that you create in Cloud Shell are tagged with `ms-resource-usage:azure-cloud-shell`. If you want to disallow users from creating storage accounts
-in Cloud Shell, create an [Azure resource policy for tags][03] that is triggered by this specific
+in Cloud Shell, create an [Azure resource policy for tags][02] that is triggered by this specific
tag. ## How Cloud Shell storage works
Cloud Shell persists files through both of the following methods:
In Cloud Shell, you can run a command called `clouddrive`, which enables you to manually update the fileshare that's mounted to Cloud Shell.
-![Screenshot of running the clouddrive command in bash][10]
+![Screenshot of running the clouddrive command in bash.][07]
### List `clouddrive`
clouddrive mount -s mySubscription -g myRG -n storageAccountName -f fileShareNam
To view more details, run `clouddrive mount -h`, as shown here:
-![Screenshot of running the clouddrive mount command in bash][11]
+![Screenshot of running the clouddrive mount command in bash.][12]
### Unmount clouddrive
The unmounted fileshare continues to exist until you manually delete it. After u
Shell no longer searches for this fileshare in subsequent sessions. To view more details, run `clouddrive unmount -h`, as shown here:
-![Screenshot of running the clouddrive unmount command in bash][12]
+![Screenshot of running the clouddrive unmount command in bash.][13]
> [!WARNING] > Although running this command doesn't delete any resources, manually deleting a resource group,
Shell no longer searches for this fileshare in subsequent sessions. To view more
The `Get-CloudDrive` cmdlet retrieves the Azure fileshare information currently mounted by the `clouddrive` in Cloud Shell.
-![Screenshot of running the Get-CloudDrive command in PowerShell][07]
+![Screenshot of running the Get-CloudDrive command in PowerShell.][11]
### Unmount `clouddrive`
Dismounting the `clouddrive` terminates the current session.
If the Azure fileshare has been removed, you'll be prompted to create and mount a new Azure fileshare in the next session.
-![Screenshot of running the Dismount-CloudDrive command in PowerShell][06]
+![Screenshot of running the Dismount-CloudDrive command in PowerShell.][08]
## Transfer local files to Cloud Shell
The `clouddrive` directory syncs with the Azure portal storage blade. Use this b
local files to or from your file share. Updating files from within Cloud Shell is reflected in the file storage GUI when you refresh the blade.
-### Download files
+### Download files from the Azure portal
+
+![Screenshot listing local files in the Azure portal.][09]
-![Screenshot listing local files in the Azure portal][13]
1. In the Azure portal, go to the mounted file share.
-2. Select the target file.
-3. Select the **Download** button.
+1. Select the target file.
+1. Select the **Download** button.
+
+### Download files in Azure Cloud Shell
+
+1. In an Azure Cloud Shell session, select the **Upload/Download files** icon and select the
+ **Download** option.
+1. In the **Download a file** dialog, enter the path to the file you want to download.
+
+ ![Screenshot of the download dialog box in Cloud Shell.][10]
+
+ You can only download files located under your `$HOME` folder.
+1. Select the **Download** button.
### Upload files
-![Screenshot showing how to upload files in the Azure portal][14]
+![Screenshot showing how to upload files in the Azure portal.][14]
+ 1. Go to your mounted file share.
-2. Select the **Upload** button.
-3. Select the file or files that you want to upload.
-4. Confirm the upload.
+1. Select the **Upload** button.
+1. Select the file or files that you want to upload.
+1. Confirm the upload.
You should now see the files that are accessible in your `clouddrive` directory in Cloud Shell.
You should now see the files that are accessible in your `clouddrive` directory
> If you need to define a function in a file and call it from the PowerShell cmdlets, then the > dot operator must be included. For example: `. .\MyFunctions.ps1`
+### Upload files in Azure Cloud Shell
+
+1. In an Azure Cloud Shell session, select the **Upload/Download files** icon and select the
+ **Upload** option.
+1. Your browser opens a file dialog. Select the file you want to upload, and then select the **Open**
+   button.
+
+The file is uploaded to the root of your `$HOME` folder. You can move the file after it's uploaded.
+ ## Next steps - [Cloud Shell Quickstart][15]-- [Learn about Microsoft Azure Files storage][05]-- [Learn about storage tags][02]
+- [Learn about Microsoft Azure Files storage][04]
+- [Learn about storage tags][01]
<!-- link references -->
-[01]: includes/cloud-shell-persisting-shell-storage-endblock.md
-[02]: ../azure-resource-manager/management/tag-resources.md
-[03]: ../governance/policy/samples/index.md
-[04]: ../storage/common/storage-redundancy.md
-[05]: ../storage/files/storage-files-introduction.md
-[06]: media/persisting-shell-storage/dismount-clouddrive.png
-[07]: media/persisting-shell-storage/get-clouddrive.png
-[08]: media/persisting-shell-storage/advanced-storage.png
-[09]: media/persisting-shell-storage/basic-storage.png
-[10]: media/persisting-shell-storage/clouddrive-h.png
-[11]: media/persisting-shell-storage/mount-h.png
-[12]: media/persisting-shell-storage/unmount-h.png
-[13]: media/persisting-shell-storage/download.png
-[14]: media/persisting-shell-storage/upload.png
+[01]: ../azure-resource-manager/management/tag-resources.md
+[02]: ../governance/policy/samples/index.md
+[03]: ../storage/common/storage-redundancy.md
+[04]: ../storage/files/storage-files-introduction.md
+[05]: media/persisting-shell-storage/advanced-storage.png
+[06]: media/persisting-shell-storage/basic-storage.png
+[07]: media/persisting-shell-storage/clouddrive-h.png
+[08]: media/persisting-shell-storage/dismount-clouddrive.png
+[09]: media/persisting-shell-storage/download-portal.png
+[10]: media/persisting-shell-storage/download-shell.png
+[11]: media/persisting-shell-storage/get-clouddrive.png
+[12]: media/persisting-shell-storage/mount-h.png
+[13]: media/persisting-shell-storage/unmount-h.png
+[14]: media/persisting-shell-storage/upload-portal.png
[15]: quickstart.md
cognitive-services Audio Processing Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/audio-processing-overview.md
The Microsoft Audio Stack also powers a wide range of Microsoft products:
## Speech SDK integration The Speech SDK integrates Microsoft Audio Stack (MAS), allowing any application or product to use its audio processing capabilities on input audio. Some of the key Microsoft Audio Stack features available via the Speech SDK include:
-* **Realtime microphone input & file input** - Microsoft Audio Stack processing can be applied to real-time microphone input, streams, and file-based input.
+* **Real-time microphone input & file input** - Microsoft Audio Stack processing can be applied to real-time microphone input, streams, and file-based input.
* **Selection of enhancements** - To allow for full control of your scenario, the SDK allows you to disable individual enhancements like dereverberation, noise suppression, automatic gain control, and acoustic echo cancellation. For example, if your scenario does not include rendering output audio that needs to be suppressed from the input audio, you have the option to disable acoustic echo cancellation. * **Custom microphone geometries** - The SDK allows you to provide your own custom microphone geometry information, in addition to supporting preset geometries like linear two-mic, linear four-mic, and circular 7-mic arrays (see more information on supported preset geometries at [Microphone array recommendations](speech-sdk-microphone.md#microphone-geometry)). * **Beamforming angles** - Specific beamforming angles can be provided to optimize audio input originating from a predetermined location, relative to the microphones.
cognitive-services Batch Synthesis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/batch-synthesis.md
The Batch synthesis API (Preview) can synthesize a large volume of text input (l
> [!IMPORTANT] > The Batch synthesis API is currently in public preview. Once it's generally available, the Long Audio API will be deprecated. For more information, see [Migrate to batch synthesis API](migrate-to-batch-synthesis.md).
-The batch synthesis API is asynchronous and doesn't return synthesized audio in real time. You submit text files to be synthesized, poll for the status, and download the audio output when the status indicates success. The text inputs must be plain text or [Speech Synthesis Markup Language (SSML)](speech-synthesis-markup.md) text.
+The batch synthesis API is asynchronous and doesn't return synthesized audio in real-time. You submit text files to be synthesized, poll for the status, and download the audio output when the status indicates success. The text inputs must be plain text or [Speech Synthesis Markup Language (SSML)](speech-synthesis-markup.md) text.
This diagram provides a high-level overview of the workflow.
cognitive-services Batch Transcription Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/batch-transcription-create.md
To create a transcription, use the `spx batch transcription create` command. Con
Here's an example Speech CLI command that creates a transcription job: ```azurecli-interactive
-spx batch transcription create --api-version v3.1 --name "My Transcription" --language "en-US" --content https://crbn.us/hello.wav;https://crbn.us/whatstheweatherlike.wav
+spx batch transcription create --name "My Transcription" --language "en-US" --content https://crbn.us/hello.wav;https://crbn.us/whatstheweatherlike.wav
``` You should receive a response body in the following format:
curl -v -X POST -H "Ocp-Apim-Subscription-Key: YourSubscriptionKey" -H "Content-
::: zone pivot="speech-cli" ```azurecli-interactive
-spx batch transcription create --api-version v3.1 --name "My Transcription" --language "en-US" --content https://crbn.us/hello.wav;https://crbn.us/whatstheweatherlike.wav --model "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/models/base/1aae1070-7972-47e9-a977-87e3b05c457d"
+spx batch transcription create --name "My Transcription" --language "en-US" --content https://crbn.us/hello.wav;https://crbn.us/whatstheweatherlike.wav --model "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/models/base/1aae1070-7972-47e9-a977-87e3b05c457d"
``` ::: zone-end
-To use a Custom Speech model for batch transcription, you need the model's URI. You can retrieve the model location when you create or get a model. The top-level `self` property in the response body is the model's URI. For an example, see the JSON response example in the [Create a model](how-to-custom-speech-train-model.md?pivots=rest-api#create-a-model) guide. A [custom model deployment endpoint](how-to-custom-speech-deploy-model.md) isn't needed for the batch transcription service.
+To use a Custom Speech model for batch transcription, you need the model's URI. You can retrieve the model location when you create or get a model. The top-level `self` property in the response body is the model's URI. For an example, see the JSON response example in the [Create a model](how-to-custom-speech-train-model.md?pivots=rest-api#create-a-model) guide.
+
+> [!TIP]
+> A [hosted deployment endpoint](how-to-custom-speech-deploy-model.md) isn't required to use custom speech with the batch transcription service. You can conserve resources if the [custom speech model](how-to-custom-speech-train-model.md) is only used for batch transcription.
Batch transcription requests for expired models will fail with a 4xx error. You'll want to set the `model` property to a base model or custom model that hasn't yet expired. Otherwise don't include the `model` property to always use the latest base model. For more information, see [Choose a model](how-to-custom-speech-create-project.md#choose-your-model) and [Custom Speech model lifecycle](how-to-custom-speech-model-and-endpoint-lifecycle.md).
cognitive-services Captioning Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/captioning-concepts.md
The following are aspects to consider when using captioning:
> > Try the [Azure Video Indexer](../../azure-video-indexer/video-indexer-overview.md) as a demonstration of how you can get captions for videos that you upload.
-Captioning can accompany real time or pre-recorded speech. Whether you're showing captions in real time or with a recording, you can use the [Speech SDK](speech-sdk.md) or [Speech CLI](spx-overview.md) to recognize speech and get transcriptions. You can also use the [Batch transcription API](batch-transcription.md) for pre-recorded video.
+Captioning can accompany real-time or pre-recorded speech. Whether you're showing captions in real-time or with a recording, you can use the [Speech SDK](speech-sdk.md) or [Speech CLI](spx-overview.md) to recognize speech and get transcriptions. You can also use the [Batch transcription API](batch-transcription.md) for pre-recorded video.
## Caption output format
Welcome to applied Mathematics course 201.
## Input audio to the Speech service
-For real time captioning, use a microphone or audio input stream instead of file input. For examples of how to recognize speech from a microphone, see the [Speech to text quickstart](get-started-speech-to-text.md) and [How to recognize speech](how-to-recognize-speech.md) documentation. For more information about streaming, see [How to use the audio input stream](how-to-use-audio-input-streams.md).
+For real-time captioning, use a microphone or audio input stream instead of file input. For examples of how to recognize speech from a microphone, see the [Speech to text quickstart](get-started-speech-to-text.md) and [How to recognize speech](how-to-recognize-speech.md) documentation. For more information about streaming, see [How to use the audio input stream](how-to-use-audio-input-streams.md).
For captioning of a prerecording, send file input to the Speech service. For more information, see [How to use compressed input audio](how-to-use-codec-compressed-audio-input-streams.md). ## Caption and speech synchronization
-You'll want to synchronize captions with the audio track, whether it's done in real time or with a prerecording.
+You'll want to synchronize captions with the audio track, whether it's done in real-time or with a prerecording.
The Speech service returns the offset and duration of the recognized speech.
Consider when to start displaying captions, and how many words to show at a time
For captioning of prerecorded speech or wherever latency isn't a concern, you could wait for the complete transcription of each utterance before displaying any words. Given the final offset and duration of each word in an utterance, you know when to show subsequent words at pace with the soundtrack.
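For example, the following C# sketch, with placeholder credentials and a hypothetical input file, reads the offset and duration of each final result so you can time captions against the soundtrack:

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.CognitiveServices.Speech;
using Microsoft.CognitiveServices.Speech.Audio;

class CaptionTiming
{
    static async Task Main()
    {
        // Placeholder key, region, and input file name.
        var speechConfig = SpeechConfig.FromSubscription("<your-key>", "<your-region>");
        using var audioConfig = AudioConfig.FromWavFileInput("caption-input.wav");
        using var recognizer = new SpeechRecognizer(speechConfig, audioConfig);

        recognizer.Recognized += (s, e) =>
        {
            if (e.Result.Reason == ResultReason.RecognizedSpeech)
            {
                // Offset and duration are reported in 100-nanosecond ticks.
                var start = TimeSpan.FromTicks((long)e.Result.OffsetInTicks);
                var end = start + e.Result.Duration;
                Console.WriteLine($"{start} --> {end}: {e.Result.Text}");
            }
        };

        await recognizer.StartContinuousRecognitionAsync();
        await Task.Delay(TimeSpan.FromSeconds(30)); // Keep the session alive briefly for the demo.
        await recognizer.StopContinuousRecognitionAsync();
    }
}
```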
-Real time captioning presents tradeoffs with respect to latency versus accuracy. You could show the text from each `Recognizing` event as soon as possible. However, if you can accept some latency, you can improve the accuracy of the caption by displaying the text from the `Recognized` event. There's also some middle ground, which is referred to as "stable partial results".
+Real-time captioning presents tradeoffs with respect to latency versus accuracy. You could show the text from each `Recognizing` event as soon as possible. However, if you can accept some latency, you can improve the accuracy of the caption by displaying the text from the `Recognized` event. There's also some middle ground, which is referred to as "stable partial results".
You can request that the Speech service return fewer `Recognizing` events that are more accurate. This is done by setting the `SpeechServiceResponse_StablePartialResultThreshold` property to a value between `0` and `2147483647`. The value that you set is the number of times a word has to be recognized before the Speech service returns a `Recognizing` event. For example, if you set the `SpeechServiceResponse_StablePartialResultThreshold` property value to `5`, the Speech service will affirm recognition of a word at least five times before returning the partial results to you with a `Recognizing` event.
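For example, here's a one-line C# sketch that sets this property on the `speechConfig` object from the previous sketch; the threshold value of 5 is arbitrary:

```csharp
// Ask the service to affirm a word at least 5 times before including it in
// Recognizing (partial) results. Higher values trade latency for stability.
speechConfig.SetProperty(PropertyId.SpeechServiceResponse_StablePartialResultThreshold, "5");
```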
cognitive-services Conversation Transcription https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/conversation-transcription.md
See the real-time conversation transcription [quickstart](how-to-use-conversatio
## Use cases
-To make meetings inclusive for everyone, such as participants who are deaf and hard of hearing, it's important to have transcription in real time. Conversation transcription in real-time mode takes meeting audio and determines who is saying what, allowing all meeting participants to follow the transcript and participate in the meeting, without a delay.
+To make meetings inclusive for everyone, such as participants who are deaf and hard of hearing, it's important to have transcription in real-time. Conversation transcription in real-time mode takes meeting audio and determines who is saying what, allowing all meeting participants to follow the transcript and participate in the meeting, without a delay.
Meeting participants can focus on the meeting and leave note-taking to conversation transcription. Participants can actively engage in the meeting and quickly follow up on next steps, using the transcript instead of taking notes and potentially missing something during the meeting.
Currently, conversation transcription supports [all speech-to-text languages](la
## Next steps > [!div class="nextstepaction"]
-> [Transcribe conversations in real time](how-to-use-conversation-transcription.md)
+> [Transcribe conversations in real-time](how-to-use-conversation-transcription.md)
cognitive-services Custom Neural Voice https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/custom-neural-voice.md
Here's an overview of the steps to create a custom neural voice in Speech Studio
1. [Test your voice](how-to-custom-voice-create-voice.md#test-your-voice-model). Prepare test scripts for your voice model that cover the different use cases for your apps. ItΓÇÖs a good idea to use scripts within and outside the training dataset, so you can test the quality more broadly for different content. 1. [Deploy and use your voice model](how-to-deploy-and-use-endpoint.md) in your apps.
-You can tune, adjust, and use your custom voice, similarly as you would use a prebuilt neural voice. Convert text into speech in real time, or generate audio content offline with text input. You can do this by using the [REST API](./rest-text-to-speech.md), the [Speech SDK](./get-started-text-to-speech.md), or the [Speech Studio](https://speech.microsoft.com/audiocontentcreation).
+You can tune, adjust, and use your custom voice, similarly as you would use a prebuilt neural voice. Convert text into speech in real-time, or generate audio content offline with text input. You can do this by using the [REST API](./rest-text-to-speech.md), the [Speech SDK](./get-started-text-to-speech.md), or the [Speech Studio](https://speech.microsoft.com/audiocontentcreation).
The style and the characteristics of the trained voice model depend on the style and the quality of the recordings from the voice talent used for training. However, you can make several adjustments by using [SSML (Speech Synthesis Markup Language)](./speech-synthesis-markup.md?tabs=csharp) when you make the API calls to your voice model to generate synthetic speech. SSML is the markup language used to communicate with the text-to-speech service to convert text into audio. The adjustments you can make include change of pitch, rate, intonation, and pronunciation correction. If the voice model is built with multiple styles, you can also use SSML to switch the styles.
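As an illustration, here's a minimal C# sketch of sending SSML with a prosody adjustment to a deployed custom voice; the endpoint ID and voice name are placeholders, and the rate and pitch values are only examples:

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.CognitiveServices.Speech;

class Program
{
    static async Task Main()
    {
        var speechConfig = SpeechConfig.FromSubscription("YourSubscriptionKey", "YourServiceRegion");

        // Placeholder values for the deployed custom neural voice.
        speechConfig.EndpointId = "YourCustomVoiceEndpointId";
        speechConfig.SpeechSynthesisVoiceName = "YourCustomVoiceName";

        string ssml = @"<speak version='1.0' xmlns='http://www.w3.org/2001/10/synthesis' xml:lang='en-US'>
  <voice name='YourCustomVoiceName'>
    <prosody rate='-10%' pitch='+5%'>This sentence is spoken slightly slower and a bit higher.</prosody>
  </voice>
</speak>";

        // Synthesize to the default speaker.
        using var synthesizer = new SpeechSynthesizer(speechConfig);
        var result = await synthesizer.SpeakSsmlAsync(ssml);
        Console.WriteLine($"Synthesis result: {result.Reason}");
    }
}
```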
cognitive-services Custom Speech Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/custom-speech-overview.md
# What is Custom Speech?
-With Custom Speech, you can evaluate and improve the Microsoft speech-to-text accuracy for your applications and products.
+With Custom Speech, you can evaluate and improve the accuracy of speech recognition for your applications and products. A custom speech model can be used for [real-time speech-to-text](speech-to-text.md), [speech translation](speech-translation.md), and [batch transcription](batch-transcription.md).
-Out of the box, speech to text utilizes a Universal Language Model as a base model that is trained with Microsoft-owned data and reflects commonly used spoken language. The base model is pre-trained with dialects and phonetics representing a variety of common domains. When you make a speech recognition request, the most recent base model for each [supported language](language-support.md?tabs=stt) is used by default. The base model works very well in most speech recognition scenarios.
+Out of the box, speech recognition utilizes a Universal Language Model as a base model that is trained with Microsoft-owned data and reflects commonly used spoken language. The base model is pre-trained with dialects and phonetics representing a variety of common domains. When you make a speech recognition request, the most recent base model for each [supported language](language-support.md?tabs=stt) is used by default. The base model works very well in most speech recognition scenarios.
A custom model can be used to augment the base model to improve recognition of vocabulary specific to the application's domain by providing text data to train the model. It can also be used to improve recognition based on the specific audio conditions of the application by providing audio data with reference transcriptions.
-> [!NOTE]
-> You pay to use Custom Speech models, but you are not charged for training a model. Usage includes hosting of your deployed custom endpoint in addition to using the endpoint for speech-to-text. For more information, see [Speech service pricing](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/).
- ## How does it work? With Custom Speech, you can upload your own data, test and train a custom model, compare accuracy between models, and deploy a model to a custom endpoint.
Here's more information about the sequence of steps shown in the previous diagra
1. [Test recognition quality](how-to-custom-speech-inspect-data.md). Use the [Speech Studio](https://aka.ms/speechstudio/customspeech) to play back uploaded audio and inspect the speech recognition quality of your test data. 1. [Test model quantitatively](how-to-custom-speech-evaluate-data.md). Evaluate and improve the accuracy of the speech-to-text model. The Speech service provides a quantitative word error rate (WER), which you can use to determine if additional training is required. 1. [Train a model](how-to-custom-speech-train-model.md). Provide written transcripts and related text, along with the corresponding audio data. Testing a model before and after training is optional but recommended.
+ > [!NOTE]
+ > You pay for Custom Speech model usage and endpoint hosting, but you are not charged for training a model.
1. [Deploy a model](how-to-custom-speech-deploy-model.md). Once you're satisfied with the test results, deploy the model to a custom endpoint. With the exception of [batch transcription](batch-transcription.md), you must deploy a custom endpoint to use a Custom Speech model.
+ > [!TIP]
+ > A hosted deployment endpoint isn't required to use Custom Speech with the [Batch transcription API](batch-transcription.md). You can conserve resources if the custom speech model is only used for batch transcription. For more information, see [Speech service pricing](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/).
## Next steps
cognitive-services Embedded Speech https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/embedded-speech.md
The following [text-to-speech](text-to-speech.md) locales and voices are availab
| `it-IT` | Italian (Italy) | `it-IT-IsabellaNeural` (Female)<br/>`it-IT-DiegoNeural` (Male)| | `ja-JP` | Japanese (Japan) | `ja-JP-NanamiNeural` (Female)<br/>`ja-JP-KeitaNeural` (Male)| | `ko-KR` | Korean (Korea) | `ko-KR-SunHiNeural` (Female)<br/>`ko-KR-InJoonNeural` (Male)|
-| `pr-BR` | Portuguese (Brazil) | `pt-BR-FranciscaNeural` (Female)<br/>`pt-BR-AntonioNeural` (Male)|
+| `pt-BR` | Portuguese (Brazil) | `pt-BR-FranciscaNeural` (Female)<br/>`pt-BR-AntonioNeural` (Male)|
| `zh-CN` | Chinese (Mandarin, Simplified) | `zh-CN-XiaoxiaoNeural` (Female)<br/>`zh-CN-YunxiNeural` (Male)| ## Embedded speech configuration
For cloud speech, you use the `SpeechConfig` object, as shown in the [speech-to-
## Next steps - [Read about text to speech on devices for disconnected and hybrid scenarios](https://techcommunity.microsoft.com/t5/ai-cognitive-services-blog/azure-neural-tts-now-available-on-devices-for-disconnected-and/ba-p/3716797)-- [Limited Access to embedded Speech](/legal/cognitive-services/speech-service/embedded-speech/limited-access-embedded-speech?context=/azure/cognitive-services/speech-service/context/context)
+- [Limited Access to embedded Speech](/legal/cognitive-services/speech-service/embedded-speech/limited-access-embedded-speech?context=/azure/cognitive-services/speech-service/context/context)
cognitive-services Gaming Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/gaming-concepts.md
It's not unusual that players in the same game session natively speak different
For an example, see the [Speech translation quickstart](get-started-speech-translation.md). > [!NOTE]
-> Besides the Speech service, you can also use the [Translator service](../translator/translator-overview.md). To execute text translation between supported source and target languages in real time see [Text translation](../translator/text-translation-overview.md).
+> Besides the Speech service, you can also use the [Translator service](../translator/translator-overview.md). To execute text translation between supported source and target languages in real-time, see [Text translation](../translator/text-translation-overview.md).
## Next steps
cognitive-services How To Async Conversation Transcription https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-async-conversation-transcription.md
In this article, asynchronous Conversation Transcription is demonstrated using t
## Asynchronous vs. real-time + asynchronous
-With asynchronous transcription, you stream the conversation audio, but don't need a transcription returned in real time. Instead, after the audio is sent, use the `conversationId` of `Conversation` to query for the status of the asynchronous transcription. When the asynchronous transcription is ready, you'll get a `RemoteConversationTranscriptionResult`.
+With asynchronous transcription, you stream the conversation audio, but don't need a transcription returned in real-time. Instead, after the audio is sent, use the `conversationId` of `Conversation` to query for the status of the asynchronous transcription. When the asynchronous transcription is ready, you'll get a `RemoteConversationTranscriptionResult`.
-With real-time plus asynchronous, you get the transcription in real time, but also get the transcription by querying with the `conversationId` (similar to asynchronous scenario).
+With real-time plus asynchronous, you get the transcription in real-time, but also get the transcription by querying with the `conversationId` (similar to the asynchronous scenario).
Two steps are required to accomplish asynchronous transcription. The first step is to upload the audio, choosing either asynchronous only or real-time plus asynchronous. The second step is to get the transcription results.
cognitive-services How To Audio Content Creation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-audio-content-creation.md
You can use the [Audio Content Creation](https://speech.microsoft.com/portal/aud
Build highly natural audio content for a variety of scenarios, such as audiobooks, news broadcasts, video narrations, and chat bots. With Audio Content Creation, you can efficiently fine-tune text-to-speech voices and design customized audio experiences.
-The tool is based on [Speech Synthesis Markup Language (SSML)](speech-synthesis-markup.md). It allows you to adjust text-to-speech output attributes in real time or batch synthesis, such as voice characters, voice styles, speaking speed, pronunciation, and prosody.
+The tool is based on [Speech Synthesis Markup Language (SSML)](speech-synthesis-markup.md). It allows you to adjust text-to-speech output attributes in real-time or batch synthesis, such as voice characters, voice styles, speaking speed, pronunciation, and prosody.
- No-code approach: You can use the Audio Content Creation tool for text-to-speech synthesis without writing any code. The output audio might be the final deliverable that you want. For example, you can use the output audio for a podcast or a video narration. - Developer-friendly: You can listen to the output audio and adjust the SSML to improve speech synthesis. Then you can use the [Speech SDK](speech-sdk.md) or [Speech CLI](spx-basics.md) to integrate the SSML into your applications. For example, you can use the SSML for building a chat bot.
cognitive-services How To Custom Speech Deploy Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-speech-deploy-model.md
zone_pivot_groups: speech-studio-cli-rest
# Deploy a Custom Speech model
-In this article, you'll learn how to deploy an endpoint for a Custom Speech model. With the exception of [batch transcription](batch-transcription.md), you must deploy a custom endpoint to use a Custom Speech model.
+In this article, you'll learn how to deploy an endpoint for a Custom Speech model. With the exception of [batch transcription](batch-transcription.md), you must deploy a custom endpoint to use a Custom Speech model.
-> [!NOTE]
-> You pay to use Custom Speech models, but you are not charged for training a model. Usage includes hosting of your deployed custom endpoint in addition to using the endpoint for speech-to-text. For more information, see [Speech service pricing](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/).
+> [!TIP]
+> A hosted deployment endpoint isn't required to use Custom Speech with the [Batch transcription API](batch-transcription.md). You can conserve resources if the [custom speech model](how-to-custom-speech-train-model.md) is only used for batch transcription. For more information, see [Speech service pricing](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/).
You can deploy an endpoint for a base or custom model, and then [update](#change-model-and-redeploy-endpoint) the endpoint later to use a better trained model.
cognitive-services How To Custom Speech Train Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-speech-train-model.md
zone_pivot_groups: speech-studio-cli-rest
In this article, you'll learn how to train a custom model to improve recognition accuracy from the Microsoft base model. The speech recognition accuracy and quality of a Custom Speech model will remain consistent, even when a new base model is released. > [!NOTE]
-> You pay to use Custom Speech models, but you are not charged for training a model. Usage includes hosting of your deployed custom endpoint in addition to using the endpoint for speech-to-text. For more information, see [Speech service pricing](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/).
+> You pay for Custom Speech model usage and [endpoint hosting](how-to-custom-speech-deploy-model.md), but you are not charged for training a model.
Training a model is typically an iterative process. You will first select a base model that is the starting point for a new model. You train a model with [datasets](./how-to-custom-speech-test-and-train.md) that can include text and audio, and then you test. If the recognition quality or accuracy doesn't meet your requirements, you can create a new model with additional or modified training data, and then test again.
cognitive-services How To Get Speech Session Id https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-get-speech-session-id.md
If you use [Speech-to-text](speech-to-text.md) and need to open a support case, you are often asked to provide a *Session ID* or *Transcription ID* of the problematic transcriptions to debug the issue. This article explains how to get these IDs. > [!NOTE]
-> * *Session ID* is used in [Online transcription](get-started-speech-to-text.md) and [Translation](speech-translation.md).
+> * *Session ID* is used in [real-time speech to text](get-started-speech-to-text.md) and [speech translation](speech-translation.md).
> * *Transcription ID* is used in [Batch transcription](batch-transcription.md).
-## Getting Session ID for Online transcription and Translation. (Speech SDK and REST API for short audio).
+## Getting Session ID
-[Online transcription](get-started-speech-to-text.md) and [Translation](speech-translation.md) use either the [Speech SDK](speech-sdk.md) or the [REST API for short audio](rest-speech-to-text-short.md).
+[Real-time speech to text](get-started-speech-to-text.md) and [speech translation](speech-translation.md) use either the [Speech SDK](speech-sdk.md) or the [REST API for short audio](rest-speech-to-text-short.md).
To get the Session ID when using the SDK, you need to:
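For example, one way to surface it in C# is to subscribe to the recognizer's `SessionStarted` event (a sketch that assumes an initialized `SpeechRecognizer` named `recognizer`):

```csharp
using System;
using Microsoft.CognitiveServices.Speech;

// Assumes recognizer is an initialized SpeechRecognizer.
recognizer.SessionStarted += (s, e) =>
{
    // Log the Session ID so it can be provided in a support case.
    Console.WriteLine($"SessionId: {e.SessionId}");
};
```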
https://westeurope.stt.speech.microsoft.com/speech/recognition/conversation/cogn
## Getting Transcription ID for Batch transcription
-[Batch transcription](batch-transcription.md) uses [Speech-to-text REST API](rest-speech-to-text.md).
+The [Batch transcription API](batch-transcription.md) is a subset of the [Speech-to-text REST API](rest-speech-to-text.md).
-The required Transcription ID is the GUID value contained in the main `self` element of the Response body returned by requests, like [Transcriptions_Create](https://centralus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Transcriptions_Create).
+The required Transcription ID is the GUID value contained in the main `self` element of the Response body returned by requests, like [Transcriptions_Create](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Transcriptions_Create).
-The example below is the Response body of a `Create Transcription` request. GUID value `537216f8-0620-4a10-ae2d-00bdb423b36f` found in the first `self` element is the Transcription ID.
+The following is an example response body of a [Transcriptions_Create](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Transcriptions_Create) request. The GUID value `537216f8-0620-4a10-ae2d-00bdb423b36f` found in the first `self` element is the Transcription ID.
```json {
- "self": "https://japaneast.api.cognitive.microsoft.com/speechtotext/v3.1/transcriptions/537216f8-0620-4a10-ae2d-00bdb423b36f",
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/transcriptions/537216f8-0620-4a10-ae2d-00bdb423b36f",
"model": {
- "self": "https://japaneast.api.cognitive.microsoft.com/speechtotext/v3.1/models/base/824bd685-2d45-424d-bb65-c3fe99e32927"
+ "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/models/base/824bd685-2d45-424d-bb65-c3fe99e32927"
}, "links": {
- "files": "https://japaneast.api.cognitive.microsoft.com/speechtotext/v3.1/transcriptions/537216f8-0620-4a10-ae2d-00bdb423b36f/files"
+ "files": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/transcriptions/537216f8-0620-4a10-ae2d-00bdb423b36f/files"
}, "properties": { "diarizationEnabled": false,
The example below is the Response body of a `Create Transcription` request. GUID
} ``` > [!NOTE]
-> Use the same technique to determine different IDs required for debugging issues related to [Custom Speech](custom-speech-overview.md), like uploading a dataset using [Datasets_Create](https://centralus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Datasets_Create) request.
+> Use the same technique to determine different IDs required for debugging issues related to [Custom Speech](custom-speech-overview.md), like uploading a dataset using [Datasets_Create](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Datasets_Create) request.
> [!NOTE]
-> You can also see all existing transcriptions and their Transcription IDs for a given Speech resource by using [Transcriptions_Get](https://centralus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Transcriptions_Get) request.
+> You can also see all existing transcriptions and their Transcription IDs for a given Speech resource by using [Transcriptions_Get](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Transcriptions_Get) request.
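For example, here's a hedged C# sketch of calling that operation directly; the region in the URL and the subscription key are placeholders:

```csharp
using System;
using System.Net.Http;
using System.Threading.Tasks;

class Program
{
    static async Task Main()
    {
        using var http = new HttpClient();
        http.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", "YourSubscriptionKey");

        // Lists transcriptions (and their IDs) for the Speech resource in the given region.
        var json = await http.GetStringAsync(
            "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/transcriptions");
        Console.WriteLine(json);
    }
}
```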
cognitive-services How To Lower Speech Synthesis Latency https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-lower-speech-synthesis-latency.md
In a service scenario, you can forward the audio chunks immediately to your clie
::: zone pivot="programming-language-csharp"
-You can use the [`PullAudioOutputStream`](/dotnet/api/microsoft.cognitiveservices.speech.audio.pullaudiooutputstream), [`PushAudioOutputStream`](/dotnet/api/microsoft.cognitiveservices.speech.audio.pushaudiooutputstream), [`Synthesizing` event](/dotnet/api/microsoft.cognitiveservices.speech.speechsynthesizer.synthesizing), and [`AudioDateStream`](/dotnet/api/microsoft.cognitiveservices.speech.audiodatastream) of the Speech SDK to enable streaming.
+You can use the [`PullAudioOutputStream`](/dotnet/api/microsoft.cognitiveservices.speech.audio.pullaudiooutputstream), [`PushAudioOutputStream`](/dotnet/api/microsoft.cognitiveservices.speech.audio.pushaudiooutputstream), [`Synthesizing` event](/dotnet/api/microsoft.cognitiveservices.speech.speechsynthesizer.synthesizing), and [`AudioDataStream`](/dotnet/api/microsoft.cognitiveservices.speech.audiodatastream) of the Speech SDK to enable streaming.
-Taking `AudioDateStream` as an example:
+Taking `AudioDataStream` as an example:
```csharp using (var synthesizer = new SpeechSynthesizer(config, null as AudioConfig))
using (var synthesizer = new SpeechSynthesizer(config, null as AudioConfig))
::: zone pivot="programming-language-cpp"
-You can use the [`PullAudioOutputStream`](/cpp/cognitive-services/speech/audio-pullaudiooutputstream), [`PushAudioOutputStream`](/cpp/cognitive-services/speech/audio-pushaudiooutputstream), the [`Synthesizing` event](/cpp/cognitive-services/speech/speechsynthesizer#synthesizing), and [`AudioDateStream`](/cpp/cognitive-services/speech/audiodatastream) of the Speech SDK to enable streaming.
+You can use the [`PullAudioOutputStream`](/cpp/cognitive-services/speech/audio-pullaudiooutputstream), [`PushAudioOutputStream`](/cpp/cognitive-services/speech/audio-pushaudiooutputstream), the [`Synthesizing` event](/cpp/cognitive-services/speech/speechsynthesizer#synthesizing), and [`AudioDataStream`](/cpp/cognitive-services/speech/audiodatastream) of the Speech SDK to enable streaming.
-Taking `AudioDateStream` as an example:
+Taking `AudioDataStream` as an example:
```cpp auto synthesizer = SpeechSynthesizer::FromConfig(config, nullptr);
while ((filledSize = audioDataStream->ReadData(buffer, sizeof(buffer))) > 0)
::: zone pivot="programming-language-java"
-You can use the [`PullAudioOutputStream`](/java/api/com.microsoft.cognitiveservices.speech.audio.pullaudiooutputstream), [`PushAudioOutputStream`](/java/api/com.microsoft.cognitiveservices.speech.audio.pushaudiooutputstream), the [`Synthesizing` event](/java/api/com.microsoft.cognitiveservices.speech.speechsynthesizer.synthesizing#com_microsoft_cognitiveservices_speech_SpeechSynthesizer_Synthesizing), and [`AudioDateStream`](/java/api/com.microsoft.cognitiveservices.speech.audiodatastream) of the Speech SDK to enable streaming.
+You can use the [`PullAudioOutputStream`](/java/api/com.microsoft.cognitiveservices.speech.audio.pullaudiooutputstream), [`PushAudioOutputStream`](/java/api/com.microsoft.cognitiveservices.speech.audio.pushaudiooutputstream), the [`Synthesizing` event](/java/api/com.microsoft.cognitiveservices.speech.speechsynthesizer.synthesizing#com_microsoft_cognitiveservices_speech_SpeechSynthesizer_Synthesizing), and [`AudioDataStream`](/java/api/com.microsoft.cognitiveservices.speech.audiodatastream) of the Speech SDK to enable streaming.
-Taking `AudioDateStream` as an example:
+Taking `AudioDataStream` as an example:
```java SpeechSynthesizer synthesizer = new SpeechSynthesizer(config, null);
while (filledSize > 0) {
::: zone pivot="programming-language-python"
-You can use the [`PullAudioOutputStream`](/python/api/azure-cognitiveservices-speech/azure.cognitiveservices.speech.audio.pullaudiooutputstream), [`PushAudioOutputStream`](/python/api/azure-cognitiveservices-speech/azure.cognitiveservices.speech.audio.pushaudiooutputstream), the [`Synthesizing` event](/python/api/azure-cognitiveservices-speech/azure.cognitiveservices.speech.speechsynthesizer#synthesizing), and [`AudioDateStream`](/python/api/azure-cognitiveservices-speech/azure.cognitiveservices.speech.audiodatastream) of the Speech SDK to enable streaming.
+You can use the [`PullAudioOutputStream`](/python/api/azure-cognitiveservices-speech/azure.cognitiveservices.speech.audio.pullaudiooutputstream), [`PushAudioOutputStream`](/python/api/azure-cognitiveservices-speech/azure.cognitiveservices.speech.audio.pushaudiooutputstream), the [`Synthesizing` event](/python/api/azure-cognitiveservices-speech/azure.cognitiveservices.speech.speechsynthesizer#synthesizing), and [`AudioDataStream`](/python/api/azure-cognitiveservices-speech/azure.cognitiveservices.speech.audiodatastream) of the Speech SDK to enable streaming.
-Taking `AudioDateStream` as an example:
+Taking `AudioDataStream` as an example:
```python speech_synthesizer = speechsdk.SpeechSynthesizer(speech_config=speech_config, audio_config=None)
while filled_size > 0:
You can use the [`SPXPullAudioOutputStream`](/objectivec/cognitive-services/speech/spxpullaudiooutputstream), [`SPXPushAudioOutputStream`](/objectivec/cognitive-services/speech/spxpushaudiooutputstream), the [`Synthesizing` event](/objectivec/cognitive-services/speech/spxspeechsynthesizer#addsynthesizingeventhandler), and [`SPXAudioDataStream`](/objectivec/cognitive-services/speech/spxaudiodatastream) of the Speech SDK to enable streaming.
-Taking `AudioDateStream` as an example:
+Taking `AudioDataStream` as an example:
```Objective-C SPXSpeechSynthesizer *synthesizer = [[SPXSpeechSynthesizer alloc] initWithSpeechConfiguration:speechConfig audioConfiguration:nil];
cognitive-services How To Pronunciation Assessment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-pronunciation-assessment.md
This table lists some of the key configuration parameters for pronunciation asse
| `ReferenceText` | The text that the pronunciation will be evaluated against. | | `GradingSystem` | The point system for score calibration. The `FivePoint` system gives a 0-5 floating point score, and `HundredMark` gives a 0-100 floating point score. Default: `FivePoint`. | | `Granularity` | Determines the lowest level of evaluation granularity. Scores for levels above or equal to the minimal value are returned. Accepted values are `Phoneme`, which shows the score on the full text, word, syllable, and phoneme level, `Syllable`, which shows the score on the full text, word, and syllable level, `Word`, which shows the score on the full text and word level, or `FullText`, which shows the score on the full text level only. The provided full reference text can be a word, sentence, or paragraph, and it depends on your input reference text. Default: `Phoneme`.|
-| `EnableMiscue` | Enables miscue calculation when the pronounced words are compared to the reference text. If this value is `True`, the `ErrorType` result value can be set to `Omission` or `Insertion` based on the comparison. Accepted values are `False` and `True`. Default: `False`. |
+| `EnableMiscue` | Enables miscue calculation when the pronounced words are compared to the reference text. If this value is `True`, the `ErrorType` result value can be set to `Omission` or `Insertion` based on the comparison. Accepted values are `False` and `True`. Default: `False`. To enable miscue calculation, set `EnableMiscue` to `True`, as shown in the code snippets below the table.|
| `ScenarioId` | A GUID indicating a customized point system. | You must create a `PronunciationAssessmentConfig` object with the reference text, grading system, and granularity. Enabling miscue and other configuration settings are optional.
var pronunciationAssessmentConfig = new PronunciationAssessmentConfig(
::: zone pivot="programming-language-cpp" ```cpp
-auto pronunciationAssessmentConfig = PronunciationAssessmentConfig::CreateFromJson("{\"referenceText\":\"good morning\",\"gradingSystem\":\"HundredMark\",\"granularity\":\"Phoneme\"}");
+auto pronunciationAssessmentConfig = PronunciationAssessmentConfig::CreateFromJson("{\"referenceText\":\"good morning\",\"gradingSystem\":\"HundredMark\",\"granularity\":\"Phoneme\",\"enableMiscue\":true}");
``` ::: zone-end
auto pronunciationAssessmentConfig = PronunciationAssessmentConfig::CreateFromJs
::: zone pivot="programming-language-java" ```Java
-PronunciationAssessmentConfig pronunciationAssessmentConfig = PronunciationAssessmentConfig.fromJson("{\"referenceText\":\"good morning\",\"gradingSystem\":\"HundredMark\",\"granularity\":\"Phoneme\"}");
+PronunciationAssessmentConfig pronunciationAssessmentConfig = PronunciationAssessmentConfig.fromJson("{\"referenceText\":\"good morning\",\"gradingSystem\":\"HundredMark\",\"granularity\":\"Phoneme\",\"enableMiscue\":true}");
``` ::: zone-end
PronunciationAssessmentConfig pronunciationAssessmentConfig = PronunciationAsses
::: zone pivot="programming-language-python" ```Python
-pronunciation_assessment_config = speechsdk.PronunciationAssessmentConfig(json_string="{\"referenceText\":\"good morning\",\"gradingSystem\":\"HundredMark\",\"granularity\":\"Phoneme\"}")
+pronunciation_assessment_config = speechsdk.PronunciationAssessmentConfig(json_string="{\"referenceText\":\"good morning\",\"gradingSystem\":\"HundredMark\",\"granularity\":\"Phoneme\",\"EnableMiscue\":true}")
``` ::: zone-end
pronunciation_assessment_config = speechsdk.PronunciationAssessmentConfig(json_s
::: zone pivot="programming-language-javascript" ```JavaScript
-var pronunciationAssessmentConfig = SpeechSDK.PronunciationAssessmentConfig.fromJSON("{\"referenceText\":\"good morning\",\"gradingSystem\":\"HundredMark\",\"granularity\":\"Phoneme\"}");
+var pronunciationAssessmentConfig = SpeechSDK.PronunciationAssessmentConfig.fromJSON("{\"referenceText\":\"good morning\",\"gradingSystem\":\"HundredMark\",\"granularity\":\"Phoneme\",\"EnableMiscue\":true}");
``` ::: zone-end
cognitive-services How To Use Audio Input Streams https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-use-audio-input-streams.md
Title: Speech SDK audio input stream concepts
description: An overview of the capabilities of the Speech SDK audio input stream API. -+ Previously updated : 06/13/2022- Last updated : 04/12/2023+ ms.devlang: csharp
The Speech SDK provides a way to stream audio into the recognizer as an alternative to microphone or file input.
-The following steps are required when you use audio input streams:
+This guide describes how to use audio input streams. It also describes some of the requirements and limitations of the audio input stream.
-- Identify the format of the audio stream. The format must be supported by the Speech SDK and the Azure Cognitive Services Speech service. Currently, only the following configuration is supported:
+See more examples of speech-to-text recognition with an audio input stream on [GitHub](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/csharp/sharedcontent/console/speech_recognition_samples.cs).
- Audio samples are:
+## Identify the format of the audio stream
- - PCM format (int-16)
- - One channel
- - 16 bits per sample, 8,000 or 16,000 samples per second (16,000 bytes or 32,000 bytes per second)
- - Two-block aligned (16 bit including padding for a sample)
+Identify the format of the audio stream. The format must be supported by the Speech SDK and the Azure Cognitive Services Speech service.
- The corresponding code in the SDK to create the audio format looks like this example:
+Supported audio samples are:
- ```csharp
- byte channels = 1;
- byte bitsPerSample = 16;
- int samplesPerSecond = 16000; // or 8000
- var audioFormat = AudioStreamFormat.GetWaveFormatPCM(samplesPerSecond, bitsPerSample, channels);
- ```
+ - PCM format (int-16)
+ - One channel
+ - 16 bits per sample, 8,000 or 16,000 samples per second (16,000 bytes or 32,000 bytes per second)
+ - Two-block aligned (16 bit including padding for a sample)
-- Make sure that your code provides the RAW audio data according to these specifications. Also, make sure that 16-bit samples arrive in little-endian format. Signed samples are also supported. If your audio source data doesn't match the supported formats, the audio must be transcoded into the required format.
+The corresponding code in the SDK to create the audio format looks like this example:
-- Create your own audio input stream class derived from `PullAudioInputStreamCallback`. Implement the `Read()` and `Close()` members. The exact function signature is language-dependent, but the code looks similar to this code sample:
+```csharp
+byte channels = 1;
+byte bitsPerSample = 16;
+int samplesPerSecond = 16000; // or 8000
+var audioFormat = AudioStreamFormat.GetWaveFormatPCM(samplesPerSecond, bitsPerSample, channels);
+```
- ```csharp
- public class ContosoAudioStream : PullAudioInputStreamCallback {
- ContosoConfig config;
+Make sure that your code provides the RAW audio data according to these specifications. Also, make sure that 16-bit samples arrive in little-endian format. Signed samples are also supported. If your audio source data doesn't match the supported formats, the audio must be transcoded into the required format.
- public ContosoAudioStream(const ContosoConfig& config) {
- this.config = config;
- }
+## Create your own audio input stream class
- public int Read(byte[] buffer, uint size) {
- // Returns audio data to the caller.
- // E.g., return read(config.YYY, buffer, size);
- }
+You can create your own audio input stream class derived from `PullAudioInputStreamCallback`. Implement the `Read()` and `Close()` members. The exact function signature is language-dependent, but the code looks similar to this code sample:
- public void Close() {
- // Close and clean up resources.
- }
- };
- ```
+```csharp
+public class ContosoAudioStream : PullAudioInputStreamCallback {
+ ContosoConfig config;
-- Create an audio configuration based on your audio format and input stream. Pass in both your regular speech configuration and the audio input configuration when you create your recognizer. For example:
+ public ContosoAudioStream(const ContosoConfig& config) {
+ this.config = config;
+ }
- ```csharp
- var audioConfig = AudioConfig.FromStreamInput(new ContosoAudioStream(config), audioFormat);
+ public int Read(byte[] buffer, uint size) {
+ // Returns audio data to the caller.
+ // E.g., return read(config.YYY, buffer, size);
+ }
- var speechConfig = SpeechConfig.FromSubscription(...);
- var recognizer = new SpeechRecognizer(speechConfig, audioConfig);
+ public void Close() {
+ // Close and clean up resources.
+ }
+};
+```
- // Run stream through recognizer.
- var result = await recognizer.RecognizeOnceAsync();
+Create an audio configuration based on your audio format and input stream. Pass in both your regular speech configuration and the audio input configuration when you create your recognizer. For example:
+
+```csharp
+var audioConfig = AudioConfig.FromStreamInput(new ContosoAudioStream(config), audioFormat);
+
+var speechConfig = SpeechConfig.FromSubscription(...);
+var recognizer = new SpeechRecognizer(speechConfig, audioConfig);
+
+// Run stream through recognizer.
+var result = await recognizer.RecognizeOnceAsync();
+
+var text = result.GetText();
+```
- var text = result.GetText();
- ```
## Next steps - [Create a free Azure account](https://azure.microsoft.com/free/cognitive-services/)-- [See how to recognize speech in C#](./get-started-speech-to-text.md?pivots=programming-language-csharp&tabs=dotnet)
+- [See how to recognize speech in C#](./get-started-speech-to-text.md?pivots=programming-language-csharp&tabs=dotnet)
cognitive-services Ingestion Client https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/ingestion-client.md
See the [Getting Started Guide for the Ingestion Client](https://github.com/Azur
The Ingestion Client works by connecting a dedicated [Azure storage](https://azure.microsoft.com/product-categories/storage/) account to custom [Azure Functions](https://azure.microsoft.com/services/functions/) in a serverless fashion to pass transcription requests to the service. The transcribed audio files land in the dedicated [Azure Storage container](https://azure.microsoft.com/product-categories/storage/). > [!IMPORTANT]
-> Pricing varies depending on the mode of operation (batch vs real time) as well as the Azure Function SKU selected. By default the tool will create a Premium Azure Function SKU to handle large volume. Visit the [Pricing](https://azure.microsoft.com/pricing/details/functions/) page for more information.
+> Pricing varies depending on the mode of operation (batch vs real-time) as well as the Azure Function SKU selected. By default the tool will create a Premium Azure Function SKU to handle large volume. Visit the [Pricing](https://azure.microsoft.com/pricing/details/functions/) page for more information.
Internally, the tool uses Speech and Language services, and follows best practices to handle scale-up, retries and failover. The following schematic describes the resources and connections.
cognitive-services Language Identification https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/language-identification.md
speechRecognizer.recognizeOnceAsync((result: SpeechSDK.SpeechRecognitionResult)
### Using Speech-to-text custom models > [!NOTE]
-> Language detection with custom models can be used in OnLine transcription only. Batch transcription supports language detection for base models.
+> Language detection with custom models can only be used with real-time speech to text and speech translation. Batch transcription only supports language detection for base models.
::: zone pivot="programming-language-csharp" This sample shows how to use language detection with a custom endpoint. If the detected language is `en-US`, then the default model is used. If the detected language is `fr-FR`, then the custom model endpoint is used. For more information, see [Deploy a Custom Speech model](how-to-custom-speech-deploy-model.md).
var autoDetectSourceLanguageConfig = SpeechSDK.AutoDetectSourceLanguageConfig.fr
To identify languages in [Batch transcription](batch-transcription.md), you need to use `languageIdentification` property in the body of your [transcription REST request](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Transcriptions_Create). The example in this section shows the usage of `languageIdentification` property with four candidate languages. > [!WARNING]
-> Batch transcription supports language identification for base models only. If both language identification and custom model usage are specified in the transcription request, the service will automatically fall back to the base models for the specified candidate languages. This may result in unexpected recognition results.
+> Batch transcription only supports language identification for base models. If both language identification and a custom model are specified in the transcription request, the service will fall back to use the base models for the specified candidate languages. This may result in unexpected recognition results.
>
-> If your scenario requires both language identification and custom models, use [OnLine transcription](#using-speech-to-text-custom-models).
+> If your speech to text scenario requires both language identification and custom models, use [real-time speech to text](#using-speech-to-text-custom-models) instead of batch transcription.
```json {
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/overview.md
Speech feature summaries are provided below with links for more information.
### Speech-to-text
-Use [speech-to-text](speech-to-text.md) to transcribe audio into text, either in real time or asynchronously.
+Use [speech-to-text](speech-to-text.md) to transcribe audio into text, either in [real-time](#real-time-speech-to-text) or asynchronously with [batch transcription](#batch-transcription).
> [!TIP]
-> You can try speech-to-text in [Speech Studio](https://aka.ms/speechstudio/speechtotexttool) without signing up or writing any code.
+> You can try real-time speech-to-text in [Speech Studio](https://aka.ms/speechstudio/speechtotexttool) without signing up or writing any code.
-Convert audio to text from a range of sources, including microphones, audio files, and blob storage. Use speaker diarisation to determine who said what and when. Get readable transcripts with automatic formatting and punctuation.
+Convert audio to text from a range of sources, including microphones, audio files, and blob storage. Use speaker diarization to determine who said what and when. Get readable transcripts with automatic formatting and punctuation.
The base model may not be sufficient if the audio contains ambient noise or includes a lot of industry and domain-specific jargon. In these cases, you can create and train [custom speech models](custom-speech-overview.md) with acoustic, language, and pronunciation data. Custom speech models are private and can offer a competitive advantage.
+### Real-time speech-to-text
+
+With [real-time speech-to-text](get-started-speech-to-text.md), the audio is transcribed as speech is recognized from a microphone or file. Use real-time speech-to-text for applications that need to transcribe audio in real-time such as:
+- Transcriptions, captions, or subtitles for live meetings
+- Contact center agent assist
+- Dictation
+- Voice agents
+- Pronunciation assessment
+
+### Batch transcription
+
+[Batch transcription](batch-transcription.md) is used to transcribe a large amount of audio in storage. You can point to audio files with a shared access signature (SAS) URI and asynchronously receive transcription results. Use batch transcription for applications that need to transcribe audio in bulk such as:
+- Transcriptions, captions, or subtitles for pre-recorded audio
+- Contact center post-call analytics
+- Diarization
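To make the batch transcription flow above concrete, a request to create a transcription might carry a JSON body along these lines (a sketch; the SAS URI and display name are placeholders):

```json
{
  "contentUrls": [
    "https://<storage-account>.blob.core.windows.net/<container>/<audio-file>.wav?<SAS-token>"
  ],
  "locale": "en-US",
  "displayName": "My batch transcription"
}
```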
+ ### Text-to-speech With [text to speech](text-to-speech.md), you can convert input text into humanlike synthesized speech. Use neural voices, which are humanlike voices powered by deep neural networks. Use the [Speech Synthesis Markup Language (SSML)](speech-synthesis-markup.md) to fine-tune the pitch, pronunciation, speaking rate, volume, and more.
cognitive-services Speech Container Howto https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-container-howto.md
With Speech containers, you can build a speech application architecture that's o
| Container | Features | Supported versions and locales | |--|--|--|
-| Speech-to-text | Analyzes sentiment and transcribes continuous real-time speech or batch audio recordings with intermediate results. | Latest: 3.12.0<br/><br/>For all supported versions and locales, see the [Microsoft Container Registry (MCR)](https://mcr.microsoft.com/product/azure-cognitive-services/speechservices/speech-to-text/tags) and [JSON tags](https://mcr.microsoft.com/v2/azure-cognitive-services/speechservices/speech-to-text/tags/list).|
-| Custom speech-to-text | Using a custom model from the [Custom Speech portal](https://speech.microsoft.com/customspeech), transcribes continuous real-time speech or batch audio recordings into text with intermediate results. | Latest: 3.12.0<br/><br/>For all supported versions and locales, see the [Microsoft Container Registry (MCR)](https://mcr.microsoft.com/product/azure-cognitive-services/speechservices/custom-speech-to-text/tags) and [JSON tags](https://mcr.microsoft.com/v2/azure-cognitive-services/speechservices/speech-to-text/tags/list). |
+| Speech-to-text | Analyzes sentiment and transcribes continuous real-time speech or batch audio recordings with intermediate results. | Latest: 3.13.0<br/><br/>For all supported versions and locales, see the [Microsoft Container Registry (MCR)](https://mcr.microsoft.com/product/azure-cognitive-services/speechservices/speech-to-text/tags) and [JSON tags](https://mcr.microsoft.com/v2/azure-cognitive-services/speechservices/speech-to-text/tags/list).|
+| Custom speech-to-text | Using a custom model from the [Custom Speech portal](https://speech.microsoft.com/customspeech), transcribes continuous real-time speech or batch audio recordings into text with intermediate results. | Latest: 3.13.0<br/><br/>For all supported versions and locales, see the [Microsoft Container Registry (MCR)](https://mcr.microsoft.com/product/azure-cognitive-services/speechservices/custom-speech-to-text/tags) and [JSON tags](https://mcr.microsoft.com/v2/azure-cognitive-services/speechservices/speech-to-text/tags/list). |
| Speech language identification | Detects the language spoken in audio files. | Latest: 1.11.0<sup>1</sup><br/><br/>For all supported versions and locales, see the [Microsoft Container Registry (MCR)](https://mcr.microsoft.com/product/azure-cognitive-services/speechservices/language-detection/tags) and [JSON tags](https://mcr.microsoft.com/v2/azure-cognitive-services/speechservices/language-detection/tags/list). |
-| Neural text-to-speech | Converts text to natural-sounding speech by using deep neural network technology, which allows for more natural synthesized speech. | Latest: 2.11.0<br/><br/>For all supported versions and locales, see the [Microsoft Container Registry (MCR)](https://mcr.microsoft.com/product/azure-cognitive-services/speechservices/neural-text-to-speech/tags) and [JSON tags](https://mcr.microsoft.com/v2/azure-cognitive-services/speechservices/neural-text-to-speech/tags/list). |
+| Neural text-to-speech | Converts text to natural-sounding speech by using deep neural network technology, which allows for more natural synthesized speech. | Latest: 2.12.0<br/><br/>For all supported versions and locales, see the [Microsoft Container Registry (MCR)](https://mcr.microsoft.com/product/azure-cognitive-services/speechservices/neural-text-to-speech/tags) and [JSON tags](https://mcr.microsoft.com/v2/azure-cognitive-services/speechservices/neural-text-to-speech/tags/list). |
<sup>1</sup> The container is available in public preview. Containers in preview are still under development and don't meet Microsoft's stability and support requirements.
cognitive-services Speech Services Private Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-services-private-link.md
Speech-to-text has two REST APIs. Each API serves a different purpose, uses diff
The Speech-to-text REST APIs are: - [Speech-to-text REST API](rest-speech-to-text.md), which is used for [Batch transcription](batch-transcription.md) and [Custom Speech](custom-speech-overview.md). -- [Speech-to-text REST API for short audio](rest-speech-to-text-short.md), which is used for online transcription
+- [Speech-to-text REST API for short audio](rest-speech-to-text-short.md), which is used for real-time speech to text.
Usage of the Speech-to-text REST API for short audio and the Text-to-speech REST API in the private endpoint scenario is the same. It's equivalent to the [Speech SDK case](#speech-resource-with-a-custom-domain-name-and-a-private-endpoint-usage-with-the-speech-sdk) described later in this article.
cognitive-services Speech Services Quotas And Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-services-quotas-and-limits.md
For the free (F0) pricing tier, see also the monthly allowances at the [pricing
The following sections provide you with a quick guide to the quotas and limits that apply to the Speech service.
-For information about adjustable quotas for Standard (S0) Speech resources, see [additional explanations](#detailed-description-quota-adjustment-and-best-practices), [best practices](#general-best-practices-to-mitigate-throttling-during-autoscaling), and [adjustment instructions](#speech-to-text-increase-online-transcription-concurrent-request-limit). The quotas and limits for Free (F0) Speech resources aren't adjustable.
+For information about adjustable quotas for Standard (S0) Speech resources, see [additional explanations](#detailed-description-quota-adjustment-and-best-practices), [best practices](#general-best-practices-to-mitigate-throttling-during-autoscaling), and [adjustment instructions](#speech-to-text-increase-real-time-speech-to-text-concurrent-request-limit). The quotas and limits for Free (F0) Speech resources aren't adjustable.
### Speech-to-text quotas and limits per resource This section describes speech-to-text quotas and limits per Speech resource. Unless otherwise specified, the limits aren't adjustable.
-#### Online transcription and speech translation
+#### Real-time speech to text and speech translation
-You can use online transcription with the [Speech SDK](speech-sdk.md) or the [speech-to-text REST API for short audio](rest-speech-to-text-short.md).
+You can use real-time speech-to-text with the [Speech SDK](speech-sdk.md) or the [speech-to-text REST API for short audio](rest-speech-to-text-short.md).
> [!IMPORTANT]
-> These limits apply to concurrent speech-to-text online transcription requests and speech translation requests combined. For example, if you have 60 concurrent speech-to-text requests and 40 concurrent speech translation requests, you'll reach the limit of 100 concurrent requests.
+> These limits apply to concurrent real-time speech-to-text requests and speech translation requests combined. For example, if you have 60 concurrent speech-to-text requests and 40 concurrent speech translation requests, you'll reach the limit of 100 concurrent requests.
| Quota | Free (F0) | Standard (S0) | |--|--|--|
-| Concurrent request limit - base model endpoint | 1 <br/><br/>This limit isn't adjustable. | 100 (default value)<br/><br/>The rate is adjustable for Standard (S0) resources. See [additional explanations](#detailed-description-quota-adjustment-and-best-practices), [best practices](#general-best-practices-to-mitigate-throttling-during-autoscaling), and [adjustment instructions](#speech-to-text-increase-online-transcription-concurrent-request-limit). |
-| Concurrent request limit - custom endpoint | 1 <br/><br/>This limit isn't adjustable. | 100 (default value)<br/><br/>The rate is adjustable for Standard (S0) resources. See [additional explanations](#detailed-description-quota-adjustment-and-best-practices), [best practices](#general-best-practices-to-mitigate-throttling-during-autoscaling), and [adjustment instructions](#speech-to-text-increase-online-transcription-concurrent-request-limit). |
+| Concurrent request limit - base model endpoint | 1 <br/><br/>This limit isn't adjustable. | 100 (default value)<br/><br/>The rate is adjustable for Standard (S0) resources. See [additional explanations](#detailed-description-quota-adjustment-and-best-practices), [best practices](#general-best-practices-to-mitigate-throttling-during-autoscaling), and [adjustment instructions](#speech-to-text-increase-real-time-speech-to-text-concurrent-request-limit). |
+| Concurrent request limit - custom endpoint | 1 <br/><br/>This limit isn't adjustable. | 100 (default value)<br/><br/>The rate is adjustable for Standard (S0) resources. See [additional explanations](#detailed-description-quota-adjustment-and-best-practices), [best practices](#general-best-practices-to-mitigate-throttling-during-autoscaling), and [adjustment instructions](#speech-to-text-increase-real-time-speech-to-text-concurrent-request-limit). |
#### Batch transcription
Some of the Speech service quotas are adjustable. This section provides addition
The following quotas are adjustable for Standard (S0) resources. The Free (F0) request limits aren't adjustable. -- Speech-to-text [concurrent request limit](#online-transcription-and-speech-translation) for base model endpoint and custom endpoint
+- Speech-to-text [concurrent request limit](#real-time-speech-to-text-and-speech-translation) for base model endpoint and custom endpoint
- Text-to-speech [maximum number of transactions per time period](#text-to-speech-quotas-and-limits-per-resource) for prebuilt neural voices and custom neural voices-- Speech translation [concurrent request limit](#online-transcription-and-speech-translation)
+- Speech translation [concurrent request limit](#real-time-speech-to-text-and-speech-translation)
Before requesting a quota increase (where applicable), ensure that it's necessary. Speech service uses autoscaling technologies to bring the required computational resources in on-demand mode. At the same time, Speech service tries to keep your costs low by not maintaining an excessive amount of hardware capacity.
To minimize issues related to throttling, it's a good idea to use the following
The next sections describe specific cases of adjusting quotas.
-### Speech-to-text: increase online transcription concurrent request limit
+### Speech-to-text: increase real-time speech-to-text concurrent request limit
-By default, the number of concurrent speech-to-text [online transcription requests and speech translation requests](#online-transcription-and-speech-translation) combined is limited to 100 per resource in the base model, and 100 per custom endpoint in the custom model. For the standard pricing tier, you can increase this amount. Before submitting the request, ensure that you're familiar with the material discussed earlier in this article, such as the best practices to mitigate throttling.
+By default, the number of concurrent real-time speech-to-text and speech translation [requests combined](#real-time-speech-to-text-and-speech-translation) is limited to 100 per resource in the base model, and 100 per custom endpoint in the custom model. For the standard pricing tier, you can increase this amount. Before submitting the request, ensure that you're familiar with the material discussed earlier in this article, such as the best practices to mitigate throttling.
>[!NOTE] > Concurrent request limits for base and custom models need to be adjusted separately. You can have a Speech service resource that's associated with many custom endpoints hosting many custom model deployments. As needed, the limit adjustments per custom endpoint must be requested separately.
cognitive-services Speech To Text https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-to-text.md
Previously updated : 06/13/2022 Last updated : 04/05/2023 keywords: speech to text, speech to text software
keywords: speech to text, speech to text software
# What is speech-to-text?
-In this overview, you learn about the benefits and capabilities of the speech-to-text feature of the Speech service, which is part of Azure Cognitive Services.
-
-Speech-to-text, also known as speech recognition, enables real-time or offline transcription of audio streams into text. For a full list of available speech-to-text languages, see [Language and voice support for the Speech service](language-support.md?tabs=stt).
+In this overview, you learn about the benefits and capabilities of the speech-to-text feature of the Speech service, which is part of Azure Cognitive Services. Speech-to-text can be used for [real-time](#real-time-speech-to-text) or [batch transcription](#batch-transcription) of audio streams into text.
> [!NOTE]
-> Microsoft uses the same recognition technology for Windows and Office products.
+> To compare pricing of [real-time](#real-time-speech-to-text) to [batch transcription](#batch-transcription), see [Speech service pricing](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/).
+
+For a full list of available speech-to-text languages, see [Language and voice support](language-support.md?tabs=stt).
-## Get started
+## Real-time speech-to-text
-To get started, try the [speech-to-text quickstart](get-started-speech-to-text.md). Speech-to-text is available via the [Speech SDK](speech-sdk.md), the [REST API](rest-speech-to-text.md), and the [Speech CLI](spx-overview.md).
+With real-time speech-to-text, the audio is transcribed as speech is recognized from a microphone or file. Use real-time speech-to-text for applications that need to transcribe audio in real-time such as:
+- Transcriptions, captions, or subtitles for live meetings
+- Contact center agent assist
+- Dictation
+- Voice agents
+- Pronunciation assessment
-In depth samples are available in the [Azure-Samples/cognitive-services-speech-sdk](https://aka.ms/csspeech/samples) repository on GitHub. There are samples for C# (including UWP, Unity, and Xamarin), C++, Java, JavaScript (including Browser and Node.js), Objective-C, Python, and Swift. Code samples for Go are available in the [Microsoft/cognitive-services-speech-sdk-go](https://github.com/Microsoft/cognitive-services-speech-sdk-go) repository on GitHub.
+Real-time speech to text is available via the [Speech SDK](speech-sdk.md) and the [Speech CLI](spx-overview.md).
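As a minimal, hedged C# sketch of real-time recognition from the default microphone (key and region are placeholders):

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.CognitiveServices.Speech;
using Microsoft.CognitiveServices.Speech.Audio;

class Program
{
    static async Task Main()
    {
        var speechConfig = SpeechConfig.FromSubscription("YourSubscriptionKey", "YourServiceRegion");

        // Recognize a single utterance from the default microphone.
        using var audioConfig = AudioConfig.FromDefaultMicrophoneInput();
        using var recognizer = new SpeechRecognizer(speechConfig, audioConfig);

        var result = await recognizer.RecognizeOnceAsync();
        Console.WriteLine($"Recognized: {result.Text}");
    }
}
```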
## Batch transcription
-Batch transcription is a set of [Speech-to-text REST API](rest-speech-to-text.md) operations that enable you to transcribe a large amount of audio in storage. You can point to audio files with a shared access signature (SAS) URI and asynchronously receive transcription results. For more information on how to use the batch transcription API, see [How to use batch transcription](batch-transcription.md) and [Batch transcription samples (REST)](https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/samples/batch).
+Batch transcription is used to transcribe a large amount of audio in storage. You can point to audio files with a shared access signature (SAS) URI and asynchronously receive transcription results. Use batch transcription for applications that need to transcribe audio in bulk such as:
+- Transcriptions, captions, or subtitles for pre-recorded audio
+- Contact center post-call analytics
+- Diarization
+
+Batch transcription is available via:
+- [Speech-to-text REST API](rest-speech-to-text.md): To get started, see [How to use batch transcription](batch-transcription.md) and [Batch transcription samples (REST)](https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/samples/batch).
+- The [Speech CLI](spx-overview.md) supports both real-time and batch transcription. For Speech CLI help with batch transcriptions, run the following command:
+ ```azurecli-interactive
+ spx help batch transcription
+ ```
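As a rough sketch of the REST API option above (not a definitive implementation), a batch transcription request might look like the following. The region, key, and SAS URL are placeholders, and the linked how-to covers the full workflow.

```js
// Hypothetical region, key, and SAS URL: replace with your own values.
// Requires Node.js 18+ for the built-in fetch API.
async function createBatchTranscription() {
  const response = await fetch(
    "https://<YOUR-REGION>.api.cognitive.microsoft.com/speechtotext/v3.1/transcriptions",
    {
      method: "POST",
      headers: {
        "Ocp-Apim-Subscription-Key": "<YOUR-SPEECH-KEY>",
        "Content-Type": "application/json"
      },
      body: JSON.stringify({
        displayName: "My batch transcription",
        locale: "en-US",
        contentUrls: ["https://<storage-account>.blob.core.windows.net/audio/sample.wav?<SAS-token>"]
      })
    }
  );
  const transcription = await response.json();
  console.log(transcription.self); // Poll this URL until the transcription status is "Succeeded".
}

createBatchTranscription();
```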
## Custom Speech
-The Azure speech-to-text service analyzes audio in real-time or batch to transcribe the spoken word into text. Out of the box, speech to text utilizes a Universal Language Model as a base model that is trained with Microsoft-owned data and reflects commonly used spoken language. This base model is pre-trained with dialects and phonetics representing a variety of common domains. The base model works well in most scenarios.
+With [Custom Speech](./custom-speech-overview.md), you can evaluate and improve the accuracy of speech recognition for your applications and products. A custom speech model can be used for [real-time speech-to-text](speech-to-text.md), [speech translation](speech-translation.md), and [batch transcription](batch-transcription.md).
+
+> [!TIP]
+> A [hosted deployment endpoint](how-to-custom-speech-deploy-model.md) isn't required to use Custom Speech with the [Batch transcription API](batch-transcription.md). You can conserve resources if the [custom speech model](how-to-custom-speech-train-model.md) is only used for batch transcription. For more information, see [Speech service pricing](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/).
+
+Out of the box, speech recognition utilizes a Universal Language Model as a base model that is trained with Microsoft-owned data and reflects commonly used spoken language. The base model is pre-trained with dialects and phonetics representing a variety of common domains. When you make a speech recognition request, the most recent base model for each [supported language](language-support.md?tabs=stt) is used by default. The base model works very well in most speech recognition scenarios.
-The base model may not be sufficient if the audio contains ambient noise or includes a lot of industry and domain-specific jargon. In these cases, building a custom speech model makes sense by training with additional data associated with that specific domain. You can create and train custom acoustic, language, and pronunciation models. For more information, see [Custom Speech](./custom-speech-overview.md) and [Speech-to-text REST API](rest-speech-to-text.md).
+A custom model can be used to augment the base model to improve recognition of vocabulary specific to the application by providing text data to train the model. It can also be used to improve recognition for the specific audio conditions of the application by providing audio data with reference transcriptions. For more information, see [Custom Speech](./custom-speech-overview.md) and [Speech-to-text REST API](rest-speech-to-text.md).
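For real-time recognition, a deployed custom model is typically selected by setting the endpoint ID on the speech configuration. The sketch below assumes the JavaScript Speech SDK; the key, region, file name, and endpoint ID are placeholders.

```js
// Hypothetical key, region, file name, and endpoint ID: replace with your own values.
const sdk = require("microsoft-cognitiveservices-speech-sdk");
const fs = require("fs");

const speechConfig = sdk.SpeechConfig.fromSubscription("<YOUR-SPEECH-KEY>", "<YOUR-REGION>");
// Point recognition at the deployed custom model instead of the default base model.
speechConfig.endpointId = "<YOUR-CUSTOM-ENDPOINT-ID>";

const audioConfig = sdk.AudioConfig.fromWavFileInput(fs.readFileSync("sample.wav"));
const recognizer = new sdk.SpeechRecognizer(speechConfig, audioConfig);

recognizer.recognizeOnceAsync(result => {
  console.log(`Recognized with custom model: ${result.text}`);
  recognizer.close();
});
```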
Customization options vary by language or locale. To verify support, see [Language and voice support for the Speech service](./language-support.md?tabs=stt). ## Next steps - [Get started with speech-to-text](get-started-speech-to-text.md)-- [Get the Speech SDK](speech-sdk.md)
+- [Create a batch transcription](batch-transcription-create.md)
cognitive-services Text To Speech https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/text-to-speech.md
Here's more information about neural text-to-speech features in the Speech servi
* **Real-time speech synthesis**: Use the [Speech SDK](./get-started-text-to-speech.md) or [REST API](rest-text-to-speech.md) to convert text-to-speech by using [prebuilt neural voices](language-support.md?tabs=tts) or [custom neural voices](custom-neural-voice.md).
-* **Asynchronous synthesis of long audio**: Use the [batch synthesis API](batch-synthesis.md) (Preview) to asynchronously synthesize text-to-speech files longer than 10 minutes (for example, audio books or lectures). Unlike synthesis performed via the Speech SDK or speech-to-text REST API, responses aren't returned in real time. The expectation is that requests are sent asynchronously, responses are polled for, and synthesized audio is downloaded when the service makes it available.
+* **Asynchronous synthesis of long audio**: Use the [batch synthesis API](batch-synthesis.md) (Preview) to asynchronously synthesize text-to-speech files longer than 10 minutes (for example, audio books or lectures). Unlike synthesis performed via the Speech SDK or speech-to-text REST API, responses aren't returned in real-time. The expectation is that requests are sent asynchronously, responses are polled for, and synthesized audio is downloaded when the service makes it available.
* **Prebuilt neural voices**: Microsoft neural text-to-speech capability uses deep neural networks to overcome the limits of traditional speech synthesis with regard to stress and intonation in spoken language. Prosody prediction and voice synthesis happen simultaneously, which results in more fluid and natural-sounding outputs. Each prebuilt neural voice model is available at 24kHz and high-fidelity 48kHz. You can use neural voices to:
cognitive-services Create Manage Workspace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/custom-translator/how-to/create-manage-workspace.md
The person who created the workspace is the owner. Within **Workspace settings**
:::image type="content" source="../media/how-to/manage-workspace-settings-3.png" alt-text="Screenshot illustrating how to unshare a workspace.":::
+### Restrict access to workspace models
+
+> [!WARNING]
+> **Restrict access** blocks runtime translation requests to all published models in the workspace if the requests don't include the same Translator resource that was used to create the workspace.
+
+Select the **Yes** checkbox. Within a few minutes, all published models are secured from unauthorized access.
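To illustrate what this means for callers (a sketch under assumptions, not a definitive implementation): a runtime translation request against a published model passes the workspace's category ID, and, once access is restricted, must be authenticated with the Translator resource associated with that workspace. The key, region, and category ID below are placeholders.

```js
// Hypothetical key, region, and category ID: replace with values from your own
// Translator resource and Custom Translator project.
// Requires Node.js 18+ for the built-in fetch API.
async function translateWithCustomModel() {
  const url = "https://api.cognitive.microsofttranslator.com/translate" +
    "?api-version=3.0&to=de&category=<YOUR-CATEGORY-ID>";

  const response = await fetch(url, {
    method: "POST",
    headers: {
      // With "Restrict access" enabled, this key must belong to the Translator
      // resource that was used to create the workspace.
      "Ocp-Apim-Subscription-Key": "<YOUR-TRANSLATOR-KEY>",
      "Ocp-Apim-Subscription-Region": "<YOUR-RESOURCE-REGION>",
      "Content-Type": "application/json"
    },
    body: JSON.stringify([{ Text: "Hello, world." }])
  });
  console.log(await response.json());
}

translateWithCustomModel();
```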
++ ## Next steps > [!div class="nextstepaction"]
cognitive-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/language-support.md
Add more capabilities to your apps and workflows by utilizing other Cognitive Se
* [Computer Vision](../computer-vision/language-support.md) * [Speech](../speech-service/language-support.md)
-* [Language service](../language-service/index.yml)
- * Select the feature you want to use, and then **Language support** on the left navigation menu.
+* [Language service](../language-service/concepts/language-support.md)
View all [Cognitive Services](../index.yml).
cognitive-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/language-detection/language-support.md
Use this article to learn which natural languages are supported by Language Dete
> [!NOTE]
-> Languages are added as new [model versions](how-to/call-api.md#specify-the-language-detection-model) are released. The current model version for Language Detection is `2022-10-01`. However, to detect Japanese in better quality, we recommend you use the previous version `2021-11-20`.
+> Languages are added as new [model versions](how-to/call-api.md#specify-the-language-detection-model) are released. The current model version for Language Detection is `2022-10-01`.
The Language Detection feature can detect a wide range of languages, variants, dialects, and some regional/cultural languages, and return detected languages with their name and code. The returned language code parameters conform to [BCP-47](https://tools.ietf.org/html/bcp47) standard with most of them conforming to [ISO-639-1](https://www.iso.org/iso-639-language-codes.html) identifiers.
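As a rough sketch (assuming the Language REST API's `:analyze-text` endpoint and the `2022-05-01` API version; the endpoint and key are placeholders), a detection request can pin the model version mentioned in the note above:

```js
// Hypothetical endpoint and key: replace with your own Language resource values.
// Requires Node.js 18+ for the built-in fetch API.
async function detectLanguage() {
  const response = await fetch(
    "https://<your-language-resource>.cognitiveservices.azure.com/language/:analyze-text?api-version=2022-05-01",
    {
      method: "POST",
      headers: {
        "Ocp-Apim-Subscription-Key": "<YOUR-LANGUAGE-KEY>",
        "Content-Type": "application/json"
      },
      body: JSON.stringify({
        kind: "LanguageDetection",
        parameters: { modelVersion: "2022-10-01" },
        analysisInput: { documents: [{ id: "1", text: "Ce document est rédigé en français." }] }
      })
    }
  );
  const result = await response.json();
  // Each document comes back with a detected language name, ISO 639-1 code, and confidence score.
  console.log(result.results.documents[0].detectedLanguage);
}

detectLanguage();
```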
If you have content expressed in a less frequently used language, you can try La
| Armenian | `hy` | | | Assamese | `as` | 2021-01-05 | | Azerbaijani | `az` | 2021-01-05 |
+| Bashkir | `ba` | 2022-10-01 |
| Basque | `eu` | | | Belarusian | `be` | | | Bengali | `bn` | |
If you have content expressed in a less frequently used language, you can try La
| Chinese | `zh` | | | Chinese Simplified | `zh_chs` | | | Chinese Traditional | `zh_cht` | |
+| Chuvash | `cv` | 2022-10-01 |
| Corsican | `co` | 2021-01-05 | | Croatian | `hr` | | | Czech | `cs` | |
If you have content expressed in a less frequently used language, you can try La
| English | `en` | | | Esperanto | `eo` | | | Estonian | `et` | |
+| Faroese | `fo` | 2022-10-01 |
| Fijian | `fj` | 2020-09-01 | | Finnish | `fi` | | | French | `fr` | |
If you have content expressed in a less frequently used language, you can try La
| Kannada | `kn` | | | Kazakh | `kk` | 2020-09-01 | | Kinyarwanda | `rw` | 2021-01-05 |
-| Kirghiz | `ky` | 2021-01-05 |
+| Kirghiz | `ky` | 2022-10-01 |
| Korean | `ko` | | | Kurdish | `ku` | | | Lao | `lo` | |
If you have content expressed in a less frequently used language, you can try La
| Tongan | `to` | 2020-09-01 | | Turkish | `tr` | 2021-01-05 | | Turkmen | `tk` | 2021-01-05 |
+| Upper Sorbian | `hsb` | 2022-10-01 |
+| Uyghur | `ug` | 2022-10-01 |
| Ukrainian | `uk` | | | Urdu | `ur` | | | Uzbek | `uz` | |
If you have content expressed in a less frequently used language, you can try La
| Yucatec Maya | `yua` | | | Zulu | `zu` | 2021-01-05 |
+## Romanized Indic Languages supported by Language Detection
+
+| Language | Language Code | Starting with model version: |
+||||
+| Assamese | `as` | 2022-10-01 |
+| Bengali | `bn` | 2022-10-01 |
+| Gujarati | `gu` | 2022-10-01 |
+| Hindi | `hi` | 2022-10-01 |
+| Kannada | `kn` | 2022-10-01 |
+| Malayalam | `ml` | 2022-10-01 |
+| Marathi | `mr` | 2022-10-01 |
+| Oriya | `or` | 2022-10-01 |
+| Punjabi | `pa` | 2022-10-01 |
+| Tamil | `ta` | 2022-10-01 |
+| Telugu | `te` | 2022-10-01 |
+| Urdu | `ur` | 2022-10-01 |
+ ## Next steps [Language detection overview](overview.md)
cognitive-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/named-entity-recognition/language-support.md
Use this article to learn which natural languages are supported by the NER featu
> [!NOTE] > * Languages are added as new [model versions](how-to-call.md#specify-the-ner-model) are released.
-> * The language support below is for model version `2022-10-01-preview`.
+> * The language support below is for model version `2023-02-01-preview`.
## NER language support
-|Language |Language code|Supports resolution|Notes |
-||-|--||
-|Arabic |`ar` | | |
-|Chinese-Simplified |`zh-hans` |✓ |`zh` also accepted|
-|Chinese-Traditional |`zh-hant` | | |
-|Czech |`cs` | | |
-|Danish |`da` | | |
-|Dutch |`nl` |✓ | |
-|English |`en` |✓ | |
-|Finnish |`fi` | | |
-|French |`fr` |✓ | |
-|German |`de` |✓ | |
-|Hebrew |`he` | | |
-|Hindi |`hi` |✓ | |
-|Hungarian |`hu` | | |
-|Italian |`it` |✓ | |
-|Japanese |`ja` |✓ | |
-|Korean |`ko` | | |
-|Norwegian (Bokmål) |`no` | |`nb` also accepted|
-|Polish |`pl` | | |
-|Portuguese (Brazil) |`pt-BR` |✓ | |
-|Portuguese (Portugal)|`pt-PT` | |`pt` also accepted|
-|Russian |`ru` | | |
-|Spanish |`es` |✓ | |
-|Swedish |`sv` | | |
-|Turkish |`tr` |✓ | |
-
+|Language|Language Code|Supports resolution|Notes|
+|:-|:-|:-|:-|
+|Afrikaans|`af`| | |
+|Albanian|`sq`| | |
+|Amharic|`am`| | |
+|Arabic|`ar`| | |
+|Armenian|`hy`| | |
+|Assamese|`as`| | |
+|Azerbaijani|`az`| | |
+|Basque|`eu`| | |
+|Bengali|`bn`| | |
+|Bosnian|`bs`| | |
+|Bulgarian|`bg`| | |
+|Burmese|`my`| | |
+|Catalan|`ca`| | |
+|Chinese (Simplified)|`zh-Hans`|✓ |`zh` also accepted|
+|Chinese (Traditional)|`zh-Hant`| | |
+|Croatian|`hr`| | |
+|Czech|`cs`| | |
+|Danish|`da`| | |
+|Dutch|`nl`|✓ | |
+|English|`en`|✓ | |
+|Estonian|`et`| | |
+|Finnish|`fi`| | |
+|French|`fr`|✓ | |
+|Galician|`gl`| | |
+|Georgian|`ka`| | |
+|German|`de`|✓ | |
+|Greek|`el`| | |
+|Gujarati|`gu`| | |
+|Hebrew|`he`| | |
+|Hindi|`hi`|✓ | |
+|Hungarian|`hu`| | |
+|Indonesian|`id`| | |
+|Irish|`ga`| | |
+|Italian|`it`|✓ | |
+|Japanese|`ja`|✓ | |
+|Kannada|`kn`| | |
+|Kazakh|`kk`| | |
+|Khmer|`km`| | |
+|Korean|`ko`| | |
+|Kurdish (Kurmanji)|`ku`| | |
+|Kyrgyz|`ky`| | |
+|Lao|`lo`| | |
+|Latvian|`lv`| | |
+|Lithuanian|`lt`| | |
+|Macedonian|`mk`| | |
+|Malagasy|`mg`| | |
+|Malay|`ms`| | |
+|Malayalam|`ml`| | |
+|Marathi|`mr`| | |
+|Mongolian|`mn`| | |
+|Nepali|`ne`| | |
+|Norwegian (Bokmal)|`no`| |`nb` also accepted|
+|Oriya|`or`| | |
+|Pashto|`ps`| | |
+|Persian|`fa`| | |
+|Polish|`pl`| | |
+|Portuguese (Brazil)|`pt-BR`|✓ | |
+|Portuguese (Portugal)|`pt-PT`| |`pt` also accepted|
+|Punjabi|`pa`| | |
+|Romanian|`ro`| | |
+|Russian|`ru`| | |
+|Serbian|`sr`| | |
+|Slovak|`sk`| | |
+|Slovenian|`sl`| | |
+|Somali|`so`| | |
+|Spanish|`es`|✓ | |
+|Swahili|`sw`| | |
+|Swazi|`ss`| | |
+|Swedish|`sv`| | |
+|Tamil|`ta`| | |
+|Telugu|`te`| | |
+|Thai|`th`| | |
+|Turkish|`tr`|✓ | |
+|Ukrainian|`uk`| | |
+|Urdu|`ur`| | |
+|Uyghur|`ug`| | |
+|Uzbek|`uz`| | |
+|Vietnamese|`vi`| | |
+|Welsh|`cy`| | |
## Next steps
cognitive-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/whats-new.md
Azure Cognitive Service for Language is updated on an ongoing basis. To stay up-
* You can now use Azure OpenAI to automatically label or generate data during authoring. Learn more with the links below. * Auto-label your documents in [Custom text classification](./custom-text-classification/how-to/use-autolabeling.md) or [Custom named entity recognition](./custom-named-entity-recognition/how-to/use-autolabeling.md). * Generate suggested utterances in [Conversational language understanding](./conversational-language-understanding/how-to/tag-utterances.md#suggest-utterances-with-azure-openai).
+* The latest model version (2022-10-01) for Language Detection now supports 6 more International languages and 12 Romanized Indic languages.
## March 2023
Azure Cognitive Service for Language is updated on an ongoing basis. To stay up-
* China North 2 (Prediction) * New model evaluation updates for Conversational language understanding and Orchestration workflow. * New model version ('2023-01-01-preview') for Text Analytics for health featuring new [entity categories](./text-analytics-for-health/concepts/health-entity-categories.md) for social determinants of health.
-* New model version ('2023-02-01-preview') for named entity recognition features improved accuracy.
+* New model version ('2023-02-01-preview') for named entity recognition features improved accuracy and more [language support](./named-entity-recognition/language-support.md) with up to 79 languages.
## December 2022
communication-services Enable Closed Captions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/interop/enable-closed-captions.md
+
+ Title: Enable Closed captions with Teams Interoperability
+
+description: Conceptual information about closed captions in Teams interop scenarios
++++ Last updated : 03/22/2023++++
+# Enable Closed captions with Teams Interoperability
++
+Closed captions are a textual representation of a voice or video conversation that is displayed to users in real time. Azure Communication Services closed captions give developers the ability to let users choose when they want to toggle captions on or off. These captions are only available during the call or meeting for the user who has selected to enable captions; ACS does **not** store these captions anywhere. Closed captions can be accessed through the Azure Communication Services client-side SDKs for Web, Windows, iOS, and Android.
+
+In this document, we look specifically at Teams interoperability scenarios: for example, an Azure Communication Services user who joins a Teams meeting and enables captions, or two Microsoft 365 users who use the Azure Communication Services Calling SDK to join a call or meeting.
+
+## Supported scenarios
+
+### Basic Closed Captions
+| Feature | ACS user | Microsoft 365 user with ACS calling SDK | Teams user in Teams app |
+| - | | -- | -- |
+| Enable captions in ACS call | N/A | ✔︎ | ✔︎ |
+| Enable captions in ACS rooms | N/A | ✔︎ | ✔︎ |
+| Enable captions in Teams meeting | ✔︎ | ✔︎ | ✔︎ |
+| Enable captions in Teams call | ✔︎ | ✔︎ | ✔︎ |
+
+*✕ = not supported in the current release.*
+
+### Translated Captions
+
+| Feature | ACS user | Microsoft 365 user with ACS calling SDK | Teams user in Teams app |
+| - | -- | -- | -- |
+| Enable captions in ACS call | N/A | ✔︎ | ✔︎ |
+| Enable captions in ACS rooms | N/A | ✔︎ | ✔︎ |
+| Enable captions in Teams meeting | ✔︎ | ✔︎ | ✔︎ |
+| Enable captions in Teams call | ✔︎ | ✔︎ | ✔︎ |
+
+*Usage of translations through Teams-generated captions requires the organizer to have a Teams Premium license assigned, or, in the case of Microsoft 365 users, they must have a Teams Premium license. More information about Teams Premium can be found [here](https://www.microsoft.com/microsoft-teams/premium#tabx93f55452286a4264a2778ef8902fb81a).*
+
+In scenarios where there's a Teams user on a Teams client or a Microsoft 365 user with ACS SDKs in the call, the developer can use Teams captions. This allows developers to work with the Teams captioning technology that they may already be familiar with today. With Teams captions, developers are limited to what their Teams license allows. Basic captions allow only one spoken and one caption language for the call. With a Teams Premium license, developers can use the translation functionality offered by Teams to provide one spoken language for the call and translated caption languages on a per-user basis. In a Teams interop scenario, captions enabled through ACS follow the same policies that are defined in Teams for [meetings](/powershell/module/skype/set-csteamsmeetingpolicy) and [calls](/powershell/module/skype/set-csteamscallingpolicy).
+
+## Common use cases
+
+### Building accessible experiences
+Accessibility: Closed captions help people with hearing impairments, or who are new to the language, participate in calls and meetings. A key feature requirement in the telemedical industry is to help patients communicate effectively with their health care providers.
+
+### Teams interoperability
+Use Teams: Organizations using ACS and Teams can use Teams closed captions to improve their applications by providing closed captions capabilities to users. Those organizations can keep using Microsoft Teams for all calls and meetings without third-party applications providing this capability.
+
+### Global inclusivity
+Provide translation: Use the translation functions to provide translated captions for users who may be new to the language. For companies that operate at a global scale and have offices around the world, teams can have conversations even if some people aren't familiar with the spoken language.
+
+## Sample architecture of ACS user using captions in a Teams meeting
+![Diagram of Teams meeting interop](./media/acs-teams-interop-captions.png)
+
+## Sample architecture of an ACS user using captions in a meeting with a Microsoft 365 user on ACS SDK
+![Diagram of CTE user](./media/m365-captions-interop.png)
++
+## Privacy concerns
+
+Closed captions are only available during the call or meeting for the participant who has selected to enable captions; ACS does not store these captions anywhere. Many countries and states have laws and regulations that apply to the storing of data. It is your responsibility to use closed captions in compliance with the law should you choose to store any of the data generated through closed captions. You must obtain consent from the parties involved in a manner that complies with the laws applicable to each participant.
+
+Interoperability between Azure Communication Services and Microsoft Teams enables your applications and users to participate in Teams calls, meetings, and chats. It is your responsibility to ensure that the users of your application are notified when closed captions are enabled in a Teams call or meeting and being stored.
+
+Microsoft indicates to you via the Azure Communication Services API that recording or closed captions has commenced, and you must communicate this fact, in real-time, to your users within your application's user interface. You agree to indemnify Microsoft for all costs and damages incurred due to your failure to comply with this obligation.
+
+## Next steps
+
+- Learn how to use [closed captions for Teams interoperability](../../how-tos/calling-sdk/closed-captions-teams-interop-how-to.md).
++
communication-services Closed Captions Teams Interop How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/calling-sdk/closed-captions-teams-interop-how-to.md
+
+ Title: Enable Closed captions during a call
+
+description: Provides a how-to guide enabling Closed captions during a call.
++++ Last updated : 03/20/2023+++
+zone_pivot_groups: acs-plat-web-ios-android-windows
++
+# Enable Closed captions for Teams interoperability
+
+Learn how to allow your users to enable closed captions during a Teams interoperability scenario where your users might be in a meeting between an ACS user and a Teams client user, or where your users are using the ACS calling SDK with their Microsoft 365 identity.
++++++
+## Clean up resources
+
+If you want to clean up and remove a Communication Services subscription, you can delete the resource or resource group. Deleting the resource group also deletes any other resources associated with it. Learn more about [cleaning up resources here](../../quickstarts/create-communication-resource.md#clean-up-resources).
+
+## Next steps
+
+For more information, see the following articles:
+- Learn about [Voice](./manage-calls.md) and [Video calling](./manage-video.md).
+- Learn about [Teams interoperability](./teams-interoperability.md).
+- Learn more about Microsoft Teams [live translated captions](https://support.microsoft.com//office/use-live-captions-in-a-teams-meeting-4be2d304-f675-4b57-8347-cbd000a21260).
communication-services Lobby Admit And Reject https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/calling-sdk/lobby-admit-and-reject.md
+
+ Title: Admit and reject users from Teams meeting lobby
+
+description: Use Azure Communication Services SDKs to admit or reject users from Teams meeting lobby.
+++++ Last updated : 03/14/2023++++
+# Manage Teams meeting lobby
+
+The lobby admit and reject APIs on the `Call` or `TeamsCall` class allow users to admit and reject participants from the Teams meeting lobby.
+
+In this article, you learn how to admit and reject participants from the Microsoft Teams meeting lobby by using the Azure Communication Services calling SDKs.
+
+## Prerequisites
+
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- A deployed Communication Services resource. [Create a Communication Services resource](../../quickstarts/create-communication-resource.md).
+- A user access token to enable the calling client. For more information, see [Create and manage access tokens](../../quickstarts/access-tokens.md).
+- Optional: Complete the quickstart to [add voice calling to your application](../../quickstarts/voice-video-calling/getting-started-with-calling.md)
+
+Users end up in the lobby depending on the Microsoft Teams configuration. The controls are described here:
+[Learn more about Teams configuration](../../concepts/interop/guest/teams-administration.md)
+
+Microsoft 365 or Azure Communication Services users can admit or reject users from the lobby if they're connected to the Teams meeting and have the Organizer, Co-organizer, or Presenter meeting role.
+[Learn more about meeting roles](https://support.microsoft.com/office/roles-in-a-teams-meeting-c16fa7d0-1666-4dde-8686-0a0bfe16e019)
+
+To update or check current meeting join & lobby policies in Teams admin center:
+[Learn more about Teams policies](/microsoftteams/settings-policies-reference#automatically-admit-people)
++
+### Get remote participant properties
+
+The first step is to get the `Call` or `TeamsCall` object of the admitting user: [Learn how to join a Teams meeting](./teams-interoperability.md)
+
+To know who is in the lobby, you can check the state of a remote participant. A `remoteParticipant` with the `InLobby` state indicates that the remote participant is in the lobby.
+To get the `remoteParticipants` collection:
+
+```js
+let remoteParticipants = call.remoteParticipants; // [remoteParticipant, remoteParticipant....]
+```
+
+To get the state of a remote participant:
+
+```js
+const state = remoteParticipant.state;
+```
+
+You could check remote participant state in subscription method:
+[Learn more about events and subscription ](./events.md)
+
+```js
+// Subscribe to a call obj.
+// Listen for property changes and collection updates.
+subscribeToCall = (call) => {
+ try {
+ // Inspect the call's current remote participants and subscribe to them.
+ call.remoteParticipants.forEach(remoteParticipant => {
+ subscribeToRemoteParticipant(remoteParticipant);
+ })
+ // Subscribe to the call's 'remoteParticipantsUpdated' event to be
+ // notified when new participants are added to the call or removed from the call.
+ call.on('remoteParticipantsUpdated', e => {
+ // Subscribe to new remote participants that are added to the call.
+ e.added.forEach(remoteParticipant => {
+ subscribeToRemoteParticipant(remoteParticipant)
+ });
+ // Unsubscribe from participants that are removed from the call
+ e.removed.forEach(remoteParticipant => {
+ console.log('Remote participant removed from the call.');
+ })
+ });
+ } catch (error) {
+ console.error(error);
+ }
+}
+
+// Subscribe to a remote participant obj.
+// Listen for property changes and collection updates.
+subscribeToRemoteParticipant = (remoteParticipant) => {
+ try {
+ // Inspect the initial remoteParticipant.state value.
+ console.log(`Remote participant state: ${remoteParticipant.state}`);
+ if(remoteParticipant.state === 'InLobby'){
+ console.log(`${remoteParticipant._displayName} is in the lobby`);
+ }
+ // Subscribe to remoteParticipant's 'stateChanged' event for value changes.
+ remoteParticipant.on('stateChanged', () => {
+ console.log(`Remote participant state changed: ${remoteParticipant.state}`);
+ if(remoteParticipant.state === 'InLobby'){
+ console.log(`${remoteParticipant._displayName} is in the lobby`);
+ }
+ else if(remoteParticipant.state === 'Connected'){
+ console.log(`${remoteParticipant._displayName} is in the meeting`);
+ }
+ });
+ } catch (error) {
+ console.error(error);
+ }
+}
+```
+
+Before admitting or rejecting a `remoteParticipant` with the `InLobby` state, you can get the identifier of the remote participant:
+
+```js
+const identifier = remoteParticipant.identifier;
+```
+
+The `identifier` can be one of the following `CommunicationIdentifier` types:
+
+- `{ communicationUserId: '<COMMUNICATION_SERVICES_USER_ID'> }`: Object representing the Azure Communication Services user.
+- `{ phoneNumber: '<PHONE_NUMBER>' }`: Object representing the phone number in E.164 format.
+- `{ microsoftTeamsUserId: '<MICROSOFT_TEAMS_USER_ID>', isAnonymous?: boolean; cloud?: "public" | "dod" | "gcch" }`: Object representing the Teams user.
+- `{ id: string }`: object representing identifier that doesn't fit any of the other identifier types
+
+### Start lobby operations
+
+To admit, reject or admit all users from the lobby, you can use the `admit`, `rejectParticipant` and `admitAll` asynchronous APIs:
+
+You can admit a specific user to the Teams meeting from the lobby by calling the `admit` method on the `TeamsCall` or `Call` object. The method accepts the identifiers `MicrosoftTeamsUserIdentifier`, `CommunicationUserIdentifier`, `PhoneNumberIdentifier`, or `UnknownIdentifier` as input.
+
+```js
+await call.admit(identifier);
+```
+
+You can also reject a specific user from the Teams meeting lobby by calling the `rejectParticipant` method on the `TeamsCall` or `Call` object. The method accepts the identifiers `MicrosoftTeamsUserIdentifier`, `CommunicationUserIdentifier`, `PhoneNumberIdentifier`, or `UnknownIdentifier` as input.
+
+```js
+await call.rejectParticipant(identifier);
+```
+
+You can also admit all users in the lobby by calling the `admitAll` method on the `TeamsCall` or `Call` object.
+
+```js
+await call.admitAll();
+```
+
+## Next steps
+- [Learn how to manage calls](./manage-calls.md)
+- [Learn how to manage Teams calls](../cte-calling-sdk/manage-calls.md)
+- [Learn how to join Teams meeting](./teams-interoperability.md)
+- [Learn how to manage video](./manage-video.md)
communication-services Raise Hand https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/calling-sdk/raise-hand.md
Title: Raise hand states
-description: Use Azure Communication Services SDKs to send raise hand state.
+description: Use Azure Communication Services SDKs to send raised hand state.
Last updated 09/09/2022
-zone_pivot_groups: acs-web-android
+zone_pivot_groups: acs-plat-web-ios-android-windows
#Customer intent: As a developer, I want to learn how to send and receive Raise Hand state using SDK.
During an active call, you may want to send or receive states from other users.
[!INCLUDE [Raise Hand Client-side Android](./includes/raise-hand/raise-hand-android.md)] ::: zone-end ++
+## Additional resources
+For more information about using the Raise Hand feature in Teams calls and meetings, see the [Microsoft Teams documentation](https://support.microsoft.com/en-us/office/raise-your-hand-in-a-teams-meeting-bb2dd8e1-e6bd-43a6-85cf-30822667b372).
++ ## Next steps - [Learn how to manage calls](./manage-calls.md) - [Learn how to manage video](./manage-video.md)
communication-services Domain Validation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/telephony/domain-validation.md
Make sure to add and verify domain name portion of the FQDN and keep in mind tha
1. Reenter the domain name. 1. Select Confirm and then select Add.
-[ ![Screenshot of adding a custom domain.](./media/direct-routing-add-domain.png)](./media/direct-routing-add-domain.png#lightbox)
+[![Screenshot of adding a custom domain.](./media/direct-routing-add-domain.png)](./media/direct-routing-add-domain.png#lightbox)
#### Verify domain ownership 1. Select Verify next to the new domain that is now visible in the domain's list. 1. Azure portal generates a value for a TXT record, you need to add that record to
-[ ![Screenshot of verifying a custom domain.](./media/direct-routing-verify-domain-2.png)](./media/direct-routing-verify-domain-2.png#lightbox)
+[![Screenshot of verifying a custom domain.](./media/direct-routing-verify-domain-2.png)](./media/direct-routing-verify-domain-2.png#lightbox)
>[!Note] >It might take up to 30 minutes for the new DNS record to propagate on the Internet. 3. Select Next. If everything is set up correctly, you should see the Domain status changed to *Verified* next to the added domain.
-[ ![Screenshot of a verified domain.](./media/direct-routing-domain-verified.png)](./media/direct-routing-domain-verified.png#lightbox)
+[![Screenshot of a verified domain.](./media/direct-routing-domain-verified.png)](./media/direct-routing-domain-verified.png#lightbox)
#### Remove domain from Azure Communication Services If you want to remove a domain from your Azure Communication Services direct routing configuration, select the checkbox for the corresponding domain name, and select *Remove*.
-[ ![Screenshot of removing a custom domain.](./media/direct-routing-remove-domain.png)](./media/direct-routing-remove-domain.png#lightbox)
+[![Screenshot of removing a custom domain.](./media/direct-routing-remove-domain.png)](./media/direct-routing-remove-domain.png#lightbox)
## Next steps:
communication-services Send Email https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/email/send-email.md
Previously updated : 03/31/2023 Last updated : 04/10/2023
-zone_pivot_groups: acs-azcli-js-csharp-java-python-power-platform
+zone_pivot_groups: acs-azcli-js-csharp-java-python-logic-apps
# Quickstart: How to send an email using Azure Communication Service
In this quick start, you'll learn about how to send email using our Email SDKs.
[!INCLUDE [Send Email with Python SDK](./includes/send-email-python.md)] ::: zone-end ::: zone-end ## Troubleshooting
-To troubleshoot issues related to email delivery, you can get status of the email delivery to capture delivery details.
+To troubleshoot issues related to email delivery, you can get the status of the email delivery to capture delivery details.
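For example, with the JavaScript Email SDK (`@azure/communication-email`), the send operation returns a poller whose final result carries the status. The connection string, sender address, and recipient below are placeholders, and the exact shape may differ slightly across SDK versions.

```js
// Hypothetical connection string, sender, and recipient: replace with your own values.
const { EmailClient } = require("@azure/communication-email");

async function sendAndCheckStatus() {
  const client = new EmailClient("<COMMUNICATION-SERVICES-CONNECTION-STRING>");

  const message = {
    senderAddress: "<donotreply@your-verified-domain.azurecomm.net>",
    recipients: { to: [{ address: "<recipient@example.com>" }] },
    content: { subject: "Test email", plainText: "Hello from Azure Communication Services." }
  };

  // beginSend returns a poller; polling until done surfaces the final delivery status.
  const poller = await client.beginSend(message);
  const result = await poller.pollUntilDone();
  console.log(`Send status: ${result.status}`); // For example, "Succeeded" or "Failed".
}

sendAndCheckStatus();
```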
## Clean up Azure Communication Service resources
-If you want to clean up and remove a Communication Services subscription, you can delete the resource or resource group. Deleting the resource group also deletes any other resources associated with it. Learn more about [cleaning up resources](../create-communication-resource.md#clean-up-resources).
+To clean up and remove a Communication Services subscription, you can delete the resource or resource group. Deleting the resource group also deletes any other associated resources. Learn more about [cleaning up resources](../create-communication-resource.md#clean-up-resources).
## Next steps
-In this quick start, you learned how to send emails using Azure Communication Services.
+In this quickstart, you learned how to send emails using Azure Communication Services. You might also want to:
-You may also want to:
-
+ - Learn about [Email concepts](../../concepts/email/email-overview.md).
+ - Familiarize yourself with [email client library](../../concepts/email/sdk-features.md).
- Learn more about [how to send a chat message](../chat/logic-app.md) from Power Automate using Azure Communication Services.
+ - Learn more about access tokens in [Create and Manage Azure Communication Services users and access tokens](../chat/logic-app.md).
confidential-computing Confidential Enclave Nodes Aks Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/confidential-enclave-nodes-aks-get-started.md
Previously updated : 3/1/2023 Last updated : 04/11/2023
Features of confidential computing nodes include:
> [!NOTE] > DCsv2/DCsv3 VMs use specialized hardware that's subject to regional availability. For more information, see the [available SKUs and supported regions](virtual-machine-solutions-sgx.md). - ## Prerequisites This quickstart requires:
Now create an AKS cluster, with the confidential computing add-on enabled, by us
```azurecli-interactive az aks create -g myResourceGroup --name myAKSCluster --generate-ssh-keys --enable-addons confcom ```+ The above command deploys a new AKS cluster with a system node pool of non-confidential computing nodes. Confidential computing Intel SGX nodes aren't recommended for system node pools. ### Add a user node pool with confidential computing capabilities to the AKS cluster<a id="add-a-user-node-pool-with-confidential-computing-capabilities-to-the-aks-cluster"></a>
az aks get-credentials --resource-group myResourceGroup --name myAKSCluster
Use the `kubectl get pods` command to verify that the nodes are created properly and the SGX-related DaemonSets are running on DCsv2 node pools:
-```console
-$ kubectl get pods --all-namespaces
+```bash
+kubectl get pods --all-namespaces
+```
+```output
kube-system sgx-device-plugin-xxxx 1/1 Running ```
az aks nodepool list --cluster-name myAKSCluster --resource-group myResourceGrou
Sign in to your existing AKS cluster to perform the following verification:
-```console
+```bash
kubectl get nodes ``` The output should show the newly added *confcompool1* pool on the AKS cluster. You might also see other DaemonSets.
-```console
-$ kubectl get pods --all-namespaces
+```bash
+kubectl get pods --all-namespaces
+```
+```output
kube-system sgx-device-plugin-xxxx 1/1 Running ``` If the output matches the preceding code, your AKS cluster is now ready to run confidential applications. ## Deploy Hello World from an isolated enclave application <a id="hello-world"></a>+ You're now ready to deploy a test application. Create a file named *hello-world-enclave.yaml* and paste in the following YAML manifest. You can find this sample application code in the [Open Enclave project](https://github.com/openenclave/openenclave/tree/master/samples/helloworld). This deployment assumes that you've deployed the *confcom* add-on.
spec:
sgx.intel.com/epc: 5Mi # This limit will automatically place the job into a confidential computing node and mount the required driver volumes. sgx limit setting needs "confcom" AKS Addon as referenced above. restartPolicy: Never backoffLimit: 0
- ```
+```
+ Alternatively you can also do a node pool selection deployment for your container deployments as shown below ```yaml
spec:
kubernetes.azure.com/sgx_epc_mem_in_MiB: 10 restartPolicy: "Never" backoffLimit: 0
- ```
+```
Now use the `kubectl apply` command to create a sample job that will open in a secure enclave, as shown in the following example output:
-```console
-$ kubectl apply -f hello-world-enclave.yaml
+```bash
+kubectl apply -f hello-world-enclave.yaml
+```
+```output
job "sgx-test" created ``` You can confirm that the workload successfully created a Trusted Execution Environment (enclave) by running the following commands:
-```console
-$ kubectl get jobs -l app=sgx-test
+```bash
+kubectl get jobs -l app=sgx-test
+```
+```output
NAME COMPLETIONS DURATION AGE sgx-test 1/1 1s 23s ```
-```console
-$ kubectl get pods -l app=sgx-test
+```bash
+kubectl get pods -l app=sgx-test
+```
+```output
NAME READY STATUS RESTARTS AGE sgx-test-rchvg 0/1 Completed 0 25s ```
-```console
-$ kubectl logs -l app=sgx-test
+```bash
+kubectl logs -l app=sgx-test
+```
+```output
Hello world from the enclave Enclave called into host to print: Hello World! ```
az aks delete --resource-group myResourceGroup --cluster-name myAKSCluster
## Next steps
-* Run Python, Node, or other applications through confidential containers using ISV/OSS SGX wrapper software. Review [confidential container samples in GitHub](https://github.com/Azure-Samples/confidential-container-samples).
+- Run Python, Node, or other applications through confidential containers using ISV/OSS SGX wrapper software. Review [confidential container samples in GitHub](https://github.com/Azure-Samples/confidential-container-samples).
-* Run enclave-aware applications by using the [enclave-aware Azure container samples in GitHub](https://github.com/Azure-Samples/confidential-computing/blob/main/containersamples/).
+- Run enclave-aware applications by using the [enclave-aware Azure container samples in GitHub](https://github.com/Azure-Samples/confidential-computing/blob/main/containersamples/).
<!-- LINKS --> [az-group-create]: /cli/azure/group#az_group_create
confidential-computing Guest Attestation Example https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/guest-attestation-example.md
Previously updated : 11/14/2022 Last updated : 04/11/2023
Depending on your [type of scenario](guest-attestation-confidential-vms.md#scena
## Prerequisites - An Azure subscription.-- An Azure [confidential VM](quick-create-confidential-vm-portal-amd.md) or a [VM with trusted launch enabled](../virtual-machines/trusted-launch-portal.md). You can use a Linux or Windows VM.
+- An Azure [confidential VM](quick-create-confidential-vm-portal-amd.md) or a [VM with trusted launch enabled](../virtual-machines/trusted-launch-portal.md). You can use an Ubuntu Linux VM or a Windows VM.
+ ## Use sample application To use a sample application in C++ for use with the guest attestation APIs, follow the instructions for your operating system (OS).
-#### [Linux](#tab/linux)
+#### [Ubuntu](#tab/linux)
1. Sign in to your VM.
To use a sample application in C++ for use with the guest attestation APIs, foll
- ## Next steps - [Learn how to use Microsoft Defender for Cloud integration with confidential VMs with guest attestation installed](guest-attestation-defender-for-cloud.md)
confidential-computing Quick Create Confidential Vm Arm Amd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/quick-create-confidential-vm-arm-amd.md
Previously updated : 7/14/2022 Last updated : 04/12/2023 ms.devlang: azurecli
To create and deploy your confidential VM using an ARM template through the Azur
1. Sign in to your Azure account in the Azure CLI.
- ```azurecli
+ ```azurecli-interactive
az login ``` 1. Set your Azure subscription. Replace `<subscription-id>` with your subscription identifier. Make sure to use a subscription that meets the [prerequisites](#prerequisites).
- ```azurecli
+ ```azurecli-interactive
az account set --subscription <subscription-id> ```
To create and deploy your confidential VM using an ARM template through the Azur
``` If the resource group you specified doesn't exist, create a resource group with that name.
-
- ```azurecli
+
+ ```azurecli-interactive
az group create -n $resourceGroup -l $region ``` 1. Deploy your VM to Azure using an ARM template with a custom parameter file
-
- ```azurecli
+ ```azurecli-interactive
az deployment group create ` -g $resourceGroup ` -n $deployName `
To create and deploy your confidential VM using an ARM template through the Azur
vmName=$vmName ``` - ### Define custom parameter file When you create a confidential VM through the Azure Command-Line Interface (Azure CLI), you need to define a custom parameter file. To create a custom JSON parameter file:
Use this example to create a custom parameter file for a Linux-based confidentia
} ```
+> [!NOTE]
+> Replace the osImageName value accordingly.
+ ## Deploy confidential VM template with OS disk confidential encryption via customer-managed key 1. Sign in to your Azure account through the Azure CLI.
Use this example to create a custom parameter file for a Linux-based confidentia
1. Set your Azure subscription. Replace `<subscription-id>` with your subscription identifier. Make sure to use a subscription that meets the [prerequisites](#prerequisites).
- ```azurecli
+ ```azurecli-interactive
az account set --subscription <subscription-id> ```+ 1. Grant confidential VM Service Principal `Confidential VM Orchestrator` to tenant For this step you need to be a Global Admin or you need to have the User Access Administrator RBAC role.
-
- ```azurecli
+
+ ```azurecli-interactive
Connect-AzureAD -Tenant "your tenant ID" New-AzureADServicePrincipal -AppId bf7b6499-ff71-4aa2-97a4-f372087be7f0 -DisplayName "Confidential VM Orchestrator" ```+ 1. Set up your Azure key vault. For how to use an Azure Key Vault Managed HSM instead, see the next step. 1. Create a resource group for your key vault. Your key vault instance and your confidential VM must be in the same Azure region.
-
- ```azurecli
+
+ ```azurecli-interactive
$resourceGroup = <key vault resource group> $region = <Azure region> az group create --name $resourceGroup --location $region ```
-
+ 1. Create a key vault instance with a premium SKU in your preferred region.
-
- ```azurecli
+
+ ```azurecli-interactive
$KeyVault = <name of key vault> az keyvault create --name $KeyVault --resource-group $resourceGroup --location $region --sku Premium --enable-purge-protection ``` 1. Make sure that you have an **owner** role in this key vault.
-
1. Give `Confidential VM Orchestrator` permissions to `get` and `release` the key vault.
-
- ```azurecli
+
+ ```azurecli-interactive
$cvmAgent = az ad sp show --id "bf7b6499-ff71-4aa2-97a4-f372087be7f0" | Out-String | ConvertFrom-Json az keyvault set-policy --name $KeyVault --object-id $cvmAgent.objectId --key-permissions get release ```
Use this example to create a custom parameter file for a Linux-based confidentia
1. (Optional) If you don't want to use an Azure key vault, you can create an Azure Key Vault Managed HSM instead. 1. Follow the [quickstart to create an Azure Key Vault Managed HSM](../key-vault/managed-hsm/quick-create-cli.md) to provision and activate Azure Key Vault Managed HSM.
-
1. Enable purge protection on the Azure Managed HSM. This step is required to enable key release.
- ```azurecli
+ ```azurecli-interactive
az keyvault update-hsm --subscription $subscriptionId -g $resourceGroup --hsm-name $hsm --enable-purge-protection true ``` - 1. Give `Confidential VM Orchestrator` permissions to managed HSM.
-
- ```azurecli
+
+ ```azurecli-interactive
$cvmAgent = az ad sp show --id "bf7b6499-ff71-4aa2-97a4-f372087be7f0" | Out-String | ConvertFrom-Json az keyvault role assignment create --hsm-name $hsm --assignee $cvmAgent.objectId --role "Managed HSM Crypto Service Release User" --scope /keys/$KeyName ```
Use this example to create a custom parameter file for a Linux-based confidentia
1. Create a new key using Azure Key Vault. For how to use an Azure Managed HSM instead, see the next step. 1. Prepare and download the [key release policy](https://cvmprivatepreviewsa.blob.core.windows.net/cvmpublicpreviewcontainer/skr-policy.json) to your local disk.
-
1. Create a new key.
- ```azurecli
+ ```azurecli-interactive
$KeyName = <name of key> $KeySize = 3072 az keyvault key create --vault-name $KeyVault --name $KeyName --ops wrapKey unwrapkey --kty RSA-HSM --size $KeySize --exportable true --policy "@.\skr-policy.json" ``` 1. Get information about the key that you created.
-
- ```azurecli
+
+ ```azurecli-interactive
$encryptionKeyVaultId = ((az keyvault show -n $KeyVault -g $resourceGroup) | ConvertFrom-Json).id $encryptionKeyURL= ((az keyvault key show --vault-name $KeyVault --name $KeyName) | ConvertFrom-Json).key.kid ```
-
+ 1. Deploy a Disk Encryption Set (DES) using a [DES ARM template](https://cvmprivatepreviewsa.blob.core.windows.net/cvmpublicpreviewcontainer/deploymentTemplate/deployDES.json) (`deployDES.json`).
- ```azurecli
+ ```azurecli-interactive
$desName = <name of DES> $deployName = <name of deployment> $desArmTemplate = <name of DES ARM template file>
Use this example to create a custom parameter file for a Linux-based confidentia
1. Assign key access to the DES file.
- ```azurecli
+ ```azurecli-interactive
$desIdentity= (az disk-encryption-set show -n $desName -g $resourceGroup --query [identity.principalId] -o tsv) az keyvault set-policy -n $KeyVault `
Use this example to create a custom parameter file for a Linux-based confidentia
``` 1. (Optional) Create a new key from an Azure Managed HSM.- 1. Prepare and download the [key release policy](https://cvmprivatepreviewsa.blob.core.windows.net/cvmpublicpreviewcontainer/skr-policy.json) to your local disk.
-
1. Create the new key.
- ```azurecli
+ ```azurecli-interactive
$KeyName = <name of key> $KeySize = 3072 az keyvault key create --hsm-name $hsm --name $KeyName --ops wrapKey unwrapkey --kty RSA-HSM --size $KeySize --exportable true --policy "@.\skr-policy.json" ``` 1. Get information about the key that you created.
-
- ```azurecli
+
+ ```azurecli-interactive
$encryptionKeyURL = ((az keyvault key show --hsm-name $hsm --name $KeyName) | ConvertFrom-Json).key.kid ```
-
+ 1. Deploy a DES.
- ```azurecli
+ ```azurecli-interactive
$desName = <name of DES> az disk-encryption-set create -n $desName ` -g $resourceGroup `
Use this example to create a custom parameter file for a Linux-based confidentia
1. Assign key access to the DES.
- ```azurecli
+ ```azurecli-interactive
desIdentity=$(az disk-encryption-set show -n $desName -g $resourceGroup --query [identity.principalId] -o tsv) az keyvault set-policy -n $hsm ` -g $resourceGroup `
Use this example to create a custom parameter file for a Linux-based confidentia
``` 1. Deploy your confidential VM with the customer-managed key.
-
+ 1. Get the resource ID for the DES.
- ```azurecli
+ ```azurecli-interactive
$desID = (az disk-encryption-set show -n $desName -g $resourceGroup --query [id] -o tsv) ```
-
+ 1. Deploy your confidential VM using the [confidential VM ARM template](https://cvmprivatepreviewsa.blob.core.windows.net/cvmpublicpreviewcontainer/deploymentTemplate/deployCPSCVM_cmk.json) (`deployCPSCVM_cmk.json`) and a [deployment parameter file](#example-deployment-parameter-file) (for example, `azuredeploy.parameters.win2022.json`) with the customer-managed key.
-
- ```azurecli
+
+ ```azurecli-interactive
$deployName = <name of deployment> $vmName = <name of confidential VM> $cvmArmTemplate = <name of confidential VM ARM template file>
Use this example to create a custom parameter file for a Linux-based confidentia
``` 1. Connect to your confidential VM to make sure the creation was successful.
-
+ ### Example deployment parameter file This is an example parameter file for a Windows Server 2022 Gen 2 confidential VM:
This is an example parameter file for a Windows Server 2022 Gen 2 confidential V
} } }
-```
+```
## Next steps
container-apps Networking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/networking.md
There are two architectures in Container Apps: the Consumption only architecture
| Architecture Type | Description | |--|-|
-| Workload profiles architecture (preview) | Supports user defined routes (UDR) and egress through NAT Gateway when using a custom virtual network. The minimum required subnet size is /27. |
+| Workload profiles architecture (preview) | Supports user defined routes (UDR) and egress through NAT Gateway. The minimum required subnet size is /27. <br /> <br /> As workload profiles are currently in preview, the number of supported regions is limited. To learn more, visit the [workload profiles overview](./workload-profiles-overview.md#supported-regions).|
| Consumption only architecture | Doesn't support user defined routes (UDR) and egress through NAT Gateway. The minimum required subnet size is /23. | ## Accessibility Levels
container-apps Waf App Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/waf-app-gateway.md
This step is required for internal only container app environments as it allows
| Setting | Action | |||
- | Name | Enter **my-agw-private-link. |
+ | Name | Enter **my-agw-private-link**. |
| Private link subnet | Select the subnet you wish to create the private link with. | | Frontend IP Configuration | Select the frontend IP for your Application Gateway. |
container-apps Workload Profiles Manage Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/workload-profiles-manage-cli.md
Previously updated : 04/10/2023 Last updated : 04/11/2023
+zone_pivot_groups: container-apps-vnet-types
# Manage workload profiles in a Consumption + Dedicated workload profiles plan structure (preview)
+Learn to manage a Container Apps environment with workload profile support.
+ ## Supported regions The following regions support workload profiles during preview:
The following regions support workload profiles during preview:
## Create a container app in a profile
-At a high level, when you create a container app into a workload profile, you go through the following steps:
+
+Azure Container Apps run in an environment, which uses a virtual network (VNet). By default, your Container App environment is created with a managed VNet that is automatically generated for you. Generated VNets are inaccessible to you as they're created in Microsoft's tenant.
+
+Create a container apps environment with a [custom VNet](./workload-profiles-manage-cli.md?pivots=aca-vnet-custom) if you need any of the following features:
+
+- [User defined routes](user-defined-routes.md)
+- Integration with Application Gateway
+- Network Security Groups
+- Communicating with resources behind private endpoints in your virtual network
+ -- Select a workload profile-- Create or provide a VNet-- Create a subnet with a `Microsoft.App/environments` delegation-- Create a new environment-- Create a container app associated with the workload profile in the environment
-Use the following commands to create an environment with a workload profile.
+When you create an environment with a custom VNet, you have full control over the VNet configuration. This amount of control gives you the option to implement the following features:
-1. Create a VNet
+- [User defined routes](user-defined-routes.md)
+- Integration with Application Gateway
+- Network Security Groups
+- Communicating with resources behind private endpoints in your virtual network
++
+Use the following commands to create an environment with workload profile support.
++
+1. Create a VNet.
```bash az network vnet create \
Use the following commands to create an environment with a workload profile.
--name "<VNET_NAME>" ```
-1. Create a subnet
+1. Create a subnet delegated to `Microsoft.App/environments`.
```bash az network vnet subnet create \
Use the following commands to create an environment with a workload profile.
Copy the ID value and paste into the next command.
+ The `Microsoft.App/environments` delegation is required to give the Container Apps runtime the needed control over your VNet to run workload profiles in the Container Apps environment.
+ You can specify as small as a `/27` CIDR (32 IPs-8 reserved) for the subnet. Some things to consider if you're going to specify a `/27` CIDR: - There are 11 IP addresses reserved for Container Apps infrastructure. Therefore, a `/27` CIDR has a maximum of 21 IP available addresses.
Use the following commands to create an environment with a workload profile.
||| | Every replica requires one IP. Users can't have apps with more than 21 replicas across all apps. Zero downtime deployment requires double the IPs since the old revision is running until the new revision is successfully deployed. | Every instance (VM node) requires a single IP. You can have up to 21 instances across all workload profiles, and hundreds or more replicas running on these workload profiles. |
+ ::: zone-end
+ 1. Create *Consumption + Dedicated* environment with workload profile support
+ ::: zone pivot="aca-vnet-custom"
+ >[!Note]
- > In Container Apps, you can configure whether your Container Apps will allow public ingress or only ingress from within your VNet at the environment level. In order to restrict ingress to just your VNet, you will need to set the `--internal-only` flag.
+ > In Container Apps, you can configure whether your Container Apps will allow public ingress or only ingress from within your VNet at the environment level. In order to restrict ingress to just your VNet, you need to set the `--internal-only` flag.
# [External environment](#tab/external-env)
Use the following commands to create an environment with a workload profile.
--enable-workload-profiles \ --resource-group "<RESOURCE_GROUP>" \ --name "<NAME>" \
- --location "<LOCATION>" \
- --infrastructure-subnet-resource-id "<SUBNET_ID>"
+ --location "<LOCATION>"
``` # [Internal environment](#tab/internal-env)
Use the following commands to create an environment with a workload profile.
+ ::: zone-end
+
+ ::: zone pivot="aca-vnet-managed"
+
+ ```bash
+ az containerapp env create \
+ --enable-workload-profiles \
+ --resource-group "<RESOURCE_GROUP>" \
+ --name "<NAME>" \
+ --location "<LOCATION>"
+ ```
+
+ ::: zone-end
+ This command can take up to 10 minutes to complete. 1. Check status of environment. Here, you're looking to see if the environment is created successfully.
Use the following commands to create an environment with a workload profile.
1. Create a new container app.
+ # [External environment](#tab/external-env)
+ ```azurecli az containerapp create \ --resource-group "<RESOURCE_GROUP>" \
Use the following commands to create an environment with a workload profile.
--workload-profile-name "Consumption" ```
- This command deploys the application to the built in Consumption workload profile. If you want to create an app in a dedicated workload profile, you first need to [add the profile to the environment](#add-profiles).
+ # [Internal environment](#tab/internal-env)
+
+ ```azurecli
+ az containerapp create \
+ --resource-group "<RESOURCE_GROUP>" \
+ --name "<CONTAINER_APP_NAME>" \
+ --target-port 80 \
+ --ingress internal \
+ --image mcr.microsoft.com/azuredocs/containerapps-helloworld:latest \
+ --environment "<ENVIRONMENT_NAME>" \
+ --workload-profile-name "Consumption"
+ ```
+
+
+
+ This command deploys the application to the built-in Consumption workload profile. If you want to create an app in a dedicated workload profile, you first need to [add the profile to the environment](#add-profiles).
This command creates the new application in the environment using a specific workload profile.
container-apps Workload Profiles Manage Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/workload-profiles-manage-portal.md
+
+ Title: Create a Consumption + Dedicated workload profiles environment (preview) in the Azure portal
+description: Learn to create an environment with a specialized hardware profile in the Azure portal.
++++ Last updated : 04/11/2023++++
+# Manage workload profiles in a Consumption + Dedicated workload profiles plan structure (preview) in the Azure portal
+
+Learn to manage Container Apps environments with workload profile support.
+
+## Supported regions
+
+The following regions support workload profiles during preview:
+
+- North Central US
+- North Europe
+- West Europe
+- East US
+
+<a id="create"></a>
+
+## Create a container app in a workload profile
+
+1. Open the Azure portal.
+
+1. Search for *Container Apps* in the search bar, and select **Container Apps**.
+
+1. Select **Create**.
+
+1. Create a new container app and environment.
+
+ :::image type="content" source="media/workload-profiles/azure-container-apps-new-environment.png" alt-text="Screenshot of the create a container apps environment window.":::
+
+ Enter the following values to create your new container app.
+
+ | Property | Value |
+ | | |
+ | Subscription | Select your subscription |
+ | Resource group | Select or create a resource group |
+ | Container app name | Enter your container app name |
+ | Region | Select your region. |
+ | Container Apps Environment | Select **Create New**. |
+
+1. Configure the new environment.
+
+ :::image type="content" source="media/workload-profiles/azure-container-apps-dedicated-environment.png" alt-text="Screenshot of create an Azure Container Apps Consumption + Dedicated plan environment window.":::
+
+ Enter the following values to create your environment.
+
+ | Property | Value |
+ | | |
+ | Environment name | Enter an environment name. |
+ | Plan | Select **(Preview) Consumption and Dedicated workload profiles** |
+
+ Select the new **Workload profiles** tab at the top of this section.
+
+1. Select the **Add workload profile** button.
+
+ :::image type="content" source="media/workload-profiles/azure-container-apps-add-workload-profile.png" alt-text="Screenshot of the window to add a workload profile to the container apps environment.":::
+
+1. For *Workload profile name*, enter a name.
+
+1. Next to *Workload profile size*, select **Choose size**.
+
+ :::image type="content" source="media/workload-profiles/azure-container-apps-add-workload-profile-details.png" alt-text="Screenshot of the window to select a workload profile for your container apps environment.":::
+
+1. In the *Select a workload profile size* window, select a profile from the list.
+
+ :::image type="content" source="media/workload-profiles/azure-container-apps-add-workload-profile-size.png" alt-text="Screenshot of the window to select a workload profile size.":::
+
+ General purpose profiles offer a balanced mix of cores and memory for most applications.
+
+ Memory optimized profiles offer specialized hardware with increased memory capabilities.
+
+1. Select the **Select** button.
+
+1. For the *Autoscaling instance count range*, select the minimum and maximum number of instances you want available to this workload profile.
+
+ :::image type="content" source="media/workload-profiles/azure-container-apps-workload-profile-slider.png" alt-text="Screenshot of the window to select the min and max instances for a workload profile.":::
+
+1. Select **Add**.
+
+1. Select **Create**.
+
+1. Select **Review + Create** and wait as Azure validates your configuration options.
+
+1. Select **Create** to create your container app and environment.
+
+## Add profiles
+
+Add a new workload profile to an existing environment.
+
+1. Under the *Settings* section, select **Workload profiles**.
+
+1. Select **Add**.
+
+1. For *Workload profile name*, enter a name.
+
+1. Next to *Workload profile size*, select **Choose size**.
+
+1. In the *Select a workload profile size* window, select a profile from the list.
+
+ General purpose profiles offer a balanced mix of cores and memory for most applications.
+
+ Memory optimized profiles offer specialized hardware with increased memory or compute capabilities.
+
+1. Select the **Select** button.
+
+1. For the *Autoscaling instance count range*, select the minimum and maximum number of instances you want available to this workload profile.
+
+ :::image type="content" source="media/workload-profiles/azure-container-apps-workload-profile-slider.png" alt-text="Screenshot of the window to select the minimum and maximum instances for a workload profile.":::
+
+1. Select **Add**.
+
+## Edit profiles
+
+Under the *Settings* section, select **Workload profiles**.
+
+From this window, you can:
+
+- Adjust the minimum and maximum number of instances available to a profile
+- Add new profiles
+- Delete existing profiles (except for the Consumption profile)
+
+## Delete a profile
+
+Under the *Settings* section, select **Workload profiles**. From this window, you select a profile to delete.
+
+> [!NOTE]
+> The *Consumption* workload profile can't be deleted.
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Workload profiles overview](./workload-profiles-overview.md)
container-instances Container Instances Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-overview.md
Historically, containers have offered application dependency isolation and resou
### Customer data
-The ACI service stores the minimum customer data required to ensure your container groups are running as expected. Storing customer data in a single region is currently only available in the Southeast Asia Region (Singapore) of the Asia Pacific Geo and Brazil South (Sao Paulo State) Region of Brazil Geo. For all other regions, customer data is stored in [Geo](https://azure.microsoft.com/global-infrastructure/geographies/). Please get in touch with Azure Support to learn more.
+The Azure Container Instances service doesn't store customer data. It does, however, store the subscription IDs of the Azure subscription used to create resources. Storing subscription IDs is required to ensure your container groups continue running as expected.
## Custom sizes
cosmos-db Resource Manager Template Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/gremlin/resource-manager-template-samples.md
description: Use Azure Resource Manager templates to create and configure Azure
-+ Last updated 10/14/2020
cosmos-db Hierarchical Partition Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/hierarchical-partition-keys.md
description: Learn about subpartitioning in Azure Cosmos DB, how to use the feat
-+ Last updated 05/09/2022
cosmos-db How To Manage Database Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/how-to-manage-database-account.md
description: Learn how to manage Azure Cosmos DB resources by using the Azure po
-+ Last updated 03/08/2023
cosmos-db How To Setup Cross Tenant Customer Managed Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/how-to-setup-cross-tenant-customer-managed-keys.md
description: Learn how to configure encryption with customer-managed keys for Az
-+ Last updated 09/27/2022
cosmos-db How To Setup Customer Managed Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/how-to-setup-customer-managed-keys.md
Use the Azure CLI to restore a continuous account that is already configured usi
--resource-group $resourceGroupName \ --account-name $sourceAccountName \ --target-database-account-name $targetAccountName \
- --locations regionName=$location \
+ --location $location \
--restore-timestamp $timestamp \ --assign-identity $identityId \ --default-identity "UserAssignedIdentity=$identityId" \
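Assembled from the fragments above, a hedged sketch of the full restore command with the updated `--location` parameter; the shell variables are assumptions carried over from the surrounding script:

```azurecli
# Restore a continuous backup account configured with customer-managed keys
# (variable names are assumptions based on the snippet above).
az cosmosdb restore \
    --resource-group $resourceGroupName \
    --account-name $sourceAccountName \
    --target-database-account-name $targetAccountName \
    --location $location \
    --restore-timestamp $timestamp \
    --assign-identity $identityId \
    --default-identity "UserAssignedIdentity=$identityId"
```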
cosmos-db How To Create Container https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/how-to-create-container.md
Last updated 04/07/2022
ms.devlang: csharp-+ # Create a collection in Azure Cosmos DB for MongoDB
cosmos-db How To Provision Throughput https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/how-to-provision-throughput.md
Last updated 11/17/2021
ms.devlang: csharp-+ # Provision database, container or autoscale throughput on Azure Cosmos DB for MongoDB resources
cosmos-db Manage With Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/manage-with-bicep.md
description: Use Bicep to create and configure API for MongoDB Azure Cosmos DB A
-+ Last updated 05/23/2022
cosmos-db Powershell Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/powershell-samples.md
description: Get the Azure PowerShell samples to perform common tasks in Azure
-+ Last updated 08/26/2021
cosmos-db Prevent Rate Limiting Errors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/prevent-rate-limiting-errors.md
description: Learn how to prevent your Azure Cosmos DB for MongoDB operations fr
-+ Last updated 08/26/2021
cosmos-db Quickstart Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/quickstart-dotnet.md
ms.devlang: csharp Last updated 07/06/2022-+ # Quickstart: Azure Cosmos DB for MongoDB for .NET with the MongoDB driver
cosmos-db Quickstart Nodejs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/quickstart-nodejs.md
ms.devlang: javascript Last updated 07/06/2022-+ # Quickstart: Azure Cosmos DB for MongoDB driver for Node.js
cosmos-db Quickstart Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/quickstart-python.md
ms.devlang: python Last updated 11/08/2022-+ # Quickstart: Azure Cosmos DB for MongoDB for Python with MongoDB driver
Remove-AzResourceGroup @parameters
In this quickstart, you learned how to create an Azure Cosmos DB for MongoDB account, create a database, and create a collection using the PyMongo driver. You can now dive deeper into the Azure Cosmos DB for MongoDB to import more data, perform complex queries, and manage your Azure Cosmos DB MongoDB resources. > [!div class="nextstepaction"]
-> [Options to migrate your on-premises or cloud data to Azure Cosmos DB](../migration-choices.md)
+> [Options to migrate your on-premises or cloud data to Azure Cosmos DB](../migration-choices.md)
cosmos-db Resource Manager Template Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/resource-manager-template-samples.md
description: Use Azure Resource Manager templates to create and configure Azure
-+ Last updated 05/23/2022
cosmos-db Benchmarking Framework https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/benchmarking-framework.md
Last updated 01/31/2023-+ # Measure Azure Cosmos DB for NoSQL performance with a benchmarking framework
cosmos-db Bicep Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/bicep-samples.md
description: Use Bicep to create and configure Azure Cosmos DB.
-+ Last updated 09/13/2021
cosmos-db Create Website https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/create-website.md
description: Learn how to deploy an Azure Cosmos DB account, Azure App Service W
-+ Last updated 06/19/2020
cosmos-db How To Configure Cross Origin Resource Sharing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/how-to-configure-cross-origin-resource-sharing.md
Last updated 10/11/2019 ms.devlang: javascript+ # Configure Cross-Origin Resource Sharing (CORS)
cosmos-db How To Provision Autoscale Throughput https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/how-to-provision-autoscale-throughput.md
Last updated 04/01/2022-+ # Provision autoscale throughput on database or container in Azure Cosmos DB - API for NoSQL
cosmos-db How To Time To Live https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/how-to-time-to-live.md
Last updated 05/12/2022 -+ # Configure time to live in Azure Cosmos DB
cosmos-db Manage With Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/manage-with-bicep.md
description: Use Bicep to create and configure Azure Cosmos DB for API for NoSQL
-+ Last updated 02/18/2022
cosmos-db Manage With Templates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/manage-with-templates.md
description: Use Azure Resource Manager templates to create and configure Azure
-+ Last updated 02/18/2022
cosmos-db Manage With Terraform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/manage-with-terraform.md
description: Use terraform to create and configure Azure Cosmos DB for NoSQL
+ Last updated 09/16/2022
cosmos-db Powershell Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/powershell-samples.md
description: Get the Azure PowerShell samples to perform common tasks in Azure
-+ Last updated 01/20/2021
cosmos-db Quickstart Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/quickstart-java.md
Last updated 03/16/2023 -+ # Quickstart: Build a Java app to manage Azure Cosmos DB for NoSQL data
cosmos-db Quickstart Nodejs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/quickstart-nodejs.md
ms.devlang: javascript Last updated 02/21/2023-+ # Quickstart - Azure Cosmos DB for NoSQL client library for Node.js
cosmos-db Quickstart Template Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/quickstart-template-bicep.md
Last updated 04/18/2022-+ #Customer intent: As a database admin who is new to Azure, I want to use Azure Cosmos DB to store and manage my data.
cosmos-db Quickstart Template Json https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/quickstart-template-json.md
Last updated 08/26/2021-+ #Customer intent: As a database admin who is new to Azure, I want to use Azure Cosmos DB to store and manage my data.
cosmos-db Quickstart Terraform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/quickstart-terraform.md
description: Quickstart showing how to an Azure Cosmos DB database and a contain
tags: azure-resource-manager, terraform-+
cosmos-db Samples Resource Manager Templates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/samples-resource-manager-templates.md
description: Use Azure Resource Manager templates to create and configure Azure
-+ Last updated 08/26/2021
cosmos-db Samples Terraform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/samples-terraform.md
description: Use Terraform to create and configure Azure Cosmos DB for NoSQL.
-+ Last updated 09/16/2022
cosmos-db Tutorial Deploy App Bicep Aks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/tutorial-deploy-app-bicep-aks.md
Title: 'Tutorial: Deploy an ASP.NET web application using Azure Cosmos DB for No
description: Learn how to deploy an ASP.NET MVC web application with Azure Cosmos DB for NoSQL, managed identity, and Azure Kubernetes Service by using Bicep. -+
cosmos-db Product Updates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/product-updates.md
Previously updated : 03/23/2023 Last updated : 04/11/2023 # Product updates for Azure Cosmos DB for PostgreSQL
Updates that change cluster internals, such as installing a [new minor PostgreSQ
### April 2023
+* General availability: [Representational State Transfer (REST) APIs](/rest/api/postgresqlhsc/) are now fully supported for all cluster management operations.
+* General availability: [Bicep](/azure/templates/microsoft.dbforpostgresql/servergroupsv2?pivots=deployment-language-bicep) and [ARM templates](/azure/templates/microsoft.dbforpostgresql/servergroupsv2?pivots=deployment-language-arm-template) for Azure Cosmos DB for PostgreSQL's serverGroupsv2 resource type.
* Public preview: Data Encryption at rest using [Customer Managed Keys](./concepts-customer-managed-keys.md) is now supported for all available regions. * See [this guide](./how-to-customer-managed-keys.md) for the steps to enable data encryption using customer managed keys.
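For the Bicep and ARM template support noted above, a hedged deployment sketch; the template file name and parameters are assumptions, and the template would declare a `Microsoft.DBforPostgreSQL/serverGroupsv2` resource:

```azurecli
# Deploy a Bicep template that declares a serverGroupsv2 cluster
# (file and parameter names are hypothetical).
az deployment group create \
    --resource-group "<RESOURCE_GROUP>" \
    --template-file cluster.bicep \
    --parameters clusterName="<CLUSTER_NAME>" administratorLoginPassword="<PASSWORD>"
```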
cosmos-db Restore Account Continuous Backup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/restore-account-continuous-backup.md
Last updated 03/31/2023 -+ # Restore an Azure Cosmos DB account that uses continuous backup mode
cosmos-db Autoscale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/cassandra/autoscale.md
Last updated 05/02/2022-+ # Use Azure CLI to create a API for Cassandra account, keyspace, and table with autoscale
cosmos-db Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/cassandra/create.md
-+ Last updated 02/21/2022
cosmos-db Lock https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/cassandra/lock.md
-+ Last updated 02/21/2022
cosmos-db Serverless https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/cassandra/serverless.md
-+ Last updated 02/21/2022
cosmos-db Throughput https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/cassandra/throughput.md
-+ Last updated 02/21/2022
cosmos-db Free Tier https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/common/free-tier.md
-+ Last updated 07/08/2022
cosmos-db Ipfirewall https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/common/ipfirewall.md
-+ Last updated 02/21/2022
cosmos-db Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/common/keys.md
-+ Last updated 02/21/2022
cosmos-db Regions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/common/regions.md
-+ Last updated 02/21/2022
cosmos-db Service Endpoints Ignore Missing Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/common/service-endpoints-ignore-missing-vnet.md
-+ Last updated 02/21/2022
cosmos-db Service Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/common/service-endpoints.md
-+ Last updated 02/21/2022
cosmos-db Autoscale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/gremlin/autoscale.md
Last updated 05/02/2022-+ # Use Azure CLI to create a API for Gremlin account, database, and graph with autoscale
cosmos-db Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/gremlin/create.md
-+ Last updated 02/21/2022
cosmos-db Lock https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/gremlin/lock.md
-+ Last updated 02/21/2022
cosmos-db Serverless https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/gremlin/serverless.md
Last updated 05/02/2022-+ # Use Azure CLI to create a Gremlin serverless account, database, and graph
cosmos-db Throughput https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/gremlin/throughput.md
-+ Last updated 02/21/2022
cosmos-db Autoscale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/mongodb/autoscale.md
-+ Last updated 02/21/2022
cosmos-db Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/mongodb/create.md
-+ Last updated 02/21/2022
cosmos-db Lock https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/mongodb/lock.md
-+ Last updated 02/21/2022
cosmos-db Serverless https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/mongodb/serverless.md
-+ Last updated 02/21/2022
cosmos-db Throughput https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/mongodb/throughput.md
-+ Last updated 02/21/2022
cosmos-db Autoscale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/nosql/autoscale.md
Last updated 06/22/2022-+ # Create an Azure Cosmos DB for NoSQL account, database, and container with autoscale
cosmos-db Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/nosql/create.md
-+ Last updated 02/21/2022
cosmos-db Lock https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/nosql/lock.md
-+ Last updated 02/21/2022
cosmos-db Serverless https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/nosql/serverless.md
+ Last updated 02/21/2022
cosmos-db Throughput https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/nosql/throughput.md
-+ Last updated 02/21/2022
cosmos-db Autoscale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/table/autoscale.md
Last updated 06/22/2022-+ # Use Azure CLI to create an Azure Cosmos DB for Table account and table with autoscale
cosmos-db Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/table/create.md
-+ Last updated 02/21/2022
cosmos-db Lock https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/table/lock.md
Last updated 06/16/2022-+ # Use Azure CLI for resource lock operations on Azure Cosmos DB for Table tables
cosmos-db Serverless https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/table/serverless.md
Last updated 06/16/2022-+ # Use Azure CLI to create an Azure Cosmos DB for Table serverless account and table
cosmos-db Throughput https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/table/throughput.md
-+ Last updated 02/21/2022
cosmos-db How To Create Container https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/table/how-to-create-container.md
description: Learn how to create a container in Azure Cosmos DB for Table by usi
-+ Last updated 10/16/2020
cosmos-db Manage With Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/table/manage-with-bicep.md
description: Use Bicep to create and configure Azure Cosmos DB for Table.
-+ Last updated 09/13/2021
cosmos-db Powershell Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/table/powershell-samples.md
description: Get the Azure PowerShell samples to perform common tasks in Azure
-+ Last updated 01/20/2021
cosmos-db Quickstart Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/table/quickstart-dotnet.md
ms.devlang: csharp Last updated 08/22/2022-+ # Quickstart: Azure Cosmos DB for Table for .NET
cosmos-db Resource Manager Templates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/table/resource-manager-templates.md
description: Use Azure Resource Manager templates to create and configure Azure
-+ Last updated 05/19/2020
cost-management-billing Cost Management Api Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/automate/cost-management-api-permissions.md
Last updated 07/15/2022
+
Service principal support extends to Azure-specific scopes, like management grou
## Next steps -- Learn more about Cost Management automation at [Cost Management automation overview](automation-overview.md).
+- Learn more about Cost Management automation at [Cost Management automation overview](automation-overview.md).
cost-management-billing Get Usage Data Azure Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/automate/get-usage-data-azure-cli.md
Last updated 07/15/2022
+
cost-management-billing Quick Create Budget Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/quick-create-budget-bicep.md
Last updated 08/26/2022-+ # Quickstart: Create a budget with Bicep
cost-management-billing Quick Create Budget Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/quick-create-budget-template.md
Last updated 04/05/2023-+ # Quickstart: Create a budget with an ARM template
cost-management-billing Tutorial Acm Create Budgets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/tutorial-acm-create-budgets.md
-+ # Tutorial: Create and manage Azure budgets
cost-management-billing Ea Transfers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/ea-transfers.md
Previously updated : 03/07/2023 Last updated : 04/13/2023
This article provides an overview of enterprise transfers.
## Transfer an enterprise account to a new enrollment
-An account transfer moves an account owner from one enrollment to another. All related subscriptions under the account owner will move to the target enrollment. Use an account transfer when you have multiple active enrollments and only want to move selected account owners.
+An account transfer moves an account owner from one enrollment to another. All related subscriptions under the account owner move to the target enrollment. Use an account transfer when you have multiple active enrollments and only want to move selected account owners.
-This section is for informational purposes only as the action can't be performed by an enterprise administrator. A support request is needed to transfer an enterprise account to a new enrollment.
+This section is for informational purposes only. An enterprise administrator doesn't perform the transfer actions. A support request is needed to transfer an enterprise account to a new enrollment.
Keep the following points in mind when you transfer an enterprise account to a new enrollment:
Other points to keep in mind before an account transfer:
- Your account shows the end date corresponding to the effective transfer date on the source enrollment. The same date is the start date on the target enrollment. - Your account usage incurred before the effective transfer date remains under the source enrollment.
-## Transfer enterprise enrollment to a new one
+## Transfer an old enrollment to a new enrollment
An enrollment transfer is considered when:
An enrollment transfer is considered when:
- An enrollment is in expired/extended status and a new agreement is negotiated. - You have multiple enrollments and want to combine all the accounts and billing under a single enrollment.
-This section is for informational purposes only as the action can't be performed by an enterprise administrator. A support request is needed to transfer an enterprise enrollment to a new one, unless the enrollment qualifies for [Auto enrollment transfer](#auto-enrollment-transfer).
+This section is for informational purposes only. An enterprise administrator doesn't perform the transfer actions. A support request is needed to transfer an enterprise enrollment to a new one, unless the enrollment qualifies for [Auto enrollment transfer](#auto-enrollment-transfer).
-When you request to transfer an entire enterprise enrollment to an enrollment, the following actions occur:
+When you request to transfer an old enterprise enrollment to a new enrollment, the following actions occur:
- Usage transferred may take up to 72 hours to be reflected in the new enrollment.-- If department administrator (DA) or account owner (AO) view charges were enabled on the transferred enrollment, they must be enabled on the new enrollment.-- If you're using API reports or Power BI, generate a new API key under your new enrollment.
- - For reporting, all APIs use either the old enrollment or the new one, not both. If you need reporting from APIs for the old and new enrollments, you must create your own reports.
+- If department administrator (DA) or account owner (AO) view charges were enabled on the old transferred enrollment, they must be enabled on the new enrollment.
+- If you're using API reports or Power BI, [generate a new API access key](enterprise-rest-apis.md#api-key-generation) under your new enrollment. For API use, the API access key is used for authentication to older enterprise APIs that are retiring. For more information about retiring APIs that use the API access key, see [Migrate from Azure Enterprise Reporting to Microsoft Cost Management APIs overview](../automate/migrate-ea-reporting-arm-apis-overview.md).
+ - All APIs use either the old enrollment or the new one, not both, for reporting purposes. If you need reports from APIs for the old and new enrollments, you must create your own reports.
- All Azure services, subscriptions, accounts, departments, and the entire enrollment structure, including all EA department administrators, transfer to a new target enrollment.-- The enrollment status is set to _Transferred_. The transferred enrollment is available for historic usage reporting purposes only.-- You can't add roles or subscriptions to a transferred enrollment. Transferred status prevents more usage against the enrollment.
+- The enrollment status is set to `Transferred` for the old enrollment. The old enrollment that was transferred is available for historic usage reporting purposes only.
+- You can't add roles or subscriptions to the old enrollment that was transferred. `Transferred` status prevents any new usage against the old enrollment.
- Any remaining Azure Prepayment balance in the agreement is lost, including future terms.-- If the enrollment you're transferring from has reservation purchases, the historic (past) reservation purchasing fee will remain in the source enrollment. All future purchasing fees transfer to the new enrollment. Additionally, all reservation benefits will be transferred across for use in the new enrollment.-- The historic marketplace one-time purchase fee and any monthly fixed fees already incurred on the old enrollment aren't transferred to the new enrollment. Consumption-based marketplace charges will be transferred.
+- If the old enrollment that you're transferring from has any reservation purchases, the historic (past) reservation purchasing fee remains in the old source enrollment. All future purchasing fees transfer to the new enrollment. Additionally, all reservation benefits are transferred across for use in the new enrollment.
+- The historic marketplace one-time purchase fee and any monthly fixed fees already incurred on the old enrollment aren't transferred to the new enrollment. Consumption-based marketplace charges are transferred.
### Effective transfer date
Other points to keep in mind before an enrollment transfer:
- Approval from both target and source enrollment EA Administrators is required. - If an enrollment transfer doesn't meet your requirements, consider an account transfer.-- The source enrollment status will be updated to transferred and will only be available for historic usage reporting purposes.
+- The source enrollment status is updated to `Transferred` and is available for historic usage reporting purposes only.
- There's no downtime during an enrollment transfer. - Usage may take up to 24 - 48 hours to be reflected in the target enrollment. - Cost view settings for department administrators or account owners don't carry over. - If previously enabled, settings must be enabled for the target enrollment. - Any API keys used in the source enrollment must be regenerated for the target enrollment.-- If the source and destination enrollments are on different cloud instances, the transfer will fail. Support personnel can transfer only within the same cloud instance.
+- If the source and destination enrollments are on different cloud instances, the transfer fails. Support personnel can transfer only within the same cloud instance. Cloud instances are the global Azure cloud and individual national clouds. For more information about national clouds, see [National clouds](../../active-directory/develop/authentication-national-cloud.md).
- For reservations (reserved instances): - The enrollment or account transfer between different currencies affects monthly reservation purchases. The following image illustrates the effects. :::image type="content" source="./media/ea-transfers/cross-currency-reservation-transfer-effects.png" alt-text="Diagram illustrating the effects of cross currency reservation transfers." border="false" lightbox="./media/ea-transfers/cross-currency-reservation-transfer-effects.png":::
- - Whenever there's is a currency change during or after an enrollment transfer, reservations paid for monthly are canceled for the source enrollment at the time of next monthly payment for an individual reservation. This cancellation is intentional and affects only the monthly reservation purchases.
+ - When there is a currency change during or after an enrollment transfer, reservations paid for monthly are canceled for the source enrollment. Cancellation happens at the time of next monthly payment for an individual reservation. This cancellation is intentional and affects only the monthly reservation purchases.
- You may have to repurchase the canceled monthly reservations from the source enrollment using the new enrollment in the local or new currency. If you repurchase a reservation, the purchase term (one or three years) is reset. The repurchase doesn't continue under the previous term.-- In case of backdated enrollment transfer, Savings plan benefit is applicable from the transfer request submission date and not from the effective transfer date.-
+- If there's a backdated enrollment transfer, any savings plan benefit is applicable from the transfer request submission date - not from the effective transfer date.
### Auto enrollment transfer
cost-management-billing Programmatically Create Subscription Enterprise Agreement https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/programmatically-create-subscription-enterprise-agreement.md
Last updated 04/07/2023 -+ # Programmatically create Azure Enterprise Agreement subscriptions with the latest APIs
cost-management-billing Programmatically Create Subscription Microsoft Customer Agreement https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/programmatically-create-subscription-microsoft-customer-agreement.md
Last updated 03/27/2023 -+ # Programmatically create Azure subscriptions for a Microsoft Customer Agreement with the latest APIs
cost-management-billing Programmatically Create Subscription Microsoft Partner Agreement https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/programmatically-create-subscription-microsoft-partner-agreement.md
Last updated 04/05/2023 -+ # Programmatically create Azure subscriptions for a Microsoft Partner Agreement with the latest APIs
cost-management-billing Subscription Transfer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/subscription-transfer.md
tags: billing
Previously updated : 01/24/2023 Last updated : 04/12/2023
Dev/Test products aren't shown in the following table. Transfers for Dev/Test pr
| EA | MPA | • Transfer is only allowed for direct EA to MPA. A direct EA is signed between Microsoft and an EA customer.<br><br>• Only CSP direct bill partners certified as an [Azure Expert Managed Services Provider (MSP)](https://partner.microsoft.com/membership/azure-expert-msp) can request to transfer Azure products for their customers that have a Direct Enterprise Agreement (EA). For more information, see [Get billing ownership of Azure subscriptions to your MPA account](mpa-request-ownership.md). Product transfers are allowed only for customers who have accepted a Microsoft Customer Agreement (MCA) and purchased an Azure plan with the CSP Program.<br><br> • Transfer from EA Government to MPA isn't supported.<br><br>• There are limitations and restrictions. For more information, see [Transfer EA subscriptions to a CSP partner](transfer-subscriptions-subscribers-csp.md#transfer-ea-or-mca-enterprise-subscriptions-to-a-csp-partner). | | MCA - individual | MOSP (PAYG) | • For details, see [Transfer billing ownership of an Azure subscription to another account](billing-subscription-transfer.md).<br><br> • Reservations and savings plans don't automatically transfer and transferring them isn't supported. | | MCA - individual | MCA - individual | • For details, see [Transfer Azure subscription billing ownership for a Microsoft Customer Agreement](mca-request-billing-ownership.md).<br><br> • Self-service reservation and savings plan transfers are supported. |
-| MCA - individual | EA | • For details, see [Transfer a subscription to an EA](mosp-ea-transfer.md#transfer-the-subscription-to-the-ea).<br><br> • Self-service reservation and savings plan transfers are supported. |
+| MCA - individual | EA | • For details, see [Transfer a subscription to an EA](mosp-ea-transfer.md#transfer-the-subscription-to-the-ea).<br><br> • Reservations and savings plans don't automatically transfer and transferring them isn't supported. |
| MCA - individual | MCA - Enterprise | • For details, see [Transfer Azure subscription billing ownership for a Microsoft Customer Agreement](mca-request-billing-ownership.md).<br><br>• Self-service reservation and savings plan transfers are supported. | | MCA - Enterprise | MOSP | • Requires a [billing support ticket](https://azure.microsoft.com/support/create-ticket/).<br><br> • Reservations and savings plans don't automatically transfer and transferring them isn't supported. | | MCA - Enterprise | MCA - individual | • For details, see [Transfer Azure subscription billing ownership for a Microsoft Customer Agreement](mca-request-billing-ownership.md).<br><br> • Self-service reservation and savings plan transfers are supported. |
cost-management-billing Troubleshoot Reservation Transfers Between Tenants https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/reservations/troubleshoot-reservation-transfers-between-tenants.md
-+ Previously updated : 12/06/2022 Last updated : 04/12/2023 # Change an Azure reservation directory between tenants
-This article helps reservation owners change a reservation order's directory from one Azure Active Directory tenant (directory) to another. When you change a reservation order's directory, it removes any Azure RBAC access to the reservation order and dependent reservations. Only you will have access after the change. Changing the directory doesn't change billing ownership for the reservation order. The directory is changed for the parent reservation order and dependent reservations.
+This article helps reservation owners change a reservation order's directory from one Azure Active Directory tenant (directory) to another. When you change a reservation order's directory, it removes any Azure RBAC access to the reservation order and dependent reservations. Only you have access after the change. Changing the directory doesn't change billing ownership for the reservation order. The directory is changed for the parent reservation order and dependent reservations.
A reservation exchange and cancellation isn't needed to change a reservation order's directory.
-After you change the directory of a reservation to another tenant, you might also want to add additional owners to the reservation. For more information, see [Who can manage a reservation by default](view-reservations.md#who-can-manage-a-reservation-by-default).
+After you change the directory of a reservation to another tenant, you might also want to add other owners to the reservation. For more information, see [Who can manage a reservation by default](view-reservations.md#who-can-manage-a-reservation-by-default).
When you change a reservation order's directory, all reservations under the order are transferred with it. ## Change a reservation order's directory
-Use the following steps to change a reservation order's directory and it's dependent reservations to another tenant.
+Use the following steps to change a reservation order's directory and its dependent reservations to another tenant.
1. Sign into the [Azure portal](https://portal.azure.com).
-1. If you're not a billing administrator but you are a reservation owner, navigate to **Reservations** and then skip to step 5.
+1. If you're not a billing administrator but you're a reservation owner, navigate to **Reservations**, and then skip to step 5.
1. Navigate to **Cost Management + Billing**. - If you're an EA admin, in the left menu, select **Billing scopes** and then in the list of billing scopes, select one. - If you're a Microsoft Customer Agreement billing profile owner, in the left menu, select **Billing profiles**. In the list of billing profiles, select one.
Use the following steps to change a reservation order's directory and it's depen
1. In the reservation order, select **Change directory**. 1. In the Change directory pane, select the Azure AD directory that you want to transfer the reservation to and then select **Confirm**.
+## Update reservation scope
+
+After the reservation moves to the new tenant, you can change the reservation target scope to a *shared* or *management group scope*. For more information about changing the scope, see [Change the reservation scope](manage-reserved-vm-instance.md#change-the-reservation-scope). The following example explains how changing the scope might work.
+
+Currently, a reservation covers subscriptions A1 and A2. The reservation scope is set to either a shared scope or a management group scope.
+
+| Initial reservation location | Final reservation location |
+| | |
+| Tenant A | Tenant B |
+| Subscription A-1 | Subscription B-1 |
+| Subscription A-2 | Subscription B-2 |
+
+Assume that the reservation is set to a shared scope. When subscriptions B1 and B2 are under the same billing profile (for MCA) or enrollment (for EA), then B1 and B2 already receive the reservation benefit. Changing the tenant doesn't change the scope. In this situation, you don't have to change the scope after you change the reservation tenant.
+
+Assume that the reservation is set to a management group scope. After you change the reservation tenant, you need to change the reservation's scope to a management group scope that targets subscriptions B1 and B2.
+ ## Next steps - For more information about reservations, see [What are Azure Reservations?](save-compute-costs-reservations.md).
data-factory Azure Ssis Integration Runtime Standard Virtual Network Injection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/azure-ssis-integration-runtime-standard-virtual-network-injection.md
description: Learn how to configure a virtual network for standard injection of
Previously updated : 02/15/2022 Last updated : 04/12/2023
data-factory Azure Ssis Integration Runtime Virtual Network Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/azure-ssis-integration-runtime-virtual-network-configuration.md
description: Learn how to configure a virtual network for injection of Azure-SSI
Previously updated : 02/15/2022 Last updated : 04/12/2023
data-factory Built In Preinstalled Components Ssis Integration Runtime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/built-in-preinstalled-components-ssis-integration-runtime.md
Previously updated : 02/15/2022 Last updated : 04/12/2023 # Built-in and preinstalled components on Azure-SSIS Integration Runtime
data-factory Compare Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/compare-versions.md
Previously updated : 01/31/2022 Last updated : 04/12/2023 # Compare Azure Data Factory with Data Factory version 1
data-factory Concepts Integration Runtime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/concepts-integration-runtime.md
Previously updated : 02/15/2022 Last updated : 04/12/2023 # Integration runtime in Azure Data Factory
data-factory Configure Azure Ssis Integration Runtime Performance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/configure-azure-ssis-integration-runtime-performance.md
Title: Configure performance for the Azure-SSIS Integration Runtime description: Learn how to configure the properties of the Azure-SSIS Integration Runtime for high performance Previously updated : 02/15/2022 Last updated : 04/12/2023
data-factory Configure Bcdr Azure Ssis Integration Runtime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/configure-bcdr-azure-ssis-integration-runtime.md
Previously updated : 02/15/2022 Last updated : 04/12/2023 # Configure Azure-SSIS integration runtime for business continuity and disaster recovery (BCDR)
data-factory Connector Quickbase https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-quickbase.md
Previously updated : 02/28/2022 Last updated : 04/12/2023 # Transform data in Quickbase (Preview) using Azure Data Factory or Synapse Analytics
data-factory Connector Sftp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-sftp.md
Previously updated : 03/25/2022 Last updated : 04/12/2023 # Copy and transform data in SFTP server using Azure Data Factory or Azure Synapse Analytics
data-factory Connector Smartsheet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-smartsheet.md
Previously updated : 02/28/2022 Last updated : 04/12/2023 # Transform data in Smartsheet (Preview) using Azure Data Factory or Synapse Analytics
data-factory Connector Teamdesk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-teamdesk.md
Previously updated : 02/25/2022 Last updated : 04/12/2023 # Transform data in TeamDesk (Preview) using Azure Data Factory or Synapse Analytics
data-factory Connector Troubleshoot Google Adwords https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-troubleshoot-google-adwords.md
Previously updated : 02/23/2022 Last updated : 04/12/2023
data-factory Connector Zendesk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-zendesk.md
Previously updated : 02/28/2022 Last updated : 04/12/2023 # Transform data in Zendesk (Preview) using Azure Data Factory or Synapse Analytics
data-factory Copy Data Tool Metadata Driven https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/copy-data-tool-metadata-driven.md
Previously updated : 02/25/2022 Last updated : 04/12/2023 # Build large-scale data copy pipelines with metadata-driven approach in copy data tool
data-factory Create Azure Ssis Integration Runtime Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/create-azure-ssis-integration-runtime-portal.md
description: Learn how to create an Azure-SSIS integration runtime in Azure Data
Previously updated : 02/15/2022 Last updated : 04/12/2023
data-factory Create Azure Ssis Integration Runtime Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/create-azure-ssis-integration-runtime-powershell.md
description: Learn how to create an Azure-SSIS integration runtime in Azure Data
Previously updated : 02/15/2022 Last updated : 04/12/2023
data-factory Create Azure Ssis Integration Runtime Resource Manager Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/create-azure-ssis-integration-runtime-resource-manager-template.md
Title: Use an Azure Resource Manager template to create an integration runtime
description: Learn how to use an Azure Resource Manager template to create an Azure-SSIS integration runtime in Azure Data Factory so you can deploy and run SSIS packages in Azure. + Previously updated : 02/15/2022 Last updated : 04/12/2023
data-factory Create Azure Ssis Integration Runtime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/create-azure-ssis-integration-runtime.md
description: Learn how to create an Azure-SSIS integration runtime in Azure Data
Previously updated : 02/15/2022 Last updated : 04/12/2023
data-factory Cross Tenant Connections To Azure Devops https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/cross-tenant-connections-to-azure-devops.md
Previously updated : 02/24/2022 Last updated : 04/12/2023 # Cross-tenant connections to Azure DevOps
data-factory How To Create Schedule Trigger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-create-schedule-trigger.md
Last updated 08/09/2022-+ # Create a trigger that runs a pipeline on a schedule
data-factory How To Schedule Azure Ssis Integration Runtime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-schedule-azure-ssis-integration-runtime.md
ms.devlang: powershell Previously updated : 02/15/2022 Last updated : 04/12/2023
data-factory How To Use Trigger Parameterization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-use-trigger-parameterization.md
This section shows you how to pass meta data information from trigger to pipelin
1. Create or attach a trigger to the pipeline, and select **OK**
-1. In the following page, fill in trigger meta data for each parameter. Use format defined in [System Variable](control-flow-system-variables.md) to retrieve trigger information. You don't need to fill in the information for all parameters, just the ones that will assume trigger metadata values. For instance, here we assign trigger run start time to *parameter_1*.
+1. After selecting **OK**, another **New trigger** page is presented with a list of the parameters specified for the pipeline, as shown in the following screenshot. On that page, fill in the trigger metadata for each parameter. Use the format defined in [System Variable](control-flow-system-variables.md) to retrieve trigger information. You don't need to fill in the information for all parameters, just the ones that will assume trigger metadata values. For instance, here we assign the trigger run start time to *parameter_1*.
:::image type="content" source="media/how-to-use-trigger-parameterization/02-pass-in-system-variable.png" alt-text="Screenshot of trigger definition page showing how to pass trigger information to pipeline parameters.":::
data-factory Join Azure Ssis Integration Runtime Virtual Network Ui https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/join-azure-ssis-integration-runtime-virtual-network-ui.md
Title: Join Azure-SSIS integration runtime to a virtual network via Azure portal
description: Learn how to join Azure-SSIS integration runtime to a virtual network via Azure portal. + Last updated 08/12/2022
data-factory Monitor Metrics Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/monitor-metrics-alerts.md
Sign in to the Azure portal, and select **Monitor** > **Alerts** to create alert
1. Select **+ New Alert Rule** to create a new alert.
- :::image type="content" source="media/monitor-using-azure-monitor/alerts_image4.png" alt-text="Screenshot that shows creating a new alert rule.":::
+ :::image type="content" source="media/monitor-using-azure-monitor/alerts_image4.png" lightbox="media/monitor-using-azure-monitor/alerts_image4.png" alt-text="Screenshot that shows creating a new alert rule.":::
1. Define the alert condition.
Sign in to the Azure portal, and select **Monitor** > **Alerts** to create alert
:::image type="content" source="media/monitor-using-azure-monitor/alerts_image5.png" alt-text="Screenshot that shows the selections for opening the pane for choosing a resource.":::
- :::image type="content" source="media/monitor-using-azure-monitor/alerts_image6.png" alt-text="Screenshot that shows the selections for opening the pane for configuring signal logic.":::
+ :::image type="content" source="media/monitor-using-azure-monitor/alerts_image6.png" lightbox="media/monitor-using-azure-monitor/alerts_image6.png" alt-text="Screenshot that shows the selections for opening the pane for configuring signal logic.":::
- :::image type="content" source="media/monitor-using-azure-monitor/alerts_image7.png" alt-text="Screenshot that shows configuring the signal logic.":::
+ :::image type="content" source="media/monitor-using-azure-monitor/alerts_image7.png" lightbox="media/monitor-using-azure-monitor/alerts_image7.png" alt-text="Screenshot that shows configuring the signal logic.":::
1. Define the alert details.
- :::image type="content" source="media/monitor-using-azure-monitor/alerts_image8.png" alt-text="Screenshot that shows alert details.":::
+ :::image type="content" source="media/monitor-using-azure-monitor/alerts_image8.png" lightbox="media/monitor-using-azure-monitor/alerts_image8.png" alt-text="Screenshot that shows alert details.":::
1. Define the action group.
Sign in to the Azure portal, and select **Monitor** > **Alerts** to create alert
:::image type="content" source="media/monitor-using-azure-monitor/alerts_image11.png" alt-text="Screenshot that shows configuring email, SMS, push, and voice.":::
- :::image type="content" source="media/monitor-using-azure-monitor/alerts_image12.png" alt-text="Screenshot that shows defining an action group.":::
+ :::image type="content" source="media/monitor-using-azure-monitor/alerts_image12.png" lightbox="media/monitor-using-azure-monitor/alerts_image12.png" alt-text="Screenshot that shows defining an action group.":::
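A hedged CLI equivalent of these portal steps, assuming the `PipelineFailedRuns` metric and an existing action group; substitute your own resource IDs:

```azurecli
# Create a metric alert on a data factory (metric name and IDs are assumptions).
az monitor metrics alert create \
    --name "adf-failed-pipeline-runs" \
    --resource-group "<RESOURCE_GROUP>" \
    --scopes "/subscriptions/<SUBSCRIPTION_ID>/resourceGroups/<RESOURCE_GROUP>/providers/Microsoft.DataFactory/factories/<FACTORY_NAME>" \
    --condition "total PipelineFailedRuns >= 1" \
    --window-size 5m \
    --evaluation-frequency 1m \
    --action "<ACTION_GROUP_RESOURCE_ID>" \
    --description "Alert when any pipeline run fails"
```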
## Next steps
data-factory Monitor Shir In Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/monitor-shir-in-azure.md
Previously updated : 02/22/2022 Last updated : 04/12/2023
data-factory Quickstart Create Data Factory Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/quickstart-create-data-factory-bicep.md
tags: azure-resource-manager
-+ Last updated 08/19/2022
data-factory Quickstart Create Data Factory Resource Manager Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/quickstart-create-data-factory-resource-manager-template.md
-+ Last updated 10/25/2022
data-factory Samples Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/samples-powershell.md
Title: Azure PowerShell Samples for Azure Data Factory
description: Azure PowerShell Samples - Scripts to help you create and manage data factories. +
The following table includes links to sample Azure PowerShell scripts for Azure
|[Transform data using a Spark cluster](scripts/transform-data-spark-powershell.md?toc=%2fpowershell%2fmodule%2ftoc.json)| This PowerShell script transforms data by running a program on a Spark cluster. | |**Lift and shift SSIS packages to Azure**|| |[Create Azure-SSIS integration runtime](scripts/deploy-azure-ssis-integration-runtime-powershell.md?toc=%2fpowershell%2fmodule%2ftoc.json)| This PowerShell script provisions an Azure-SSIS integration runtime that runs SQL Server Integration Services (SSIS) packages in Azure. |---
data-factory Security And Access Control Troubleshoot Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/security-and-access-control-troubleshoot-guide.md
Previously updated : 11/04/2022 Last updated : 04/11/2023
You created managed private endpoint from ADF and obtained an approved private e
#### Cause
-Currently, ADF stops pulling private end point status after the it is approved. Hence the status shown in ADF is stale.
+Currently, ADF stops pulling the private endpoint status after it's approved, so the status shown in ADF is stale.
##### Resolution
Try to enable public network access on the user interface, as shown in the follo
#### Cause Both Azure Resource Manager and the service are using the same private zone, creating a potential conflict in the customer's private DNS where the Azure Resource Manager records won't be found.
-#### Solution
+#### Resolution
1. Find Private DNS zones **privatelink.azure.com** in Azure portal. :::image type="content" source="media/security-access-control-troubleshoot-guide/private-dns-zones.png" alt-text="Screenshot of finding Private DNS zones."::: 2. Check if there is an A record **adf**.
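If you'd rather script the check in step 2, a minimal sketch with the Az.PrivateDns module follows; the resource group name is an assumption, because the zone's resource group isn't named here.

```powershell
# Sketch only: assumes the Az.PrivateDns module and that the privatelink.azure.com zone
# lives in a hypothetical resource group named 'dns-rg'.
Get-AzPrivateDnsRecordSet -ResourceGroupName 'dns-rg' `
    -ZoneName 'privatelink.azure.com' -Name 'adf' -RecordType A
```

If the command returns a record set, the **adf** A record exists in that zone.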
For example: The Azure Blob Storage sink was using Azure IR (public, not Managed
The service may still use the Managed VNet IR, but you could encounter such an error because the public endpoint to Azure Blob Storage in the Managed VNet isn't reliable based on the testing result, and Azure Blob Storage and Azure Data Lake Gen2 aren't supported for connection through a public endpoint from the service's Managed Virtual Network according to [Managed virtual network & managed private endpoints](./managed-virtual-network-private-endpoint.md#outbound-communications-through-public-endpoint-from-a-data-factory-managed-virtual-network).
-#### Solution
+#### Resolution
- Having private endpoint enabled on the source and also the sink side when using the Managed VNet IR. - If you still want to use the public endpoint, you can switch to public IR only instead of using the Managed VNet IR for the source and the sink. Even if you switch back to public IR, the service may still use the Managed VNet IR if the Managed VNet IR is still there.
The service may still use Managed VNet IR, but you could encounter such error be
If you are performing any operations related to CMK, you should complete all operations related to the service first, and then external operations (like Managed Identities or Key Vault operations). For example, if you want to delete all resources, you need to delete the service instance first, and then delete the key vault. If you delete the key vault first, this error will occur since the service can't read the required objects anymore, and it won't be able to validate if deletion is possible or not.
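To make the ordering concrete, here's a minimal PowerShell sketch of the delete-everything case described above; the module choice and all resource names are assumptions, not part of the original guidance.

```powershell
# Sketch only: assumes the Az.DataFactory and Az.KeyVault modules and hypothetical names.
# Delete the service instance first...
Remove-AzDataFactoryV2 -ResourceGroupName 'adf-rg' -Name 'contoso-adf' -Force

# ...and only then the key vault that holds the customer-managed key.
Remove-AzKeyVault -VaultName 'contoso-cmk-kv' -ResourceGroupName 'adf-rg' -Force
```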
-#### Solution
+#### Resolution
There are three possible ways to solve the issue. They are as follows:
data-factory Solution Template Migration S3 Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/solution-template-migration-s3-azure.md
Previously updated : 01/31/2022 Last updated : 04/12/2023 # Migrate data from Amazon S3 to Azure Data Lake Storage Gen2
data-factory Ssis Azure Files File Shares https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/ssis-azure-files-file-shares.md
Title: Open and save files with SSIS packages deployed in Azure description: Learn how to open and save files on premises and in Azure when you lift and shift SSIS packages that use local file systems into SSIS in Azure Previously updated : 02/15/2022 Last updated : 04/12/2023
data-factory Tutorial Incremental Copy Change Tracking Feature Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-incremental-copy-change-tracking-feature-portal.md
Previously updated : 07/05/2021 Last updated : 04/12/2023 # Incrementally copy data from Azure SQL Database to Blob Storage by using change tracking in the Azure portal
data-factory Tutorial Pipeline Return Value https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-pipeline-return-value.md
Previously updated : 2/12/2022 Last updated : 04/12/2023
data-factory Update Machine Learning Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/update-machine-learning-models.md
-+ Last updated 09/26/2022
data-factory Data Factory Amazon Redshift Connector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/v1/data-factory-amazon-redshift-connector.md
Previously updated : 10/22/2021 Last updated : 04/12/2023
data-factory Data Factory Amazon Simple Storage Service Connector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/v1/data-factory-amazon-simple-storage-service-connector.md
Previously updated : 10/22/2021 Last updated : 04/12/2023
data-factory Data Factory Api Change Log https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/v1/data-factory-api-change-log.md
Previously updated : 10/22/2021 Last updated : 04/12/2023 # Azure Data Factory - .NET API change log
data-factory Data Factory Azure Blob Connector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/v1/data-factory-azure-blob-connector.md
Previously updated : 10/22/2021 Last updated : 04/12/2023
data-factory Data Factory Azure Copy Wizard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/v1/data-factory-azure-copy-wizard.md
Previously updated : 10/22/2021 Last updated : 04/12/2023
data-factory Data Factory Azure Datalake Connector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/v1/data-factory-azure-datalake-connector.md
Previously updated : 10/22/2021 Last updated : 04/12/2023
data-factory Data Factory Azure Documentdb Connector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/v1/data-factory-azure-documentdb-connector.md
Previously updated : 10/22/2021 Last updated : 04/12/2023 # Move data to and from Azure Cosmos DB using Azure Data Factory
data-factory Data Factory Azure Ml Batch Execution Activity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/v1/data-factory-azure-ml-batch-execution-activity.md
Previously updated : 10/22/2021 Last updated : 04/12/2023 # Create predictive pipelines using Machine Learning Studio (classic) and Azure Data Factory
data-factory Data Factory Azure Ml Update Resource Activity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/v1/data-factory-azure-ml-update-resource-activity.md
+ Previously updated : 10/22/2021 Last updated : 04/12/2023 # Updating ML Studio (classic) models using Update Resource Activity
The pipeline has two activities: **AzureMLBatchExecution** and **AzureMLUpdateRe
"end": "2016-02-14T00:00:00Z" } }
-```
+```
data-factory Data Factory Azure Search Connector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/v1/data-factory-azure-search-connector.md
Previously updated : 10/22/2021 Last updated : 04/12/2023
data-factory Data Factory Azure Sql Connector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/v1/data-factory-azure-sql-connector.md
Previously updated : 10/22/2021 Last updated : 04/12/2023
data-factory Data Factory Azure Sql Data Warehouse Connector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/v1/data-factory-azure-sql-data-warehouse-connector.md
Previously updated : 10/22/2021 Last updated : 04/12/2023
data-factory Data Factory Azure Table Connector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/v1/data-factory-azure-table-connector.md
Previously updated : 10/22/2021 Last updated : 04/12/2023
data-factory Data Factory Build Your First Pipeline Using Arm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/v1/data-factory-build-your-first-pipeline-using-arm.md
+ Previously updated : 10/22/2021 Last updated : 04/12/2023 # Tutorial: Build your first Azure data factory using Azure Resource Manager template
data-factory Data Factory Build Your First Pipeline Using Editor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/v1/data-factory-build-your-first-pipeline-using-editor.md
Previously updated : 10/22/2021 Last updated : 04/12/2023 # Tutorial: Build your first data factory by using the Azure portal
data-factory Data Factory Build Your First Pipeline Using Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/v1/data-factory-build-your-first-pipeline-using-powershell.md
Previously updated : 04/18/2022 Last updated : 04/12/2023 # Tutorial: Build your first Azure data factory using Azure PowerShell
data-factory Data Factory Build Your First Pipeline Using Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/v1/data-factory-build-your-first-pipeline-using-rest-api.md
Previously updated : 10/22/2021 Last updated : 04/12/2023 # Tutorial: Build your first Azure data factory using Data Factory REST API
data-factory Data Factory Build Your First Pipeline Using Vs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/v1/data-factory-build-your-first-pipeline-using-vs.md
Previously updated : 10/22/2021 Last updated : 04/12/2023 # Tutorial: Create a data factory by using Visual Studio
data-factory Data Factory Build Your First Pipeline https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/v1/data-factory-build-your-first-pipeline.md
Previously updated : 10/22/2021 Last updated : 04/12/2023 # Tutorial: Build your first pipeline to transform data using Hadoop cluster
data-factory Data Factory Compute Linked Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/v1/data-factory-compute-linked-services.md
Previously updated : 10/22/2021 Last updated : 04/12/2023 # Compute environments supported by Azure Data Factory version 1
data-factory Data Factory Copy Activity Fault Tolerance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/v1/data-factory-copy-activity-fault-tolerance.md
Previously updated : 10/22/2021 Last updated : 04/12/2023
data-factory Data Factory Copy Activity Performance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/v1/data-factory-copy-activity-performance.md
Previously updated : 10/22/2021 Last updated : 04/12/2023
data-factory Data Factory Copy Activity Tutorial Using Azure Resource Manager Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/v1/data-factory-copy-activity-tutorial-using-azure-resource-manager-template.md
description: In this tutorial, you create an Azure Data Factory pipeline by usin
+ Previously updated : 10/22/2021 Last updated : 04/12/2023
data-factory Data Factory Copy Activity Tutorial Using Dotnet Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/v1/data-factory-copy-activity-tutorial-using-dotnet-api.md
Previously updated : 10/22/2021 Last updated : 04/12/2023
data-factory Data Factory Copy Activity Tutorial Using Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/v1/data-factory-copy-activity-tutorial-using-powershell.md
Previously updated : 10/22/2021 Last updated : 04/12/2023
data-factory Data Factory Copy Activity Tutorial Using Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/v1/data-factory-copy-activity-tutorial-using-rest-api.md
Previously updated : 10/22/2021 Last updated : 04/12/2023
data-factory Data Factory Copy Activity Tutorial Using Visual Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/v1/data-factory-copy-activity-tutorial-using-visual-studio.md
Previously updated : 10/22/2021 Last updated : 04/12/2023
data-factory Data Factory Copy Data From Azure Blob Storage To Sql Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/v1/data-factory-copy-data-from-azure-blob-storage-to-sql-database.md
Previously updated : 10/22/2021 Last updated : 04/12/2023
data-factory Data Factory Copy Data Wizard Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/v1/data-factory-copy-data-wizard-tutorial.md
Previously updated : 10/22/2021 Last updated : 04/12/2023
data-factory Data Factory Copy Wizard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/v1/data-factory-copy-wizard.md
Previously updated : 10/22/2021 Last updated : 04/12/2023
data-factory Data Factory Create Data Factories Programmatically https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/v1/data-factory-create-data-factories-programmatically.md
Previously updated : 10/22/2021 Last updated : 04/12/2023
data-factory Data Factory Create Datasets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/v1/data-factory-create-datasets.md
Previously updated : 10/22/2021 Last updated : 04/12/2023 # Datasets in Azure Data Factory (version 1)
data-factory Data Factory Create Pipelines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/v1/data-factory-create-pipelines.md
Previously updated : 10/22/2021 Last updated : 04/12/2023
data-factory Data Factory Customer Case Studies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/v1/data-factory-customer-case-studies.md
Previously updated : 10/22/2021 Last updated : 04/12/2023 # Azure Data Factory - Customer case studies
data-factory Data Factory Customer Profiling Usecase https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/v1/data-factory-customer-profiling-usecase.md
Previously updated : 10/22/2021 Last updated : 04/12/2023 # Use Case - Customer Profiling
data-factory Data Factory Data Management Gateway High Availability Scalability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/v1/data-factory-data-management-gateway-high-availability-scalability.md
Previously updated : 10/22/2021 Last updated : 04/12/2023
data-factory Data Factory Data Management Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/v1/data-factory-data-management-gateway.md
Previously updated : 10/22/2021 Last updated : 04/12/2023
data-factory Data Factory Data Movement Activities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/v1/data-factory-data-movement-activities.md
Previously updated : 10/22/2021 Last updated : 04/12/2023
data-factory Data Factory Data Movement Security Considerations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/v1/data-factory-data-movement-security-considerations.md
Previously updated : 10/22/2021 Last updated : 04/12/2023
data-factory Data Factory Data Processing Using Batch https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/v1/data-factory-data-processing-using-batch.md
Previously updated : 10/22/2021 Last updated : 04/12/2023
data-factory Data Factory Data Transformation Activities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/v1/data-factory-data-transformation-activities.md
Previously updated : 10/22/2021 Last updated : 04/12/2023 # Transform data in Azure Data Factory version 1
data-factory Data Factory Ftp Connector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/v1/data-factory-ftp-connector.md
Previously updated : 10/22/2021 Last updated : 04/12/2023
data-factory Data Factory Functions Variables https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/v1/data-factory-functions-variables.md
Previously updated : 10/22/2021 Last updated : 04/12/2023 # Azure Data Factory - Functions and System Variables
data-factory Data Factory Gateway Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/v1/data-factory-gateway-release-notes.md
Previously updated : 10/22/2021 Last updated : 04/12/2023
data-factory Data Factory Hadoop Streaming Activity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/v1/data-factory-hadoop-streaming-activity.md
Previously updated : 10/22/2021 Last updated : 04/12/2023 # Transform data using Hadoop Streaming Activity in Azure Data Factory
data-factory Data Factory Hdfs Connector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/v1/data-factory-hdfs-connector.md
Previously updated : 10/22/2021 Last updated : 04/12/2023
data-factory Data Factory Hive Activity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/v1/data-factory-hive-activity.md
Previously updated : 10/22/2021 Last updated : 04/12/2023 # Transform data using Hive Activity in Azure Data Factory
data-factory Data Factory How To Use Resource Manager Templates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/v1/data-factory-how-to-use-resource-manager-templates.md
+ Previously updated : 10/22/2021 Last updated : 04/12/2023 # Use templates to create Azure Data Factory entities
data-factory Data Factory Http Connector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/v1/data-factory-http-connector.md
Previously updated : 10/22/2021 Last updated : 04/12/2023 # Move data from an HTTP source by using Azure Data Factory
data-factory Data Factory Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/v1/data-factory-introduction.md
Previously updated : 10/22/2021 Last updated : 04/12/2023 # Introduction to Azure Data Factory V1
data-factory Data Factory Invoke Stored Procedure From Copy Activity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/v1/data-factory-invoke-stored-procedure-from-copy-activity.md
Previously updated : 10/22/2021 Last updated : 04/12/2023
data-factory Data Factory Json Scripting Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/v1/data-factory-json-scripting-reference.md
Previously updated : 10/22/2021 Last updated : 04/12/2023
data-factory Data Factory Load Sql Data Warehouse https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/v1/data-factory-load-sql-data-warehouse.md
Previously updated : 10/22/2021 Last updated : 04/12/2023
data-factory Data Factory Map Columns https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/v1/data-factory-map-columns.md
Previously updated : 10/22/2021 Last updated : 04/12/2023
data-factory Data Factory Map Reduce https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/v1/data-factory-map-reduce.md
Previously updated : 10/22/2021 Last updated : 04/12/2023 # Invoke MapReduce Programs from Data Factory
data-factory Data Factory Monitor Manage App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/v1/data-factory-monitor-manage-app.md
Previously updated : 10/22/2021 Last updated : 04/12/2023 # Monitor and manage Azure Data Factory pipelines by using the Monitoring and Management app
data-factory Data Factory Monitor Manage Pipelines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/v1/data-factory-monitor-manage-pipelines.md
Previously updated : 10/22/2021 Last updated : 04/12/2023
data-factory Data Factory Move Data Between Onprem And Cloud https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/v1/data-factory-move-data-between-onprem-and-cloud.md
Previously updated : 10/22/2021 Last updated : 04/12/2023 # Move data between on-premises sources and the cloud with Data Management Gateway
data-factory Data Factory Naming Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/v1/data-factory-naming-rules.md
Previously updated : 10/22/2021 Last updated : 04/12/2023 # Rules for naming Azure Data Factory entities
data-factory Data Factory Odata Connector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/v1/data-factory-odata-connector.md
Previously updated : 10/22/2021 Last updated : 04/12/2023
data-factory Data Factory Odbc Connector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/v1/data-factory-odbc-connector.md
Previously updated : 10/22/2021 Last updated : 04/12/2023
data-factory Data Factory On Premises Mongodb Connector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/v1/data-factory-on-premises-mongodb-connector.md
Previously updated : 10/22/2021 Last updated : 04/12/2023 # Move data From MongoDB using Azure Data Factory
data-factory Data Factory Onprem Cassandra Connector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/v1/data-factory-onprem-cassandra-connector.md
Previously updated : 10/22/2021 Last updated : 04/12/2023
data-factory Data Factory Onprem Db2 Connector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/v1/data-factory-onprem-db2-connector.md
Previously updated : 10/22/2021 Last updated : 04/12/2023
data-factory Data Factory Onprem File System Connector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/v1/data-factory-onprem-file-system-connector.md
Previously updated : 10/22/2021 Last updated : 04/12/2023
data-factory Data Factory Onprem Mysql Connector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/v1/data-factory-onprem-mysql-connector.md
Previously updated : 10/22/2021 Last updated : 04/12/2023
data-factory Data Factory Onprem Oracle Connector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/v1/data-factory-onprem-oracle-connector.md
Previously updated : 10/22/2021 Last updated : 04/12/2023
data-factory Data Factory Onprem Postgresql Connector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/v1/data-factory-onprem-postgresql-connector.md
Previously updated : 10/22/2021 Last updated : 04/12/2023 # Move data from PostgreSQL using Azure Data Factory
data-factory Data Factory Onprem Sybase Connector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/v1/data-factory-onprem-sybase-connector.md
Previously updated : 10/22/2021 Last updated : 04/12/2023 # Move data from Sybase using Azure Data Factory
data-factory Data Factory Onprem Teradata Connector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/v1/data-factory-onprem-teradata-connector.md
Previously updated : 10/22/2021 Last updated : 04/12/2023
data-factory Data Factory Pig Activity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/v1/data-factory-pig-activity.md
Previously updated : 10/22/2021 Last updated : 04/12/2023 # Transform data using Pig Activity in Azure Data Factory
data-factory Data Factory Product Reco Usecase https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/v1/data-factory-product-reco-usecase.md
Previously updated : 10/22/2021 Last updated : 04/12/2023 # Use Case - Product Recommendations
data-factory Data Factory Repeatable Copy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/v1/data-factory-repeatable-copy.md
Previously updated : 10/22/2021 Last updated : 04/12/2023
data-factory Data Factory Salesforce Connector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/v1/data-factory-salesforce-connector.md
Previously updated : 10/22/2021 Last updated : 04/12/2023
data-factory Data Factory Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/v1/data-factory-samples.md
+ Previously updated : 10/22/2021 Last updated : 04/12/2023 # Azure Data Factory - Samples
data-factory Data Factory Sap Business Warehouse Connector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/v1/data-factory-sap-business-warehouse-connector.md
Previously updated : 10/22/2021 Last updated : 04/12/2023 # Move data From SAP Business Warehouse using Azure Data Factory
data-factory Data Factory Sap Hana Connector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/v1/data-factory-sap-hana-connector.md
Previously updated : 10/22/2021 Last updated : 04/12/2023 # Move data From SAP HANA using Azure Data Factory
data-factory Data Factory Scheduling And Execution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/v1/data-factory-scheduling-and-execution.md
Previously updated : 10/22/2021 Last updated : 04/12/2023 # Data Factory scheduling and execution
data-factory Data Factory Sftp Connector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/v1/data-factory-sftp-connector.md
Previously updated : 10/22/2021 Last updated : 04/12/2023 # Move data from an SFTP server using Azure Data Factory
data-factory Data Factory Spark https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/v1/data-factory-spark.md
Previously updated : 10/22/2021 Last updated : 04/12/2023 # Invoke Spark programs from Azure Data Factory pipelines
data-factory Data Factory Sqlserver Connector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/v1/data-factory-sqlserver-connector.md
Previously updated : 10/22/2021 Last updated : 04/12/2023 # Move data to and from SQL Server using Azure Data Factory
data-factory Data Factory Stored Proc Activity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/v1/data-factory-stored-proc-activity.md
description: Learn how you can use the SQL Server Stored Procedure Activity to i
Previously updated : 10/22/2021 Last updated : 04/12/2023
data-factory Data Factory Supported File And Compression Formats https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/v1/data-factory-supported-file-and-compression-formats.md
Previously updated : 10/22/2021 Last updated : 04/12/2023
data-factory Data Factory Troubleshoot Gateway Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/v1/data-factory-troubleshoot-gateway-issues.md
Previously updated : 10/22/2021 Last updated : 04/12/2023
data-factory Data Factory Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/v1/data-factory-troubleshoot.md
description: Learn how to troubleshoot issues with using Azure Data Factory.
Previously updated : 10/22/2021 Last updated : 04/12/2023
data-factory Data Factory Use Custom Activities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/v1/data-factory-use-custom-activities.md
description: Learn how to create custom activities and use them in an Azure Data
Previously updated : 10/22/2021 Last updated : 04/12/2023
data-factory Data Factory Usql Activity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/v1/data-factory-usql-activity.md
description: Learn how to process or transform data by running U-SQL scripts on
Previously updated : 10/22/2021 Last updated : 04/12/2023
data-factory Data Factory Web Table Connector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/v1/data-factory-web-table-connector.md
Previously updated : 10/22/2021 Last updated : 04/12/2023
data-factory How To Invoke Ssis Package Stored Procedure Activity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/v1/how-to-invoke-ssis-package-stored-procedure-activity.md
ms.devlang: powershell Previously updated : 10/22/2021 Last updated : 04/12/2023 # Invoke an SSIS package using stored procedure activity in Azure Data Factory
data-lake-analytics Data Lake Analytics Add Users https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-analytics/data-lake-analytics-add-users.md
Title: Add users to an Azure Data Lake Analytics account description: Learn how to correctly add users to your Data Lake Analytics account using the Add User Wizard and Azure PowerShell. + Last updated 01/20/2023
data-lake-store Data Lake Store Get Started Cli 2.0 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-store/data-lake-store-get-started-cli-2.0.md
description: Use the Azure CLI to create a Data Lake Storage Gen1 account and pe
+ Last updated 06/27/2018 - # Get started with Azure Data Lake Storage Gen1 using the Azure CLI
When prompted, enter **Y** to delete the account.
* [Use Azure Data Lake Storage Gen1 for big data requirements](data-lake-store-data-scenarios.md) * [Secure data in Data Lake Storage Gen1](data-lake-store-secure-data.md) * [Use Azure Data Lake Analytics with Data Lake Storage Gen1](../data-lake-analytics/data-lake-analytics-get-started-portal.md)
-* [Use Azure HDInsight with Data Lake Storage Gen1](data-lake-store-hdinsight-hadoop-use-portal.md)
+* [Use Azure HDInsight with Data Lake Storage Gen1](data-lake-store-hdinsight-hadoop-use-portal.md)
data-lake-store Data Lake Store Hdinsight Hadoop Use Resource Manager Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-store/data-lake-store-hdinsight-hadoop-use-resource-manager-template.md
description: Use Azure Resource Manager templates to create and use Azure HDInsi
+ Last updated 05/29/2018
data-lake-store Data Lake Store Performance Tuning Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-store/data-lake-store-performance-tuning-powershell.md
description: Tips on how to improve performance when using Azure PowerShell with
+ Last updated 01/09/2018 - # Performance tuning guidance for using PowerShell with Azure Data Lake Storage Gen1
You can continue to tune these settings by changing the **PerFileThreadCount** u
* [Secure data in Data Lake Storage Gen1](data-lake-store-secure-data.md) * [Use Azure Data Lake Analytics with Data Lake Storage Gen1](../data-lake-analytics/data-lake-analytics-get-started-portal.md) * [Use Azure HDInsight with Data Lake Storage Gen1](data-lake-store-hdinsight-hadoop-use-portal.md)-
data-manager-for-agri How To Set Up Audit Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-manager-for-agri/how-to-set-up-audit-logs.md
The following table lists the **operationName** values and corresponding REST AP
|Microsoft.AgFoodPlatform/deletionJobs/seasonalFieldsCascadeDeletionJobs/processed ### ApplicationAuditLogs
-The operation names corresponding to write operations in other categories are present in this category. These common logs don't contain the request body. These common logs can be correlated using the correlationId field. Some of the control plane operations that aren't part of the rest of the categories are listed below.
+The write and delete logs present in other categories also appear in this category. The difference for the same API call is that ApplicationAuditLogs doesn't log the request body, while the other categories do. Use the correlation ID to relate logs across categories and get more details. Some of the control plane operations that aren't part of the rest of the categories are listed below.
|operationName| | |
data-manager-for-agri Overview Azure Data Manager For Agriculture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-manager-for-agri/overview-azure-data-manager-for-agriculture.md
# What is Azure Data Manager for Agriculture Preview? > [!NOTE]
-> Microsoft Azure Data Manager for Agriculture is currently in preview. For legal terms that apply to features that are in beta, in preview, or otherwise not yet released into general availability, see [**Supplemental Terms of Use for Microsoft Azure Previews**](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+> Microsoft Azure Data Manager for Agriculture is currently in preview. For legal terms that apply to features that are in beta, in preview, or otherwise not yet released into general availability, see [**Supplemental Terms of Use for Microsoft Azure Previews**](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). See Azure Data Manager for Agriculture specific terms of use [**here**](supplemental-terms-azure-data-manager-for-agriculture.md).
> > Microsoft Azure Data Manager for Agriculture requires registration and is available to only approved customers and partners during the preview period. To request access to Microsoft Data Manager for Agriculture during the preview period, use this [**form**](https://aka.ms/agridatamanager).
data-manager-for-agri Quickstart Install Data Manager For Agriculture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-manager-for-agri/quickstart-install-data-manager-for-agriculture.md
Use this document to get started with the steps to install Data Manager for Agriculture. Make sure that your Azure subscription ID is in our allowlist. Microsoft Azure Data Manager for Agriculture requires registration and is available to only approved customers and partners during the preview period. To request access to Azure Data Manager for Agriculture during the preview period, use this [**form**](https://aka.ms/agridatamanager). > [!NOTE]
-> Microsoft Azure Data Manager for Agriculture is currently in preview. For legal terms that apply to features that are in beta, in preview, or otherwise not yet released into general availability, see the [**Supplemental Terms of Use for Microsoft Azure Previews**](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+> Microsoft Azure Data Manager for Agriculture is currently in preview. For legal terms that apply to features that are in beta, in preview, or otherwise not yet released into general availability, see the [**Supplemental Terms of Use for Microsoft Azure Previews**](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). See Azure Data Manager for Agriculture specific terms of use [**here**](supplemental-terms-azure-data-manager-for-agriculture.md).
## 1: Register resource provider
data-manager-for-agri Supplemental Terms Azure Data Manager For Agriculture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-manager-for-agri/supplemental-terms-azure-data-manager-for-agriculture.md
+
+# Mandatory fields.
+ Title: Supplemental Terms of Use for Microsoft Azure Preview for Azure Data Manager for Agriculture.
+description: Provides Azure Data Manager for Agriculture specific terms of use.
++ Last updated : 4/13/2023++++
+# Supplemental Terms of Use for Microsoft Azure Previews
+Azure may include preview, beta, or other prerelease features, services, software, or regions offered by Microsoft for optional evaluation ("Previews"). Previews are licensed to you as part of [**your agreement**](https://azure.microsoft.com/support/legal/) governing use of Azure, and subject to terms applicable to "Previews".
+
+Certain named Previews are subject to additional terms set forth below, if any. These Previews are made available to you pursuant to these additional terms, which supplement your agreement governing use of Azure. Capitalized terms not defined herein shall have the meaning set forth in [**your agreement**](https://azure.microsoft.com/support/legal/). If you do not agree to these terms, do not use the Preview(s).
+
+## Azure Data Manager for Agriculture (Preview)
+
+### Connector to a Provider Service
+A connector to a third party's ("Provider") software or service ("Provider Service") enables the transfer of data between Azure Data Manager for Agriculture ("ADMA") and the Provider Service, and may facilitate the execution of workloads on or with the Provider Service. By providing Credential Information for the Provider Service to Microsoft, Customer is authorizing Microsoft to transfer data between ADMA and the Provider Service, facilitate the execution of a workload on or with the Provider Service, and receive and return associated results. "Credential Information" means data that enables access to a Provider Service to enable the transfer of data from the Provider Service to a Microsoft Online Service, and may include but is not limited to, credentials, access and/or refresh tokens, keys, secrets and any other required data to authenticate, search, scope and retrieve data from the Provider Service.
+
+#### Access to Provider Service
+Customer acknowledges and agrees that: (i) Customer has an existing agreement with the Provider to use the Provider Service, (ii) Customer is responsible for any associated fees charged by the Provider, (iii) the Provider Service is not maintained, owned or operated by Microsoft and that Microsoft does not control, and is not responsible for the privacy, security, or compliance practices of the Provider, (iv) once enabled, Microsoft may initiate the continued transfer of such data by using the Credential Information Customer provided, and (v) Microsoft will continue transferring such data from the Provider Service until Customer disables the corresponding connector.
+
+#### Right to use data from Provider Service
+Customer represents and warrants that Customer has the right to: (i) transfer data stored or managed by Provider to ADMA, and (ii) process such data in ADMA and any other Microsoft Online Service.
data-share Move To New Region https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-share/move-to-new-region.md
Title: Move Azure Data Share Accounts to another Azure region using the Azure portal description: Use Azure Resource Manager template to move Azure Data Share account from one Azure region to another using the Azure portal. + Last updated 10/27/2022 - #Customer intent: As an Azure Data Share User, I want to move my Data Share account to a new region.
data-share Samples Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-share/samples-powershell.md
Title: Azure PowerShell Samples for Azure Data Share description: Learn about Azure PowerShell Sample scripts to help you create and manage data shares in Azure Data Share. +
The following table includes links to sample Azure PowerShell scripts for Azure
|[View details of a data shares](scripts/powershell/view-share-details-powershell.md)| This sample PowerShell script lists and retrieves details of data shares. | |[Monitor usage of shared data](scripts/powershell/monitor-usage-powershell.md)| This sample PowerShell script monitors the usage of sent shared data. | |[Create and view snapshot triggers](scripts/powershell/create-view-trigger-powershell.md)| This sample PowerShell script creates snapshot triggers of a share.------
data-share Share Your Data Arm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-share/share-your-data-arm.md
Last updated 10/27/2022-+ # Quickstart: Share data using Azure Data Share and ARM template
data-share Share Your Data Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-share/share-your-data-bicep.md
Last updated 10/27/2022-+ # Quickstart: Share data using Azure Data Share and Bicep
databox-online Azure Stack Edge Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-alerts.md
+ Last updated 10/14/2021
databox-online Azure Stack Edge Gpu Back Up Virtual Machine Disks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-back-up-virtual-machine-disks.md
-+ Last updated 06/25/2021
databox-online Azure Stack Edge Gpu Connect Resource Manager https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-connect-resource-manager.md
Last updated 06/09/2021 -+ #Customer intent: As an IT admin, I need to understand how to connect to Azure Resource Manager on my Azure Stack Edge Pro device so that I can manage resources.
databox-online Azure Stack Edge Gpu Create Certificates Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-create-certificates-powershell.md
+ Last updated 06/01/2021
databox-online Azure Stack Edge Gpu Deploy Virtual Machine Powershell Script https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-deploy-virtual-machine-powershell-script.md
+ Last updated 05/24/2022
Before you begin creating and managing a VM on your Azure Stack Edge Pro device
## Next steps
-[Deploy VMs using Azure PowerShell cmdlets](azure-stack-edge-gpu-deploy-virtual-machine-powershell.md)
+[Deploy VMs using Azure PowerShell cmdlets](azure-stack-edge-gpu-deploy-virtual-machine-powershell.md)
databox-online Azure Stack Edge Gpu Deploy Vm Specialized Image Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-deploy-vm-specialized-image-powershell.md
+ Last updated 04/15/2021
This article used only one resource group to create all the VM resource. Deletin
- [Prepare a generalized image from a Windows VHD to deploy VMs on Azure Stack Edge Pro GPU](azure-stack-edge-gpu-prepare-windows-vhd-generalized-image.md) - [Prepare a generalized image from an ISO to deploy VMs on Azure Stack Edge Pro GPU](azure-stack-edge-gpu-prepare-windows-generalized-image-iso.md)
-d
+d
databox-online Azure Stack Edge Gpu Local Resource Manager Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-local-resource-manager-overview.md
+ Last updated 06/30/2021
databox-online Azure Stack Edge Gpu Manage Virtual Machine Tags Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-manage-virtual-machine-tags-powershell.md
-+ Last updated 07/12/2021
databox-online Azure Stack Edge Gpu Set Azure Resource Manager Password https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-set-azure-resource-manager-password.md
+ Last updated 02/21/2021
databox-online Azure Stack Edge Gpu System Requirements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-system-requirements.md
Previously updated : 12/06/2022 Last updated : 04/13/2023 -+ # System requirements for Azure Stack Edge Pro with GPU
databox-online Azure Stack Edge Gpu Technical Specifications Compliance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-technical-specifications-compliance.md
Previously updated : 11/02/2022 Last updated : 04/12/2023
The Azure Stack Edge Pro device has the following specifications for compute and
| CPU type | Dual Intel Xeon Silver 4214 (Cascade Lake) CPU | | CPU: raw | 24 total cores, 48 total vCPUs | | CPU: usable | 40 vCPUs |
-| Memory type | Dell Compatible 16 GB PC4-23400 DDR4-2933Mhz 2Rx8 1.2v ECC Registered RDIMM |
-| Memory: raw | 128 GB RAM (8 x 16 GB) |
-| Memory: usable | 96 GB RAM |
+| Memory type | Dell Compatible 16 GiB PC4-23400 DDR4-2933Mhz 2Rx8 1.2v ECC Registered RDIMM |
+| Memory: raw | 128 GiB RAM (8 x 16 GiB) |
+| Memory: usable | 96 GiB RAM |
## Compute acceleration specifications
databox-online Azure Stack Edge Gpu Troubleshoot Azure Resource Manager https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-troubleshoot-azure-resource-manager.md
Last updated 06/10/2021 -+ # Troubleshoot Azure Resource Manager issues on an Azure Stack Edge device
For general issues with Azure Resource Manager, make sure that your device and t
## Next steps - [Troubleshoot device activation issues](azure-stack-edge-gpu-troubleshoot-activation.md).-- [Troubleshoot device issues](azure-stack-edge-gpu-troubleshoot.md).
+- [Troubleshoot device issues](azure-stack-edge-gpu-troubleshoot.md).
databox-online Azure Stack Edge Mini R System Requirements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-mini-r-system-requirements.md
+ Last updated 02/05/2021
databox-online Azure Stack Edge Mini R Technical Specifications Compliance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-mini-r-technical-specifications-compliance.md
Previously updated : 06/29/2021 Last updated : 04/12/2023 # Azure Stack Edge Mini R technical specifications
The Azure Stack Edge Mini R device has the following specifications for compute
| CPU type | Intel Xeon-D 1577 | | CPU: raw | 16 total cores, 32 total vCPUs | | CPU: usable | 24 vCPUs |
-| Memory type | 16 GB 2400 MT/s SODIMM |
-| Memory: raw | 48 GB RAM (3 x 16 GB) |
-| Memory: usable | 32 GB RAM |
+| Memory type | 16 GiB 2400 MT/s SODIMM |
+| Memory: raw | 48 GiB RAM (3 x 16 GiB) |
+| Memory: usable | 32 GiB RAM |
## Compute acceleration
A Vision Processing Unit (VPU) is included on every Azure Stack Edge Mini R devi
## Storage
-The Azure Stack Edge Mini R device has 1 data disk and 1 boot disk (that serves as operating system storage). The following table shows the details for the storage capacity of the device.
+The Azure Stack Edge Mini R device has one data disk and one boot disk (that serves as operating system storage). The following table shows the details for the storage capacity of the device.
| Specification | Value | |--|--|
The following routers and switches are compatible with the 10 Gbps SPF+ network
## Transceivers, cables
-The following copper SFP+ (10 Gbps) transceivers and cables are strongly recommended for use with Azure Stack Edge Mini R devices. Compatible fiber-optic cables can be used with SFP+ network interfaces (Port 3 and Port 4) but have not been tested.
+The following copper SFP+ (10 Gbps) transceivers and cables are recommended for use with Azure Stack Edge Mini R devices. Compatible fiber-optic cables can be used with SFP+ network interfaces (Port 3 and Port 4) but haven't been tested.
|SFP+ transceiver type |Supported cables | Notes | |-|--|-|
-|SFP+ Direct-Attach Copper (10GSFP+Cu)| <ul><li>[FS SFP-10G-DAC](https://www.fs.com/c/fs-10g-sfp-dac-1115) (Available in industrial temperature -40°C to +85°C as custom order)</li><br><li>[10Gtek CAB-10GSFP-P0.5M](http://www.10gtek.com/10G-SFP+-182)</li><br><li>[Cisco SFP-H10GB-CU1M](https://www.cisco.com/c/en/us/products/collateral/interfaces-modules/transceiver-modules/data_sheet_c78-455693.html)</li></ul> |<ul><li>Also known as SFP+ Twinax DAC cables.</li><br><li>Recommended option because it has lowest power usage and is simplest.</li><br><li>Autonegotiation is not supported.</li><br><li>Connecting an SFP device to an SFP+ device is not supported.</li></ul>|
+|SFP+ Direct-Attach Copper (10GSFP+Cu)| <ul><li>[FS SFP-10G-DAC](https://www.fs.com/c/fs-10g-sfp-dac-1115) (Available in industrial temperature -40°C to +85°C as custom order)</li><br><li>[10Gtek CAB-10GSFP-P0.5M](http://www.10gtek.com/10G-SFP+-182)</li><br><li>[Cisco SFP-H10GB-CU1M](https://www.cisco.com/c/en/us/products/collateral/interfaces-modules/transceiver-modules/data_sheet_c78-455693.html)</li></ul> |<ul><li>Also known as SFP+ Twinax DAC cables.</li><br><li>Recommended option because it has lowest power usage and is simplest.</li><br><li>Autonegotiation isn't supported.</li><br><li>Connecting an SFP device to an SFP+ device isn't supported.</li></ul>|
## Power supply unit
This section lists the specifications related to the enclosure environment, such
| Temperature range | 0 – 40° C (operational) | | Vibration | MIL-STD-810 Method 514.7*<br> Procedure I CAT 4, 20 | | Shock | MIL-STD-810 Method 516.7*<br> Procedure IV, Logistic |
-| Altitude | Operational: 15,000 feet<br> Non-operational: 40,000 feet |
+| Altitude | Operational: 15,000 feet<br> Nonoperational: 40,000 feet |
**All references are to MIL-STD-810G Change 1 (2014)*
databox-online Azure Stack Edge Pro 2 System Requirements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-pro-2-system-requirements.md
+ Last updated 03/17/2023
databox-online Azure Stack Edge Pro 2 Technical Specifications Compliance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-pro-2-technical-specifications-compliance.md
Previously updated : 11/09/2022 Last updated : 04/12/2023
The Azure Stack Edge Pro 2 device has the following specifications for compute a
| CPU type | Intel® Xeon® Gold 6209U CPU @ 2.10 GHz (Cascade Lake) | | CPU: raw | 20 total cores, 40 total vCPUs | | CPU: usable | 32 vCPUs |
-| Memory type | 2 x 32 GB DDR4-2933 RDIMM |
-| Memory: raw | 64 GB RAM |
-| Memory: usable | 51 GB RAM |
+| Memory type | 2 x 32 GiB DDR4-2933 RDIMM |
+| Memory: raw | 64 GiB RAM |
+| Memory: usable | 48 GiB RAM |
| GPU | None | # [Model 128G4T1GPU](#tab/sku-b)
The Azure Stack Edge Pro 2 device has the following specifications for compute a
| CPU type | Intel® Xeon® Gold 6209U CPU @ 2.10 GHz (Cascade Lake) | | CPU: raw | 20 total cores, 40 total vCPUs | | CPU: usable | 32 vCPUs |
-| Memory type | 4 x 32 GB DDR4-2933 RDIMM |
-| Memory: raw | 128 GB RAM |
-| Memory: usable | 102 GB RAM |
+| Memory type | 4 x 32 GiB DDR4-2933 RDIMM |
+| Memory: raw | 128 GiB RAM |
+| Memory: usable | 96 GiB RAM |
| GPU | 1 NVIDIA A2 GPU <br> For more information, see [NVIDIA A2 GPUs](https://www.nvidia.com/en-us/data-center/products/a2/). | # [Model 256G6T2GPU](#tab/sku-c)
The Azure Stack Edge Pro 2 device has the following specifications for compute a
| CPU type | Intel® Xeon® Gold 6209U CPU @ 2.10 GHz (Cascade Lake) | | CPU: raw | 20 total cores, 40 total vCPUs | | CPU: usable | 32 vCPUs |
-| Memory type | 4 x 64 GB DDR4-2933 RDIMM |
-| Memory: raw | 256 GB RAM |
-| Memory: usable | 204 GB RAM |
+| Memory type | 4 x 64 GiB DDR4-2933 RDIMM |
+| Memory: raw | 256 GiB RAM |
+| Memory: usable | 224 GiB RAM |
| GPU | 2 NVIDIA A2 GPUs <br> For more information, see [NVIDIA A2 GPUs](https://www.nvidia.com/en-us/data-center/products/a2/). |
Here are the details for the Mellanox card:
| Parameter | Description | |-|-| | Model | ConnectX®-6 Dx network interface card |
-| Model Description | 100 GbE dual-port QSFP56 |
+| Model Description | 100-GbE dual-port QSFP56 |
| Device Part Number | MCX623106AC-CDAT, with crypto or with secure boot | ## Storage specifications
The following table lists the dimensions of the shipping package in millimeters
|--||| | 1 | Model 64G2T | 21.0 | | | | |
-| 2 | Shipping weight, with 4-post mount | 35.3 |
-| 3 | Model 64G2T install handling, 4-post (without bezel and with inner rails attached) | 20.4 |
+| 2 | Shipping weight, with four-post mount | 35.3 |
+| 3 | Model 64G2T install handling, four-post (without bezel and with inner rails attached) | 20.4 |
| | | |
-| 4 | Shipping weight, with 2-post mount | 32.1 |
-| 5 | Model 64G2T install handling, 2-post (without bezel and with inner rails attached) | 20.4 |
+| 4 | Shipping weight, with two-post mount | 32.1 |
+| 5 | Model 64G2T install handling, two-post (without bezel and with inner rails attached) | 20.4 |
| | | | | 6 | Shipping weight with wall mount | 31.1 | | 7 | Model 64G2T install handling without bezel | 19.8 | | | | |
-| 8 | 4-post in box | 6.28 |
-| 9 | 2-post in box | 3.08 |
-| 10 | Wallmount as packaged | 2.16 |
+| 8 | Four-post in box | 6.28 |
+| 9 | Two-post in box | 3.08 |
+| 10 | Wall mounts as packaged | 2.16 |
# [Model 128G4T1GPU](#tab/sku-b)
The following table lists the dimensions of the shipping package in millimeters
|--||| | 1 | Model 128G4T1GPU | 21.9 | | | | |
-| 2 | Shipping weight, with 4-post mount | 36.2 |
-| 3 | Model 128G4T1GPU install handling, 4-post (without bezel and with inner rails attached) | 21.3 |
+| 2 | Shipping weight, with four-post mount | 36.2 |
+| 3 | Model 128G4T1GPU install handling, four-post (without bezel and with inner rails attached) | 21.3 |
| | | |
-| 4 | Shipping weight, with 2-post mount | 33.0 |
-| 5 | Model 128G4T1GPU install handling, 2-post (without bezel and with inner rails attached) | 21.3 |
+| 4 | Shipping weight, with two-post mount | 33.0 |
+| 5 | Model 128G4T1GPU install handling, two-post (without bezel and with inner rails attached) | 21.3 |
| | | | | 6 | Shipping weight with wall mount | 32.0 | | 7 | Model 128G4T1GPU install handling without bezel | 20.7 | | | | |
-| 8 | 4-post in box | 6.28 |
-| 9 | 2-post in box | 3.08 |
-| 10 | Wallmount as packaged | 2.16 |
+| 8 | Four-post in box | 6.28 |
+| 9 | Two-post in box | 3.08 |
+| 10 | Wall mounts as packaged | 2.16 |
# [Model 256G6T2GPU](#tab/sku-c)
The following table lists the dimensions of the shipping package in millimeters
|--|--|| | 1 | Model 256G6T2GPU | 22.9 | | | | |
-| 2 | Shipping weight, with 4-post mount | 37.1 |
-| 3 | Model 256G6T2GPU install handling, 4-post (without bezel and with inner rails attached)|22.3 |
+| 2 | Shipping weight, with four-post mount | 37.1 |
+| 3 | Model 256G6T2GPU install handling, four-post (without bezel and with inner rails attached)|22.3 |
| | | |
-| 4 | Shipping weight, with 2-post mount | 33.9 |
-| 5 | Model 256G6T2GPU install handling, 2-post (without bezel and with inner rails attached) | 22.3 |
+| 4 | Shipping weight, with two-post mount | 33.9 |
+| 5 | Model 256G6T2GPU install handling, two-post (without bezel and with inner rails attached) | 22.3 |
| | | | | 6 | Shipping weight with wall mount | 33.0 | | 7 | Model 256G6T2GPU install handling without bezel | 21.7 | | | | |
-| 8 | 4-post in box | 6.28 |
-| 9 | 2-post in box | 3.08 |
-| 10 | Wallmount as packaged | 2.16 |
+| 8 | Four-post in box | 6.28 |
+| 9 | Two-post in box | 3.08 |
+| 10 | Wall mounts as packaged | 2.16 |
databox-online Azure Stack Edge Pro R Technical Specifications Compliance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-pro-r-technical-specifications-compliance.md
Previously updated : 03/02/2023 Last updated : 04/12/2023 # Azure Stack Edge Pro R technical specifications
The Azure Stack Edge Pro R device has the following specifications for compute a
| CPU type | Dual Intel Xeon Silver 4114 CPU | | CPU: raw | 20 total cores, 40 total vCPUs | | CPU: usable | 32 vCPUs |
-| Memory type | Dell Compatible 16 GB RDIMM, 2666 MT/s, Dual rank |
-| Memory: raw | 256 GB RAM (16 x 16 GB) |
-| Memory: usable | 217 GB RAM |
+| Memory type | Dell Compatible 16 GiB RDIMM, 2666 MT/s, Dual rank |
+| Memory: raw | 256 GiB RAM (16 x 16 GiB) |
+| Memory: usable | 217 GiB RAM |
## Compute acceleration specifications
databox-online Azure Stack Edge Profiles Azure Resource Manager Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-profiles-azure-resource-manager-versions.md
+ Last updated 05/17/2021
The Storage Resource Provider (SRP) lets you manage your storage account and key
## Next steps -- [Manage an Azure Stack Edge Pro GPU device via Windows PowerShell](azure-stack-edge-gpu-connect-powershell-interface.md)
+- [Manage an Azure Stack Edge Pro GPU device via Windows PowerShell](azure-stack-edge-gpu-connect-powershell-interface.md)
databox-online Azure Stack Edge Technical Specifications Compliance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-technical-specifications-compliance.md
Previously updated : 04/12/2021 Last updated : 04/12/2023 # Azure Stack Edge Pro FPGA technical specifications
The Azure Stack Edge Pro FPGA device has the following specifications for comput
| CPU type | Dual Intel Xeon Silver 4114 2.2 G | | CPU: raw | 20 total cores, 40 total vCPUs | | CPU: usable | 32 vCPUs |
-| Memory type | 8 x 16 GB RDIMM |
-| Memory: raw | 128 GB RAM (8 x 16 GB) |
-| Memory: usable | 102 GB RAM |
+| Memory type | 8 x 16 GiB RDIMM |
+| Memory: raw | 128 GiB RAM (8 x 16 GiB) |
+| Memory: usable | 102 GiB RAM |
## FPGA specifications
The following table lists the typical power consumption data (actual values may
## Network interface specifications
-Your Azure Stack Edge Pro FPGA device has 6 network interfaces, PORT1- PORT6.
+Your Azure Stack Edge Pro FPGA device has six network interfaces, PORT1 - PORT6.
| Specification | Description | |-|-|
-| Network interfaces | 2 X 1 GbE interfaces – 1 management, not user configurable, used for initial setup. The other interface is user configurable, can be used for data transfer, and is DHCP by default. <br>2 X 25 GbE interfaces – These can also operate as 10 GbE interfaces. These data interfaces can be configured by user as DHCP (default) or static. <br> 2 X 25 GbE interfaces - These data interfaces can be configured by user as DHCP (default) or static. |
+| Network interfaces | 2 X 1 GbE interfaces – 1 management, not user configurable, used for initial setup. The other interface is user configurable, can be used for data transfer, and is DHCP by default. <br>2 X 25-GbE interfaces – These can also operate as 10-GbE interfaces. These data interfaces can be configured by user as DHCP (default) or static. <br> 2 X 25-GbE interfaces - These data interfaces can be configured by user as DHCP (default) or static. |
The Network Adapters used are:
The Network Adapters used are:
|Network Daughter Card (rNDC) |QLogic FastLinQ 41264 Dual Port 25GbE SFP+, Dual Port 1GbE, rNDC| |PCI Network Adapter |QLogic FastLinQ 41262 Dual Port 25Gbit/s SFP28 Adapter|
-Please consult the Hardware Compatibility List from Intel QLogic for compatible Gigabit Interface Converter (GBIC). Gigabit Interface Converter (GBIC) are not included in the delivery of Azure Stack Edge.
+Consult the Hardware Compatibility List from Intel QLogic for compatible Gigabit Interface Converters (GBICs). GBICs aren't included in the delivery of Azure Stack Edge.
## Storage specifications
This section lists the specifications related to the enclosure environment such
| Enclosure | Ambient temperature range | Ambient relative humidity | Maximum dew point | |--|--|--||
-| Operational | 10┬░C - 35┬░C (50┬░F - 86┬░F) | 10% - 80% non-condensing. | 29┬░C (84┬░F) |
-| Non-operational | -40┬░C to 65┬░C (-40┬░F - 149┬░F) | 5% - 95% non-condensing. | 33┬░C (91┬░F) |
+| Operational | 10°C - 35°C (50°F - 86°F) | 10% - 80% noncondensing. | 29°C (84°F) |
+| Non-operational | -40°C to 65°C (-40°F - 149°F) | 5% - 95% noncondensing. | 33°C (91°F) |
### Airflow, altitude, shock, vibration, orientation, safety, and EMC | Enclosure | Operational specifications | |--|| | Airflow | System airflow is front to rear. System must be operated with a low-pressure, rear-exhaust installation. <!--Back pressure created by rack doors and obstacles should not exceed 5 pascals (0.5 mm water gauge).--> |
-| Maximum altitude, operational | 3048 meters (10,000 feet) with maximum operating temperature de-rated determined by [Operating temperature de-rating specifications](#operating-temperature-de-rating-specifications). |
-| Maximum altitude, non-operational | 12,000 meters (39,370 feet) |
-| Shock, operational | 6 G for 11 milliseconds in 6 orientations |
-| Shock, non-operational | 71 G for 2 milliseconds in 6 orientations |
+| Maximum altitude, operational | 3048 meters (10,000 feet) with maximum operating temperature de-rated determined by [Operating temperature derating specifications](#operating-temperature-derating-specifications). |
+| Maximum altitude, nonoperational | 12,000 meters (39,370 feet) |
+| Shock, operational | 6 G for 11 milliseconds in six orientations |
+| Shock, nonoperational | 71 G for 2 milliseconds in six orientations |
| Vibration, operational | 0.26 G<sub>RMS</sub> 5 Hz to 350 Hz random |
-| Vibration, non-operational | 1.88 G<sub>RMS</sub> 10 Hz to 500 Hz for 15 minutes (all six sides tested.) |
+| Vibration, nonoperational | 1.88 G<sub>RMS</sub> 10 Hz to 500 Hz for 15 minutes (all six sides tested.) |
| Orientation and mounting | 19" rack mount | | Safety and approvals | EN 60950-1:2006 +A1:2010 +A2:2013 +A11:2009 +A12:2011/IEC 60950-1:2005 ed2 +A1:2009 +A2:2013 EN 62311:2008 | | EMC | FCC A, ICES-003 <br>EN 55032:2012/CISPR 32:2012 <br>EN 55032:2015/CISPR 32:2015 <br>EN 55024:2010 +A1:2015/CISPR 24:2010 +A1:2015 <br>EN 61000-3-2:2014/IEC 61000-3-2:2014 (Class D) <br>EN 61000-3-3:2013/IEC 61000-3-3:2013 | | Energy | Commission Regulation (EU) No. 617/2013 | | RoHS | EN 50581:2012 |
-### Operating temperature de-rating specifications
+### Operating temperature derating specifications
-| Operating temperature de-rating | Ambient temperature range |
+| Operating temperature derating | Ambient temperature range |
|--|--| | Up to 35°C (95°F) | Maximum temperature is reduced by 1°C/300 m (1°F/547 ft) above 950 m (3,117 ft). | | 35°C to 40°C (95°F to 104°F) | Maximum temperature is reduced by 1°C/175 m (1°F/319 ft) above 950 m (3,117 ft). |
databox Data Box How To Set Data Tier https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox/data-box-how-to-set-data-tier.md
+ Last updated 05/24/2019
databox Data Box Troubleshoot Time Sync https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox/data-box-troubleshoot-time-sync.md
+ Last updated 11/15/2021
ddos-protection Manage Ddos Protection Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/manage-ddos-protection-bicep.md
-+ Last updated 10/12/2022
ddos-protection Manage Ddos Protection Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/manage-ddos-protection-cli.md
-+ Last updated 10/12/2022
ddos-protection Manage Ddos Protection Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/manage-ddos-protection-template.md
-+ Last updated 10/12/2022
ddos-protection Manage Ddos Protection Terraform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/manage-ddos-protection-terraform.md
+
+ Title: 'Quickstart: Create and configure Azure DDoS Network Protection using Terraform'
+description: In this article, you create and configure Azure DDoS Network Protection using Terraform
++++++ Last updated : 4/12/2023++
+# Quickstart: Create and configure Azure DDoS Network Protection using Terraform
+
+This quickstart describes how to use Terraform to create and enable a [distributed denial of service (DDoS) protection plan](ddos-protection-overview.md) and [Azure virtual network (VNet)](/azure/virtual-network/virtual-networks-overview). An Azure DDoS Network Protection plan defines a set of virtual networks that have DDoS protection enabled across subscriptions. You can configure one DDoS protection plan for your organization and link virtual networks from multiple subscriptions to the same plan.
++
+In this article, you learn how to:
+
+> [!div class="checklist"]
+> * Create a random value for the Azure resource group name using [random_pet](https://registry.terraform.io/providers/hashicorp/random/latest/docs/resources/pet)
+> * Create an Azure resource group using [azurerm_resource_group](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/resource_group)
+> * Create a random value for the virtual network name using [random_string](https://registry.terraform.io/providers/hashicorp/random/latest/docs/resources/string)
+> * Create an Azure DDoS protection plan using [azurerm_network_ddos_protection_plan](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/network_ddos_protection_plan)
+> * Create an Azure virtual network using [azurerm_virtual_network](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/virtual_network)
++
+## Prerequisites
+
+- [Install and configure Terraform](/azure/developer/terraform/quickstart-configure)
+
+## Implement the Terraform code
+
+> [!NOTE]
+> The sample code for this article is located in the [Azure Terraform GitHub repo](https://github.com/Azure/terraform/tree/master/quickstart/101-ddos-protection-plan). You can view the log file containing the [test results from current and previous versions of Terraform](https://github.com/Azure/terraform/tree/master/quickstart/101-ddos-protection-plan/TestRecord.md).
+>
+> See more [articles and sample code showing how to use Terraform to manage Azure resources](/azure/terraform)
+
+1. Create a directory in which to test and run the sample Terraform code and make it the current directory.
+
+1. Create a file named `providers.tf` and insert the following code:
+
+ [!code-terraform[master](~/terraform_samples/quickstart/101-ddos-protection-plan/providers.tf)]
+
+1. Create a file named `main.tf` and insert the following code:
+
+ [!code-terraform[master](~/terraform_samples/quickstart/101-ddos-protection-plan/main.tf)]
+
+1. Create a file named `variables.tf` and insert the following code:
+
+ [!code-terraform[master](~/terraform_samples/quickstart/101-ddos-protection-plan/variables.tf)]
+
+1. Create a file named `outputs.tf` and insert the following code:
+
+ [!code-terraform[master](~/terraform_samples/quickstart/101-ddos-protection-plan/outputs.tf)]
+
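+For orientation, the following is a minimal sketch of the kind of configuration the sample's `main.tf` defines. It isn't the sample's actual code; the resource names and address space below are illustrative only:
+
+```terraform
+# Illustrative sketch only; the quickstart's real files come from the Azure/terraform sample repo.
+terraform {
+  required_providers {
+    azurerm = {
+      source  = "hashicorp/azurerm"
+      version = "~> 3.0"
+    }
+  }
+}
+
+provider "azurerm" {
+  features {}
+}
+
+# Resource group that holds the DDoS protection plan and the virtual network
+resource "azurerm_resource_group" "rg" {
+  name     = "rg-ddos-demo"
+  location = "eastus"
+}
+
+# DDoS Network Protection plan
+resource "azurerm_network_ddos_protection_plan" "ddos" {
+  name                = "ddos-protection-plan"
+  location            = azurerm_resource_group.rg.location
+  resource_group_name = azurerm_resource_group.rg.name
+}
+
+# Virtual network linked to the plan, with DDoS protection enabled
+resource "azurerm_virtual_network" "vnet" {
+  name                = "vnet-ddos-demo"
+  location            = azurerm_resource_group.rg.location
+  resource_group_name = azurerm_resource_group.rg.name
+  address_space       = ["10.0.0.0/16"]
+
+  ddos_protection_plan {
+    id     = azurerm_network_ddos_protection_plan.ddos.id
+    enable = true
+  }
+}
+```
+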
+## Initialize Terraform
++
+## Create a Terraform execution plan
++
+## Apply a Terraform execution plan
++
+## Verify the results
+
+#### [Azure CLI](#tab/azure-cli)
+
+1. Get the Azure resource group name.
+
+ ```console
+ resource_group_name=$(terraform output -raw resource_group_name)
+ ```
+
+1. Get the DDoS protection plan name.
+
+ ```console
+ ddos_protection_plan_name=$(terraform output -raw ddos_protection_plan_name)
+ ```
+
+1. Run [az network ddos-protection show](/cli/azure/network/ddos-protection#az-network-ddos-protection-show) to display information about the new DDoS protection plan.
+
+ ```azurecli
+ az network ddos-protection show \
+ --resource-group $resource_group_name \
+ --name $ddos_protection_plan_name
+ ```
+
+#### [Azure PowerShell](#tab/azure-powershell)
+
+1. Get the Azure resource group name.
+
+ ```console
+ $resource_group_name=$(terraform output -raw resource_group_name)
+ ```
+
+1. Get the DDoS protection plan name.
+
+ ```console
+ $ddos_protection_plan_name=$(terraform output -raw ddos_protection_plan_name)
+ ```
+
+1. Run [Get-AzDdosProtectionPlan](/powershell/module/az.network/get-azddosprotectionplan) to display information about the new DDoS protection plan.
+
+ ```azurepowershell
+ Get-AzDdosProtectionPlan -ResourceGroupName $resource_group_name `
+ -Name $ddos_protection_plan_name
+ ```
+
+1. Get the virtual network name.
+
+ ```console
+ $virtual_network_name=$(terraform output -raw virtual_network_name)
+ ```
+
+1. Run [Get-AzVirtualNetwork](/powershell/module/az.network/get-azvirtualnetwork) to display information about the new virtual network.
+
+ ```azurepowershell
+ Get-AzVirtualNetwork -ResourceGroupName $resource_group_name `
+ -Name $virtual_network_name
+ ```
+++
+## Clean up resources
++
+## Troubleshoot Terraform on Azure
+
+[Troubleshoot common problems when using Terraform on Azure](/azure/developer/terraform/troubleshoot)
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [View and configure DDoS protection telemetry](telemetry.md)
defender-for-cloud Attack Path Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/attack-path-reference.md
description: This article lists Microsoft Defender for Cloud's list of attack paths based on resource. Previously updated : 04/02/2023 Last updated : 04/13/2023 # Reference list of attack paths and cloud security graph components This article lists the attack paths, connections, and insights used in Defender for Cloud Security Posture Management (CSPM). -- You need to [enable Defender for CSPM](enable-enhanced-security.md#enable-defender-plans-to-get-the-enhanced-security-features) to view attack paths.
+- You need to [enable Defender CSPM](enable-enhanced-security.md#enable-defender-plans-to-get-the-enhanced-security-features) to view attack paths.
- What you see in your environment depends on the resources you're protecting, and your customized configuration. Learn more about [the cloud security graph, attack path analysis, and the cloud security explorer](concept-attack-path.md).
Prerequisite: For a list of prerequisites, see the [Availability table](how-to-m
|--|--| | Internet exposed VM has high severity vulnerabilities | A virtual machine is reachable from the internet and has high severity vulnerabilities. | | Internet exposed VM has high severity vulnerabilities and high permission to a subscription | A virtual machine is reachable from the internet, has high severity vulnerabilities, and identity and permission to a subscription. |
-| Internet exposed VM has high severity vulnerabilities and read permission to a data store with sensitive data (Preview) | A virtual machine is reachable from the internet, has high severity vulnerabilities and read permission to a data store containing sensitive data. <br/> Prerequisite: [Enable data-aware security for storage accounts in Defender for CSPM](data-security-posture-enable.md), or [leverage Microsoft Purview Data Catalog to protect sensitive data](information-protection.md). |
+| Internet exposed VM has high severity vulnerabilities and read permission to a data store with sensitive data (Preview) | A virtual machine is reachable from the internet, has high severity vulnerabilities and read permission to a data store containing sensitive data. <br/> Prerequisite: [Enable data-aware security for storage accounts in Defender CSPM](data-security-posture-enable.md), or [leverage Microsoft Purview Data Catalog to protect sensitive data](information-protection.md). |
| Internet exposed VM has high severity vulnerabilities and read permission to a data store | A virtual machine is reachable from the internet and has high severity vulnerabilities and read permission to a data store. | | Internet exposed VM has high severity vulnerabilities and read permission to a Key Vault | A virtual machine is reachable from the internet and has high severity vulnerabilities and read permission to a key vault. | | VM has high severity vulnerabilities and high permission to a subscription | A virtual machine has high severity vulnerabilities and has high permission to a subscription. |
-| VM has high severity vulnerabilities and read permission to a data store with sensitive data (Preview) | A virtual machine has high severity vulnerabilities and read permission to a data store containing sensitive data. <br/>Prerequisite: [Enable data-aware security for storage accounts in Defender for CSPM](data-security-posture-enable.md), or [leverage Microsoft Purview Data Catalog to protect sensitive data](information-protection.md). |
+| VM has high severity vulnerabilities and read permission to a data store with sensitive data (Preview) | A virtual machine has high severity vulnerabilities and read permission to a data store containing sensitive data. <br/>Prerequisite: [Enable data-aware security for storage accounts in Defender CSPM](data-security-posture-enable.md), or [leverage Microsoft Purview Data Catalog to protect sensitive data](information-protection.md). |
| VM has high severity vulnerabilities and read permission to a key vault | A virtual machine has high severity vulnerabilities and read permission to a key vault. | | VM has high severity vulnerabilities and read permission to a data store | A virtual machine has high severity vulnerabilities and read permission to a data store. |
Prerequisite: [Enable agentless scanning](enable-vulnerability-assessment-agentl
| Internet exposed EC2 instance has high severity vulnerabilities and high permission to an account | An AWS EC2 instance is reachable from the internet, has high severity vulnerabilities and has permission to an account. | | Internet exposed EC2 instance has high severity vulnerabilities and read permission to a DB | An AWS EC2 instance is reachable from the internet, has high severity vulnerabilities and has permission to a database. | | Internet exposed EC2 instance has high severity vulnerabilities and read permission to S3 bucket | An AWS EC2 instance is reachable from the internet, has high severity vulnerabilities and has an IAM role attached with permission to an S3 bucket via an IAM policy, or via a bucket policy, or via both an IAM policy and a bucket policy.
-| Internet exposed EC2 instance has high severity vulnerabilities and read permission to a S3 bucket with sensitive data (Preview) | An AWS EC2 instance is reachable from the internet has high severity vulnerabilities and has an IAM role attached with permission to an S3 bucket containing sensitive data via an IAM policy, or via a bucket policy, or via both an IAM policy and bucket policy. <br/> Prerequisite: [Enable data-aware security for S3 buckets in Defender for CSPM](data-security-posture-enable.md), or [leverage Microsoft Purview Data Catalog to protect sensitive data](information-protection.md). |
+| Internet exposed EC2 instance has high severity vulnerabilities and read permission to an S3 bucket with sensitive data (Preview) | An AWS EC2 instance is reachable from the internet, has high severity vulnerabilities, and has an IAM role attached with permission to an S3 bucket containing sensitive data via an IAM policy, or via a bucket policy, or via both an IAM policy and a bucket policy. <br/> Prerequisite: [Enable data-aware security for S3 buckets in Defender CSPM](data-security-posture-enable.md), or [leverage Microsoft Purview Data Catalog to protect sensitive data](information-protection.md). |
| Internet exposed EC2 instance has high severity vulnerabilities and read permission to a KMS | An AWS EC2 instance is reachable from the internet, has high severity vulnerabilities and has an IAM role attached with permission to an AWS Key Management Service (KMS) via an IAM policy, or via an AWS Key Management Service (KMS) policy, or via both an IAM policy and an AWS KMS policy.| | Internet exposed EC2 instance has high severity vulnerabilities | An AWS EC2 instance is reachable from the internet and has high severity vulnerabilities. | | EC2 instance with high severity vulnerabilities has high privileged permissions to an account | An AWS EC2 instance has high severity vulnerabilities and has permissions to an account. | | EC2 instance with high severity vulnerabilities has read permissions to a data store |An AWS EC2 instance has high severity vulnerabilities and has an IAM role attached which is granted with permissions to an S3 bucket via an IAM policy or via a bucket policy, or via both an IAM policy and a bucket policy. |
-| EC2 instance with high severity vulnerabilities has read permissions to a data store with sensitive data (Preview) | An AWS EC2 instance has high severity vulnerabilities and has an IAM role attached which is granted with permissions to an S3 bucket containing sensitive data via an IAM policy or via a bucket policy, or via both an IAM and bucket policy. <br/> Prerequisite: [Enable data-aware security for S3 buckets in Defender for CSPM](data-security-posture-enable.md), or [leverage Microsoft Purview Data Catalog to protect sensitive data](information-protection.md). |
+| EC2 instance with high severity vulnerabilities has read permissions to a data store with sensitive data (Preview) | An AWS EC2 instance has high severity vulnerabilities and has an IAM role attached that is granted permissions to an S3 bucket containing sensitive data via an IAM policy, or via a bucket policy, or via both an IAM policy and a bucket policy. <br/> Prerequisite: [Enable data-aware security for S3 buckets in Defender CSPM](data-security-posture-enable.md), or [leverage Microsoft Purview Data Catalog to protect sensitive data](information-protection.md). |
| EC2 instance with high severity vulnerabilities has read permissions to a KMS key | An AWS EC2 instance has high severity vulnerabilities and has an IAM role attached which is granted with permissions to an AWS Key Management Service (KMS) key via an IAM policy, or via an AWS Key Management Service (KMS) policy, or via both an IAM and AWS KMS policy. | ### Azure data
Prerequisite: [Enable agentless scanning](enable-vulnerability-assessment-agentl
| Managed database with excessive internet exposure allows basic (local user/password) authentication | Database can be accessed through the internet from any public IP and allows authentication using username and password (basic authentication mechanism) which exposes the DB to brute force attacks. | | Internet exposed VM has high severity vulnerabilities and a hosted database installed | An attacker with network access to the DB machine can exploit the vulnerabilities and gain remote code execution. | Private Azure blob storage container replicates data to internet exposed and publicly accessible Azure blob storage container (Preview) | An internal Azure storage container replicates its data to another Azure storage container which is reachable from the internet and allows public access, putting this data at risk. |
-| Internet exposed Azure Blob Storage container with sensitive data is publicly accessible (Preview) | A blob storage account container with sensitive data is reachable from the internet and allows public read access without authorization required. <br/> Prerequisite: [Enable data-aware security for storage accounts in Defender for CSPM](data-security-posture-enable.md).|
+| Internet exposed Azure Blob Storage container with sensitive data is publicly accessible (Preview) | A blob storage account container with sensitive data is reachable from the internet and allows public read access without authorization required. <br/> Prerequisite: [Enable data-aware security for storage accounts in Defender CSPM](data-security-posture-enable.md).|
| Internet exposed managed database allows basic (local user/password) authentication (Preview) | A database can be accessed through the internet and allows authentication using username and password (basic authentication mechanism) which exposes the DB to brute force attacks. | | Internet exposed database server allows basic (user/password) authentication method (Preview) | Azure SQL database can be accessed through the internet and allows user/password authentication which exposes the DB to brute force attacks. |
Prerequisite: [Enable agentless scanning](enable-vulnerability-assessment-agentl
| Attack Path Display Name | Attack Path Description | |--|--|
-| Internet exposed AWS S3 Bucket with sensitive data is publicly accessible (Preview) | An S3 bucket with sensitive data is reachable from the internet and allows public read access without authorization required. <br/> Prerequisite: [Enable data-aware security for S3 buckets in Defender for CSPM](data-security-posture-enable.md), or [leverage Microsoft Purview Data Catalog to protect sensitive data](information-protection.md). |
+| Internet exposed AWS S3 Bucket with sensitive data is publicly accessible (Preview) | An S3 bucket with sensitive data is reachable from the internet and allows public read access without authorization required. <br/> Prerequisite: [Enable data-aware security for S3 buckets in Defender CSPM](data-security-posture-enable.md), or [leverage Microsoft Purview Data Catalog to protect sensitive data](information-protection.md). |
|Internet exposed SQL on EC2 instance has a user account with commonly used username and allows code execution on the underlying compute (Preview) | Internet exposed SQL on EC2 instance has a user account with commonly used username and allows code execution on the underlying compute. <br/> Prerequisite: [Enable Microsoft Defender for SQL servers on machines](defender-for-sql-usage.md). | |Internet exposed SQL on EC2 instance has a user account with commonly used username and known vulnerabilities (Preview) | SQL on EC2 instance is reachable from the internet, has a local user account with a commonly used username (which is prone to brute force attacks), and has known vulnerabilities (CVEs). <br/> Prerequisite: [Enable Microsoft Defender for SQL servers on machines](defender-for-sql-usage.md) | |SQL on EC2 instance has a user account with commonly used username and allows code execution on the underlying compute (Preview) | SQL on EC2 instance has a local user account with commonly used username (which is prone to brute force attacks), and has vulnerabilities allowing code execution and lateral movement to the underlying compute. <br/> Prerequisite: [Enable Microsoft Defender for SQL servers on machines](defender-for-sql-usage.md) |
This section lists all of the cloud security graph components (connections and
|--|--|--| | Exposed to the internet | Indicates that a resource is exposed to the internet. Supports port filtering. [Learn more](concept-data-security-posture-prepare.md#exposed-to-the-internetallows-public-access) | Azure virtual machine, AWS EC2, Azure storage account, Azure SQL server, Azure Cosmos DB, AWS S3, Kubernetes pod, Azure SQL Managed Instance, Azure MySQL Single Server, Azure MySQL Flexible Server, Azure PostgreSQL Single Server, Azure PostgreSQL Flexible Server, Azure MariaDB Single Server, Synapse Workspace, RDS Instance | | Allows basic authentication (Preview) | Indicates that a resource allows basic (local user/password or key-based) authentication | Azure SQL Server, RDS Instance |
-| Contains sensitive data (Preview) <br/> <br/> Prerequisite: [Enable data-aware security for storage accounts in Defender for CSPM](data-security-posture-enable.md), or [leverage Microsoft Purview Data Catalog to protect sensitive data](information-protection.md). | Indicates that a resource contains sensitive data. | Azure Storage Account, Azure Storage Account Container, AWS S3 bucket, Azure SQL Server, Azure SQL Database, Azure Data Lake Storage Gen2, Azure Database for PostgreSQL, Azure Database for MySQL, Azure Synapse Analytics, Azure Cosmos DB accounts |
+| Contains sensitive data (Preview) <br/> <br/> Prerequisite: [Enable data-aware security for storage accounts in Defender CSPM](data-security-posture-enable.md), or [leverage Microsoft Purview Data Catalog to protect sensitive data](information-protection.md). | Indicates that a resource contains sensitive data. | Azure Storage Account, Azure Storage Account Container, AWS S3 bucket, Azure SQL Server, Azure SQL Database, Azure Data Lake Storage Gen2, Azure Database for PostgreSQL, Azure Database for MySQL, Azure Synapse Analytics, Azure Cosmos DB accounts |
| Moves data to (Preview) | Indicates that a resource transfers its data to another resource | Storage account container, AWS S3, AWS RDS instance, AWS RDS cluster | | Gets data from (Preview) | Indicates that a resource gets its data from another resource | Storage account container, AWS S3, AWS RDS instance, AWS RDS cluster | | Has tags | Lists the resource tags of the cloud resource | All Azure and AWS resources |
defender-for-cloud Data Security Posture Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/data-security-posture-enable.md
Previously updated : 03/14/2023 Last updated : 04/13/2023
Follow these steps to enable data-aware security posture. Don't forget to review
1. Navigate to **Microsoft Defender for Cloud** > **Environmental settings**. 1. Select the relevant Azure subscription.
-1. For the Defender for CSPM plan, select the **On** status.
+1. For the Defender CSPM plan, select the **On** status.
- If Defender for CSPM is already on, select **Settings** in the Monitoring coverage column of the Defender CSPM plan and make sure that the **Sensitive data discovery** component is set to **On** status.
+ If Defender CSPM is already on, select **Settings** in the Monitoring coverage column of the Defender CSPM plan and make sure that the **Sensitive data discovery** component is set to **On** status.
## Enable in Defender CSPM (AWS)
defender-for-cloud Defender For Storage Configure Malware Scan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-storage-configure-malware-scan.md
You may want only certain users, such as a security admin or a SOC analyst, to h
Logic App-based responses are a simple, no-code approach to setting up a response. However, the response time is slower than the event-driven code-based approach.
-1. Deploy the [DeleteBlobLogicApp](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fstorageantimalwareprev.blob.core.windows.net%2Fworkflows%2FDeleteBlobLogicApp-template.json****) Azure Resource Manager (ARM) template using the Azure portal.
+1. Deploy the [DeleteBlobLogicApp](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fstorageantimalwareprev.blob.core.windows.net%2Fworkflows%2FDeleteBlobLogicApp-template.json) Azure Resource Manager (ARM) template using the Azure portal.
1. Add role assignment to the Logic App to allow it to delete blobs from your storage account: 1. Go to **Identity** in the side menu and select on **Azure role assignments**.
defender-for-cloud Episode Twenty https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/episode-twenty.md
Title: Cloud security explorer and attack path analysis | Defender for Cloud in
description: Learn about cloud security explorer and attack path analysis. Previously updated : 01/24/2023 Last updated : 04/13/2023 # Cloud security explorer and attack path analysis | Defender for Cloud in the Field
-**Episode description**: In this episode of Defender for Cloud in the Field, Tal Rosler joins Yuri Diogenes to talk about cloud security explorer and attack path analysis, two new capabilities in Defender for CSPM that were released at Ignite. The talk explains the rationale behind creating these features and how to use these features to prioritize what is more important to keep your environment more secure. Tal also demonstrates how to use these capabilities to quickly identify vulnerabilities and misconfigurations in cloud workloads.
+**Episode description**: In this episode of Defender for Cloud in the Field, Tal Rosler joins Yuri Diogenes to talk about cloud security explorer and attack path analysis, two new capabilities in Defender CSPM that were released at Ignite. The talk explains the rationale behind creating these features and how to use these features to prioritize what is more important to keep your environment more secure. Tal also demonstrates how to use these capabilities to quickly identify vulnerabilities and misconfigurations in cloud workloads.
<br> <br> <iframe src="https://aka.ms/docs/player?id=ce442350-7fab-40c0-b934-d93027b00853" width="1080" height="530" allowFullScreen="true" frameBorder="0"></iframe>
defender-for-cloud How To Manage Cloud Security Explorer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/how-to-manage-cloud-security-explorer.md
description: Learn how to build queries in cloud security explorer to find vulnerabilities that exist on your multicloud environment. Previously updated : 03/05/2023 Last updated : 04/13/2023 # Build queries with cloud security explorer
Learn more about [the cloud security graph, attack path analysis, and the cloud
- You must [enable agentless scanning](enable-vulnerability-assessment-agentless.md). -- You must [enable Defender for CSPM](enable-enhanced-security.md).
+- You must [enable Defender CSPM](enable-enhanced-security.md).
- You must [enable Defender for Containers](defender-for-containers-enable.md), and install the relevant agents in order to view attack paths that are related to containers.
defender-for-cloud Powershell Sample Vulnerability Assessment Baselines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/powershell-sample-vulnerability-assessment-baselines.md
Last updated 11/29/2022-+ # Set up baselines for vulnerability assessments on Azure SQL databases
defender-for-cloud Quickstart Automation Alert https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/quickstart-automation-alert.md
Title: Create a security automation for specific security alerts by using an Azure Resource Manager template (ARM template) or Bicep description: Learn how to create a Microsoft Defender for Cloud automation to trigger a logic app, which will be triggered by specific Defender for Cloud alerts by using an Azure Resource Manager template (ARM template) or Bicep. -+ Last updated 01/09/2023
defender-for-cloud Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/release-notes.md
Title: Release notes for Microsoft Defender for Cloud description: A description of what's new and changed in Microsoft Defender for Cloud Previously updated : 04/03/2023 Last updated : 04/13/2023 # What's new in Microsoft Defender for Cloud?
Updates in March include:
- [A new Defender for Storage plan is available, including near-real time malware scanning and sensitive data threat detection](#a-new-defender-for-storage-plan-is-available-including-near-real-time-malware-scanning-and-sensitive-data-threat-detection) - [Data-aware security posture (preview)](#data-aware-security-posture-preview)-- [New experience for managing the Azure default security policy](#improved-experience-for-managing-the-default-azure-security-policies)
+- [Improved experience for managing the default Azure security policies](#improved-experience-for-managing-the-default-azure-security-policies)
- [Defender CSPM (Cloud Security Posture Management) is now Generally Available (GA)](#defender-cspm-cloud-security-posture-management-is-now-generally-available-ga) - [Option to create custom recommendations and security standards in Microsoft Defender for Cloud](#option-to-create-custom-recommendations-and-security-standards-in-microsoft-defender-for-cloud) - [Microsoft cloud security benchmark (MCSB) version 1.0 is now Generally Available (GA)](#microsoft-cloud-security-benchmark-mcsb-version-10-is-now-generally-available-ga)
Read the [Microsoft Defender for Cloud blog](https://techcommunity.microsoft.com
### Defender CSPM (Cloud Security Posture Management) is now Generally Available (GA)
-We are announcing that Defender for CSPM is now Generally Available (GA). Defender CSPM offers all of the services available under the Foundational CSPM capabilities and adds the following benefits:
+We are announcing that Defender CSPM is now Generally Available (GA). Defender CSPM offers all of the services available under the Foundational CSPM capabilities and adds the following benefits:
- **Attack path analysis and ARG API** - Attack path analysis uses a graph-based algorithm that scans the cloud security graph to expose attack paths and suggests recommendations as to how best to remediate issues that will break the attack path and prevent a successful breach. You can also consume attack paths programmatically by querying the Azure Resource Graph (ARG) API. Learn how to use [attack path analysis](how-to-manage-attack-path.md). - **Cloud Security explorer** - Use the Cloud Security Explorer to run graph-based queries on the cloud security graph, to proactively identify security risks in your multicloud environments. Learn more about [cloud security explorer](concept-attack-path.md#what-is-cloud-security-explorer).
-Learn more about [Defender for CSPM](overview-page.md).
+Learn more about [Defender CSPM](overview-page.md).
### Option to create custom recommendations and security standards in Microsoft Defender for Cloud
defender-for-cloud Sql Information Protection Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/sql-information-protection-policy.md
Title: SQL information protection policy in Microsoft Defender for Cloud description: Learn how to customize information protection policies in Microsoft Defender for Cloud. + Last updated 11/09/2021
defender-for-cloud Support Matrix Defender For Cloud https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/support-matrix-defender-for-cloud.md
description: Learn about the Azure cloud environments where Defender for Cloud c
Previously updated : 03/08/2023 Last updated : 04/13/2023 # Support matrices for Defender for Cloud
Microsoft Defender for Cloud is available in the following Azure cloud environme
| **Microsoft Defender plans and extensions** | | | | | - [Microsoft Defender for Servers](./defender-for-servers-introduction.md) | GA | GA | GA | | - [Microsoft Defender for App Service](./defender-for-app-service-introduction.md) | GA | Not Available | Not Available |
-| - [Microsoft Defender for CSPM](./concept-cloud-security-posture-management.md) | GA | Not Available | Not Available |
+| - [Microsoft Defender CSPM](./concept-cloud-security-posture-management.md) | GA | Not Available | Not Available |
| - [Microsoft Defender for DNS](./defender-for-dns-introduction.md) | GA | GA | GA | | - [Microsoft Defender for Kubernetes](./defender-for-kubernetes-introduction.md) <sup>[1](#footnote1)</sup> | GA | GA | GA | | - [Microsoft Defender for Containers](./defender-for-containers-introduction.md) <sup>[7](#footnote7)</sup> | GA | GA | GA |
defender-for-iot Concept Agent Portfolio Overview Os Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/device-builders/concept-agent-portfolio-overview-os-support.md
Most of the Linux Operating Systems (OS) are covered by both agents. The agents
| Ubuntu 20.04 | ✓ | ✓ | ✓ | | Ubuntu 22.04 | ✓ | | |
-Defender Micro agent also supports Yocto as an open source.
+The Defender for IoT micro agent also supports Yocto as open source.
For additional information on supported operating systems, or to request access to the source code so you can incorporate it as a part of the device's firmware, contact your account manager, or send an email to <defender_micro_agent@microsoft.com>.
defender-for-iot Billing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/billing.md
You're billed based on the number of committed devices associated with each subs
[!INCLUDE [devices-inventoried](includes/devices-inventoried.md)]
-[Configure Windows Endpoint monitoring](configure-windows-endpoint-monitoring.md)
-[Configure DNS servers for reverse lookup resolution for OT monitoring](configure-reverse-dns-lookup.md)
+### Device coverage warning
+
+If the number of actual devices detected by Defender for IoT exceeds the number of committed devices currently listed on your subscription, a warning message will appear in Defender for IoT in the Azure portal. For example:
++
+This message indicates that you need to update the number of committed devices on the relevant subscription to match the actual number of devices being monitored.
+
+**To update the number of committed devices**:
+
+1. In the warning message, select **Get more device coverage**, which will open the pane to edit your plan for the relevant subscription.
+
+1. In the **Number of devices** field, update the number of committed devices to the actual number of devices being monitored by Defender for IoT for this subscription.
+
+ For example:
+
+ :::image type="content" source="media/billing/update-number-of-devices.png" alt-text="Screenshot of updating the number of committed devices on a subscription when there is a device coverage warning." lightbox="media/billing/update-number-of-devices.png":::
+
+1. Select **Next**.
+
+1. Select the **I accept the terms and conditions** option, and then select **Purchase**. Billing changes will be updated accordingly.
+
+> [!NOTE]
+> This warning is a reminder for you to update the number of committed devices for your subscription, and does not affect Defender for IoT functionality.
## Billing cycles and changes in your plans
defender-for-iot Faqs Ot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/faqs-ot.md
You can also use our [UI and CLI tools](how-to-troubleshoot-sensor.md#check-syst
For more information, see [Troubleshoot the sensor](how-to-troubleshoot-sensor.md) and [Troubleshoot the on-premises management console](how-to-troubleshoot-on-premises-management-console.md).
+## I am seeing a warning that we have exceeded the maximum number of devices for the subscription. How do I resolve this?
+
+If the number of actual devices detected by Defender for IoT exceeds the number of committed devices currently listed on your subscription, a warning message will appear in Defender for IoT in the Azure portal, and you will need to update the number of committed devices on the relevant subscription. For more information, see [Defender for IoT committed devices](billing.md#defender-for-iot-committed-devices).
+ ## Next steps - [Tutorial: Get started with Microsoft Defender for IoT for OT security](tutorial-onboarding.md)
defender-for-iot How To Manage Subscriptions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-manage-subscriptions.md
Your new plan is listed under the relevant subscription in the **Plans** grid.
Edit your Defender for IoT plans for OT networks if you need to change your plan commitment or update the number of committed devices or sites.
-For example, you may have more devices that require monitoring if you're increasing existing site coverage, have discovered more devices than expected, or there are network changes such as adding switches.
+For example, you may have more devices that require monitoring if you're increasing existing site coverage, or there are network changes such as adding switches. If the number of actual devices detected by Defender for IoT exceeds the number of committed devices currently listed on your subscription, you may see a warning message reminding you to update the number of committed devices on the relevant subscription.
**To edit a plan:**
For example, you may have more devices that require monitoring if you're increas
- Update the number of [committed devices](#calculate-committed-devices-for-ot-monitoring) - Update the number of sites (annual commitments only)
-1. Select the **I accept the terms** option, and then select **Purchase**.
+1. Select the **I accept the terms and conditions** option, and then select **Purchase**.
1. After any changes are made, make sure to reactivate your sensors. For more information, see [Reactivate an OT sensor](how-to-manage-sensors-on-the-cloud.md#reactivate-an-ot-sensor).
deployment-environments How To Create Access Environments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/how-to-create-access-environments.md
description: Learn how to create and access an environment in an Azure Deploymen
-+ Last updated 03/14/2023
deployment-environments How To Manage Environments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/how-to-manage-environments.md
description: Learn how to manage your Azure Deployment Environments deployment e
+ Last updated 02/28/2023
dev-box How To Install Dev Box Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/how-to-install-dev-box-cli.md
description: Learn how to install the Azure CLI and the Microsoft Dev Box CLI extension so you can create Dev Box resources from the command line. +
You might find the following commands useful as you work with the Dev Box CLI ex
## Next steps
-For complete command listings, refer to the [Microsoft Dev Box and Azure Deployment Environments Azure CLI documentation](https://aka.ms/CLI-reference).
+For complete command listings, refer to the [Microsoft Dev Box and Azure Deployment Environments Azure CLI documentation](https://aka.ms/CLI-reference).
devtest-labs Add Artifact Repository https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/add-artifact-repository.md
Last updated 01/11/2022-+ # Add an artifact repository to a lab
Set-AzContext -SubscriptionId <Your Azure subscription ID>
## Next steps - [Specify mandatory artifacts for DevTest Labs VMs](devtest-lab-mandatory-artifacts.md)-- [Diagnose artifact failures in the lab](devtest-lab-troubleshoot-artifact-failure.md)
+- [Diagnose artifact failures in the lab](devtest-lab-troubleshoot-artifact-failure.md)
devtest-labs Automate Add Lab User https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/automate-add-lab-user.md
Last updated 06/26/2020 -+ # Automate adding a lab user to a lab in Azure DevTest Labs
devtest-labs Configure Shared Image Gallery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/configure-shared-image-gallery.md
Title: Configure a shared image gallery description: Learn how to configure a shared image gallery in Azure DevTest Labs, which enables users to access images from a shared location while creating lab resources. + Last updated 06/26/2020
devtest-labs Create Lab Windows Vm Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/create-lab-windows-vm-bicep.md
description: Use Bicep to create a lab that has a virtual machine in Azure DevTe
-+ Last updated 03/22/2022
devtest-labs Create Lab Windows Vm Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/create-lab-windows-vm-template.md
description: Use an Azure Resource Manager (ARM) template to create a lab that h
-+ Last updated 01/03/2022
devtest-labs Deploy Nested Template Environments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/deploy-nested-template-environments.md
Title: Deploy nested ARM template environments description: Learn how to nest Azure Resource Manager (ARM) templates to deploy Azure DevTest Labs environments. + Last updated 01/26/2022
The following example shows the main *azuredeploy.json* ARM template file for th
- For more information about DevTest Labs environments, see [Use ARM templates to create DevTest Labs environments](devtest-lab-create-environment-from-arm.md). - For more information about using the Visual Studio Azure Resource Group project template, including code samples, see [Creating and deploying Azure resource groups through Visual Studio](../azure-resource-manager/templates/create-visual-studio-deployment-project.md).-
devtest-labs Devtest Lab Announcements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-announcements.md
Title: Post an announcement to a lab description: Learn how to post a custom announcement in an existing lab to notify users about recent changes or additions to the lab in Azure DevTest Labs. + Last updated 06/26/2020
devtest-labs Devtest Lab Store Secrets In Key Vault https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-store-secrets-in-key-vault.md
Title: Store secrets in a key vault description: Learn how to store secrets in an Azure Key Vault and use them while creating a VM, formula, or an environment. + Last updated 06/26/2020
devtest-labs Devtest Lab Use Arm And Powershell For Lab Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-use-arm-and-powershell-for-lab-resources.md
Title: Create and deploy labs with Azure Resource Manager (ARM) templates description: Learn how Azure DevTest Labs uses Azure Resource Manager (ARM) templates to create and configure lab virtual machines (VMs) and environments. + Last updated 01/11/2022
devtest-labs Resource Group Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/resource-group-control.md
Title: Specify resource group for Azure VMs in DevTest Labs description: Learn how to specify a resource group for VMs in a lab in Azure DevTest Labs. + Last updated 10/18/2021
devtest-labs Samples Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/samples-cli.md
Title: Azure CLI Samples description: Learn about Azure CLI scripts. With these samples, you can create a virtual machine and then start, stop, and delete it in Azure DevTest Labs. + Last updated 02/02/2022
devtest-labs Samples Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/samples-powershell.md
Title: Azure PowerShell Samples description: Learn about Azure PowerShell scripts. These samples help you manage labs in Azure Lab Services. + Last updated 02/02/2022
devtest How To Manage Monitor Devtest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest/offer/how-to-manage-monitor-devtest.md
ms.prod: visual-studio-windows Last updated 10/12/2021-+ # Managing Azure DevTest Subscriptions
digital-twins How To Create Data History Connection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/how-to-create-data-history-connection.md
-# Mandatory fields.
Title: Create a data history connection description: See how to set up a data history connection for historizing Azure Digital Twins updates into Azure Data Explorer.
Last updated 03/28/2023 -+ # Optional fields. Don't forget to remove # if you need a field. #
-#
#
digital-twins How To Create Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/how-to-create-endpoints.md
-# Mandatory fields.
Title: Create endpoints description: Learn how to set up endpoints for Azure Digital Twins data
Last updated 02/08/2023 + # Optional fields. Don't forget to remove # if you need a field. #
-#
#
Here's an example of a dead-letter message for a [twin create notification](conc
## Next steps
-To actually send data from Azure Digital Twins to an endpoint, you'll need to define an [event route](concepts-route-events.md). These routes let developers wire up event flow, throughout the system and to downstream services. A single route can allow multiple notifications and event types to be selected. Continue on to create an event route to your endpoint in [Create routes and filters](how-to-create-routes.md).
+To actually send data from Azure Digital Twins to an endpoint, you'll need to define an [event route](concepts-route-events.md). These routes let developers wire up event flow, throughout the system and to downstream services. A single route can allow multiple notifications and event types to be selected. Continue on to create an event route to your endpoint in [Create routes and filters](how-to-create-routes.md).
dms Create Dms Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/create-dms-bicep.md
Last updated 03/21/2022 -
- - subject-armqs
- - mode-arm
+ # Quickstart: Create instance of Azure Database Migration Service using Bicep
dms Create Dms Resource Manager Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/create-dms-resource-manager-template.md
Last updated 06/29/2020 -
- - subject-armqs
- - mode-arm
+ # Quickstart: Create instance of Azure Database Migration Service using ARM template
dms Migration Dms Powershell Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/migration-dms-powershell-cli.md
Last updated 04/26/2022 +
If you receive the error *"The subscription isn't registered to use namespace 'M
- For Azure PowerShell reference documentation for SQL Server database migrations, see [Az.DataMigration](/powershell/module/az.datamigration). - For Azure CLI reference documentation for SQL Server database migrations, see [az datamigration](/cli/azure/datamigration).-- For Azure Samples code repository, see [data-migration-sql](https://github.com/Azure-Samples/data-migration-sql)
+- For Azure Samples code repository, see [data-migration-sql](https://github.com/Azure-Samples/data-migration-sql)
dns Dns Get Started Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-get-started-bicep.md
Last updated 09/27/2022 -+ #Customer intent: As an administrator or developer, I want to learn how to configure Azure DNS using Bicep so I can use Azure DNS for my name resolution.
dns Dns Get Started Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-get-started-template.md
Last updated 09/27/2022 -+ #Customer intent: As an administrator or developer, I want to learn how to configure Azure DNS using Azure ARM template so I can use Azure DNS for my name resolution.
dns Dns Import Export https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-import-export.md
description: Learn how to import and export a DNS (Domain Name System) zone file
+ Last updated 09/27/2022
dns Dns Private Resolver Get Started Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-private-resolver-get-started-bicep.md
Last updated 10/07/2022 -+ #Customer intent: As an administrator or developer, I want to learn how to create Azure DNS Private Resolver using Bicep so I can use Azure DNS Private Resolver as forwarder.
dns Dns Private Resolver Get Started Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-private-resolver-get-started-template.md
Last updated 10/07/2022 -+ #Customer intent: As an administrator or developer, I want to learn how to create Azure DNS Private Resolver using ARM template so I can use Azure DNS Private Resolver as forwarder..
dns Private Dns Import Export https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/private-dns-import-export.md
description: Learn how to import and export a DNS zone file to Azure private DN
+ Last updated 09/27/2022
energy-data-services How To Manage Audit Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/how-to-manage-audit-logs.md
+
+ Title: How to manage audit logs for Microsoft Azure Data Manager for Energy Preview
+description: Learn how to use audit logs on Azure Data Manager for Energy Preview
++++ Last updated : 04/11/2023+
+#Customer intent: As a developer, I want to use audit logs to check audit trail for data plane APIs for Azure Data Manager for Energy Preview.
+++
+# Manage audit logs
+Audit logs provide an audit trail for data plane APIs on Azure Data Manager for Energy. With audit logs, you can tell:
+* Who performed an action
+* What action was performed
+* When the action was performed
+* The status of the action
+
+For example, when you "Add a new member" to the ```users.datalake.admins``` entitlement group using the Entitlements API, you can see this information in the audit logs.
+
+[![Screenshot of audit logs for entitlement](media/how-to-manage-audit-logs/how-to-manage-audit-logs-4-entilements.png)](media/how-to-manage-audit-logs/how-to-manage-audit-logs-4-entilements.png#lightbox)
+
+## Enable audit logs
+To enable audit logs in diagnostic logging, select your Azure Data Manager for Energy instance in the Azure portal.
+* Select the **Activity log** screen, and then select **Diagnostic settings**.
+* Select **+ Add diagnostic setting**.
+* Enter the Diagnostic settings name.
+* Select **Audit Events** as the Category.
+
+[![Screenshot of audit events option in diagnostic settings](media/how-to-manage-audit-logs/how-to-manage-audit-logs-1-audit-event-diagnostic-logs.png)](media/how-to-manage-audit-logs/how-to-manage-audit-logs-1-audit-event-diagnostic-logs.png#lightbox)
+
+* Select appropriate Destination details for accessing the diagnostic logs.
+
+> [!NOTE]
+> It might take up to 15 minutes for the first logs to show in Log Analytics.
+For information on how to work with diagnostic logs, see the [Azure resource logs documentation](../azure-monitor/essentials/platform-logs-overview.md).
+
+## Audit log details
+The audit logs for the Azure Data Manager for Energy service return the following fields.
+
+|Field Name| Type| Description|
+|-|-|-|
+| TenantID | String | The tenant of your Azure Data Manager for Energy instance.|
+| TimeGenerated | UTC format | The time of the audit log. |
+| Category | String | The diagnostic settings category to which the logs belong.|
+| Location | String | Location of the Azure Data Manager for Energy resource. |
+| ServiceName | String | Name of OSDU service running in Azure Data Manager for Energy. For example: Partition, Search, Indexer, Legal, Entitlements, Workflow, Register, Unit, Crs-catalog, File, Schema, and Dataset. |
+| OperationName | String | Operation ID or operation name associated with the data plane API that emits the audit log, for example, "Add member". |
+| Data partition ID | String | Data partition ID on which the operation is performed. |
+| Action | String | The type of operation, that is, whether it's create, delete, update, and so on. |
+| ActionID | String | ID associated with the operation. |
+| PuID | String | ObjectId of the user in Azure AD. |
+| ResultType | String | Defines the success or failure of the operation. |
+| Operation Description | String | Provides specific details of the response. These details can include tracing information, such as the symptoms of the result, that can be used for further analysis. |
+| RequestId | String | The unique ID associated with the request that triggered the operation on the data plane. |
+| Message | String | Provides the message associated with the success or failure of the operation. |
+| ResourceID | String | The Azure Data Manager for Energy resource ID of the customer to which the audit log belongs. |
+
+## Sample queries
+
+Here are some basic Log Analytics queries you can use to explore your log data.
+
+1. Run the following query to group operations by ServiceName:
+
+```sql
+OEPAuditLogs
+| summarize count() by ServiceName
+```
+
+[![Screenshot of audit log query results summarized by service name](media/how-to-manage-audit-logs/how-to-manage-audit-logs-3-allservices.png)](media/how-to-manage-audit-logs/how-to-manage-audit-logs-3-allservices.png#lightbox)
+
+2. Run the following query to see the 100 most recent logs:
+
+```sql
+OEPAuditLogs
+| limit 100
+```
+
+3. Run the following query to get all the failed results:
+
+```sql
+OEPAuditLogs
+| where ResultType contains "Failure"
+```
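As one more illustrative query (not from the source article), you can combine the fields documented above, for example counting recent failures by service and operation. The sketch below runs it through the Azure CLI; the workspace GUID is a placeholder, and `az monitor log-analytics query` may require the `log-analytics` CLI extension.

```azurecli
# Sketch: count audit-log failures from the last 24 hours, grouped by service and operation.
# <workspace-guid> is a placeholder for your Log Analytics workspace ID.
az monitor log-analytics query \
  --workspace "<workspace-guid>" \
  --analytics-query "OEPAuditLogs | where TimeGenerated > ago(1d) | where ResultType contains 'Failure' | summarize count() by ServiceName, OperationName"
```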
++
+## Next steps
+
+Learn about Managed Identity:
+> [!div class="nextstepaction"]
+> [Managed Identity in Azure Data Manager for Energy Preview](how-to-use-managed-identity.md)
++
event-grid Blob Event Quickstart Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/blob-event-quickstart-bicep.md
Title: Send Blob storage events to web endpoint - Bicep
description: Use Azure Event Grid and a Bicep file to create Blob storage account, subscribe to events, and send events to a Webhook. Last updated 07/13/2022 + # Quickstart: Route Blob storage events to web endpoint by using Bicep
event-grid Blob Event Quickstart Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/blob-event-quickstart-template.md
Title: 'Send Blob storage events to web endpoint - template'
description: Use Azure Event Grid and an Azure Resource Manager template to create Blob storage account, and subscribe its events. Send the events to a Webhook.' Last updated 09/28/2021 -+ # Quickstart: Route Blob storage events to web endpoint by using an ARM template
event-grid Create View Manage System Topics Arm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/create-view-manage-system-topics-arm.md
Title: Use Azure Resource Manager templates to create system topics in Azure Event Grid description: This article shows how to use Azure Resource Manager templates to create system topics in Azure Event Grid. + Last updated 07/22/2021
event-grid Create View Manage System Topics Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/create-view-manage-system-topics-cli.md
Title: Create, view, and manage Azure Event Grid system topics using CLI description: This article shows how to use Azure CLI to create, view, and delete system topics. + Last updated 07/22/2021
To delete a system topic, use the following command:
``` ## Next steps
-See the [System topics in Azure Event Grid](system-topics.md) section to learn more about system topics and topic types supported by Azure Event Grid.
+See the [System topics in Azure Event Grid](system-topics.md) section to learn more about system topics and topic types supported by Azure Event Grid.
event-grid Custom Topics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/custom-topics.md
Title: Custom topics in Azure Event Grid description: Describes custom topics in Azure Event Grid. + Last updated 03/10/2023
event-grid Delivery Properties https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/delivery-properties.md
Title: Azure Event Grid - Set custom headers on delivered events description: Describes how you can set custom headers (or delivery properties) on delivered events. + Last updated 02/21/2023
event-grid Enable Diagnostic Logs Topic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/enable-diagnostic-logs-topic.md
Title: Azure Event Grid - Enable diagnostic logs for Event Grid resources description: This article provides step-by-step instructions on how to enable diagnostic logs for Event Grid resources. + Last updated 11/11/2021
event-grid Event Filtering https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/event-filtering.md
Title: Event filtering for Azure Event Grid description: Describes how to filter events when creating an Azure Event Grid subscription. + Last updated 09/09/2022
event-grid Install K8s Extension https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/kubernetes/install-k8s-extension.md
description: This article provides steps to install Event Grid on Azure Arc-enab
+ Last updated 03/24/2022
event-grid Powershell Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/powershell-samples.md
Title: Azure PowerShell samples - Event Grid | Microsoft Docs description: This article includes a table with links to Azure PowerShell scripting samples for Azure Event Grid. + Last updated 09/15/2021
The following table includes links to Azure PowerShell samples for Event Grid.
## Event Grid topics - [Create custom topic](scripts/powershell-create-custom-topic.md) - Creates an Event Grid custom topic, and returns the endpoint and key. -
event-grid Template Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/template-samples.md
Title: Azure Resource Manager template samples - Event Grid | Microsoft Docs description: This article provides a list of Azure Resource Manager template samples for Azure Event Grid on GitHub. + Last updated 09/28/2021
event-hubs Event Hubs Auto Inflate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/event-hubs-auto-inflate.md
Title: Automatically scale up throughput units in Azure Event Hubs description: Enable Auto-inflate on a namespace to automatically scale up throughput units (standard tier). + Last updated 06/13/2022
event-hubs Event Hubs Bicep Namespace Event Hub https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/event-hubs-bicep-namespace-event-hub.md
description: 'Quickstart: Create an Event Hubs namespace with an event hub and a
-+ Last updated 03/22/2022
event-hubs Event Hubs Get Connection String https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/event-hubs-get-connection-string.md
Title: Get connection string - Azure Event Hubs | Microsoft Docs description: This article provides instructions for getting a connection string that clients can use to connect to Azure Event Hubs. + Last updated 06/21/2022
event-hubs Event Hubs Ip Filtering https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/event-hubs-ip-filtering.md
Title: Azure Event Hubs Firewall Rules | Microsoft Docs description: Use Firewall Rules to allow connections from specific IP addresses to Azure Event Hubs. + Last updated 02/15/2023
event-hubs Event Hubs Resource Manager Namespace Event Hub Enable Capture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/event-hubs-resource-manager-namespace-event-hub-enable-capture.md
Title: Create an event hub with capture enabled - Azure Event Hubs | Microsoft D
description: Create an Azure Event Hubs namespace with one event hub and enable Capture using Azure Resource Manager template Last updated 08/26/2022-+ ms.devlang: azurecli
event-hubs Event Hubs Resource Manager Namespace Event Hub https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/event-hubs-resource-manager-namespace-event-hub.md
Title: 'Quickstart: Create an Azure event hub with consumer group' description: 'Quickstart: Create an Event Hubs namespace with an event hub and a consumer group using Azure Resource Manager templates' -+ Last updated 06/08/2021
event-hubs Event Hubs Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/event-hubs-samples.md
Title: Samples - Azure Event Hubs | Microsoft Docs description: This article provides a list of samples for Azure Event Hubs that are on GitHub. + Last updated 03/21/2023 - # Git repositories with samples for Azure Event Hubs
event-hubs Event Hubs Service Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/event-hubs-service-endpoints.md
Title: Virtual Network service endpoints - Azure Event Hubs | Microsoft Docs description: This article provides information on how to add a Microsoft.EventHub service endpoint to a virtual network. + Last updated 02/15/2023
event-hubs Resource Manager Exceptions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/resource-manager-exceptions.md
Title: Azure Event Hubs - Resource Manager exceptions | Microsoft Docs description: List of Azure Event Hubs exceptions surfaced by Azure Resource Manager and suggested actions. + Last updated 05/10/2021
The following sections provide various exceptions/errors that are surfaced throu
| Error code | Error subcode | Error message | Description | Recommendation | | - | - | - | -- | -- |
-| Internal Server Error | none | Internal Server Error. | The Event Hubs service had an internal error. | Retry the failing operation. If the operation continues to fail, contact support. |
+| Internal Server Error | none | Internal Server Error. | The Event Hubs service had an internal error. | Retry the failing operation. If the operation continues to fail, contact support. |
expressroute Expressroute Howto Circuit Resource Manager Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-howto-circuit-resource-manager-template.md
Last updated 11/13/2019 --+ # Create an ExpressRoute circuit by using Azure Resource Manager template
expressroute Expressroute Howto Reset Peering https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-howto-reset-peering.md
+ Last updated 12/15/2020
expressroute Expressroute Move https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-move.md
+ Last updated 06/02/2021 - # Moving ExpressRoute circuits from the classic to the Resource Manager deployment model This article provides an overview of what happens when you move an Azure ExpressRoute circuit from the classic to the Azure Resource Manager deployment model.
Follow the instructions that are described in [Move an ExpressRoute circuit from
* [Create an ExpressRoute circuit](expressroute-howto-circuit-arm.md) * [Configure routing](expressroute-howto-routing-arm.md) * [Link a virtual network to an ExpressRoute circuit](expressroute-howto-linkvnet-arm.md)-
expressroute How To Move Peering https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/how-to-move-peering.md
Previously updated : 04/28/2021 Last updated : 04/07/2023 - # Move a public peering to Microsoft peering This article helps you move a public peering configuration to Microsoft peering with no downtime. ExpressRoute supports using Microsoft peering with route filters for Azure PaaS services, such as Azure storage and Azure SQL Database. You now need only one routing domain to access Microsoft PaaS and SaaS services. You can use route filters to selectively advertise the PaaS service prefixes for Azure regions you want to consume.
-Azure public peering has 1 NAT IP address associated to each BGP session. Microsoft peering allows you to configure your own NAT allocations, as well as use route filters for selective prefix advertisements. Public Peering is a unidirectional service using which Connectivity is always initiated from your WAN to Microsoft Azure services. Microsoft Azure services will not be able to initiate connections into your network through this routing domain.
+> [!IMPORTANT]
+> Public peering for ExpressRoute is being retired on **March 31, 2024**. For more information, see [**retirement notice**](https://azure.microsoft.com/updates/retirement-notice-migrate-from-public-peering-by-march-31-2024/).
+
+Azure public peering has one NAT IP address associated with each BGP session. Microsoft peering allows you to configure your own NAT allocations, and to use route filters for selective prefix advertisements. Public peering is a unidirectional service in which connectivity is always initiated from your WAN to Microsoft Azure services. Microsoft Azure services can't initiate connections into your network through this routing domain.
+
+## Peering comparison
+
+| Aspect | Public peering | Microsoft peering |
+| | | |
+| Number of NAT IP addresses | 1 (not scalable) | Per scale*** |
+| Call initiation direction | Unidirectional: on-premises to Microsoft | Bidirectional |
+| Prefix advertisement | Nonselectable | Advertisement of Microsoft prefixes controlled by route filters |
+| Support | No new public peering deployments. Public peering will be retired on March 31, 2024. | Fully supported |
+
+*** BYOIP: you can scale the number of NAT IP addresses assigned depending on your call volume. To get NAT IP addresses, work with your service provider.
-Once public peering is enabled, you can connect to all Azure services. We do not allow you to selectively pick services for which we advertise routes to. While Microsoft peering is a bi-directional connectivity where connection can be initiated from Microsoft Azure service along with your WAN. For more information about routing domains and peering, see [ExpressRoute circuits and routing domains](expressroute-circuit-peerings.md).
+Once public peering is enabled, you can connect to all Azure services. You can't selectively pick the services for which we advertise routes. In contrast, Microsoft peering is a bi-directional connectivity model in which connections can be initiated from Microsoft Azure services as well as from your WAN. For more information about routing domains and peering, see [ExpressRoute circuits and routing domains](expressroute-circuit-peerings.md).
## <a name="before"></a>Before you begin
-To connect to Microsoft peering, you need to set up and manage NAT. Your connectivity provider may set up and manage the NAT as a managed service. If you are planning to access the Azure PaaS and Azure SaaS services on Microsoft peering, it's important to size the NAT IP pool correctly. For more information about NAT for ExpressRoute, see the [NAT requirements for Microsoft peering](expressroute-nat.md#nat-requirements-for-microsoft-peering). When you connect to Microsoft through Azure ExpressRoute(Microsoft peering), you have multiple links to Microsoft. One link is your existing Internet connection, and the other is via ExpressRoute. Some traffic to Microsoft might go through the Internet but come back via ExpressRoute, or vice versa.
+To connect to Microsoft peering, you need to set up and manage NAT. Your connectivity provider may set up and manage the NAT as a managed service. If you're planning to access the Azure PaaS and Azure SaaS services on Microsoft peering, it's important to size the NAT IP pool correctly. For more information about NAT for ExpressRoute, see the [NAT requirements for Microsoft peering](expressroute-nat.md#nat-requirements-for-microsoft-peering). When you connect to Microsoft through Azure ExpressRoute (Microsoft peering), you have multiple links to Microsoft. One link is your existing Internet connection, and the other is via ExpressRoute. Some traffic to Microsoft might go through the Internet but come back via ExpressRoute, or vice versa.
![Bidirectional connectivity](./media/how-to-move-peering/bidirectional-connectivity.jpg)
To connect to Microsoft peering, you need to set up and manage NAT. Your connect
Refer to [Asymmetric routing with multiple network paths](./expressroute-asymmetric-routing.md) for caveats of asymmetric routing before configuring Microsoft peering.
-* If you are using public peering and currently have IP Network rules for public IP addresses that are used to access [Azure Storage](../storage/common/storage-network-security.md) or [Azure SQL Database](/azure/azure-sql/database/vnet-service-endpoint-rule-overview), you need to make sure that the NAT IP pool configured with Microsoft peering is included in the list of public IP addresses for the Azure storage account or Azure SQL account.
-* Note that legacy Public peering makes use of Source Network Address Translation (SNAT) to a Microsoft-registered public IP, while Microsoft peering does not.
-* In order to move to Microsoft peering with no downtime, use the steps in this article in the order that they are presented.
+* If you're using public peering and currently have IP network rules for public IP addresses that are used to access [Azure Storage](../storage/common/storage-network-security.md) or [Azure SQL Database](/azure/azure-sql/database/vnet-service-endpoint-rule-overview), make sure that the NAT IP pool configured with Microsoft peering is included in the list of public IP addresses for the Azure storage account or the Azure SQL account.
+* Legacy Public peering makes use of Source Network Address Translation (SNAT) to a Microsoft-registered public IP, while Microsoft peering doesn't.
+* In order to move to Microsoft peering with no downtime, use the steps in this article in the order that they're presented.
## <a name="create"></a>1. Create Microsoft peering
-If Microsoft peering has not been created, use any of the following articles to create Microsoft peering. If your connectivity provider offers managed layer 3 services, you can ask the connectivity provider to enable Microsoft peering for your circuit.
+If Microsoft peering hasn't been created, use any of the following articles to create Microsoft peering. If your connectivity provider offers managed layer 3 services, you can ask the connectivity provider to enable Microsoft peering for your circuit.
-If the layer 3 is managed by you the following information is required before you proceed:
+If you manage layer 3, the following information is required before you can proceed:
-* A /30 subnet for the primary link. This must be a valid public IPv4 prefix owned by you and registered in an RIR / IRR. From this subnet you will assign the first useable IP address to your router as Microsoft uses the second useable IP for its router.<br>
-* A /30 subnet for the secondary link. This must be a valid public IPv4 prefix owned by you and registered in an RIR / IRR. From this subnet you will assign the first useable IP address to your router as Microsoft uses the second useable IP for its router.<br>
+* A /30 subnet for the primary link. The prefix must be a valid public IPv4 prefix owned by you and registered in an RIR / IRR. From this subnet, you assign the first usable IP address to your router, as Microsoft uses the second usable IP for its router.<br>
+* A /30 subnet for the secondary link. The prefix must be a valid public IPv4 prefix owned by you and registered in an RIR / IRR. From this subnet, you assign the first usable IP address to your router, as Microsoft uses the second usable IP for its router.<br>
* A valid VLAN ID to establish this peering on. Ensure that no other peering in the circuit uses the same VLAN ID. For both Primary and Secondary links you must use the same VLAN ID.<br>
* AS number for peering. You can use both 2-byte and 4-byte AS numbers.<br>
* Advertised prefixes: You must provide a list of all prefixes you plan to advertise over the BGP session. Only public IP address prefixes are accepted. If you plan to send a set of prefixes, you can send a comma-separated list. These prefixes must be registered to you in an RIR / IRR.<br>
* Routing Registry Name: You can specify the RIR / IRR against which the AS number and prefixes are registered.
-* **Optional** - Customer ASN: If you are advertising prefixes that are not registered to the peering AS number, you can specify the AS number to which they are registered.<br>
+* **Optional** - Customer ASN: If you're advertising prefixes that aren't registered to the peering AS number, you can specify the AS number to which they're registered.<br>
* **Optional** - An MD5 hash if you choose to use one.

Detailed instructions to enable Microsoft peering can be found in the following articles:
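Before you jump to those articles, here's a purely illustrative Azure CLI sketch of the same peering configuration (not a substitute for the linked instructions); every value shown (resource names, subnets, VLAN ID, ASN, prefixes, and registry name) is a placeholder:

```azurecli
# Sketch: create Microsoft peering on an existing ExpressRoute circuit.
# All values are placeholders; use your own registered public prefixes, ASN, and registry.
az network express-route peering create \
  --resource-group MyResourceGroup \
  --circuit-name MyCircuit \
  --peering-type MicrosoftPeering \
  --peer-asn 65001 \
  --vlan-id 400 \
  --primary-peer-subnet 203.0.113.0/30 \
  --secondary-peer-subnet 203.0.113.4/30 \
  --advertised-public-prefixes 203.0.113.128/27 \
  --routing-registry-name ARIN
```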
Verify that the Microsoft peering is enabled and the advertised public prefixes
## <a name="routefilter"></a>3. Configure and attach a route filter to the circuit
-By default, new Microsoft peering do not advertise any prefixes until a route filter is attached to the circuit. When you create a route filter rule, you can specify the list of service communities for Azure regions that you want to consume for Azure PaaS services. This provides you the flexibility to filter the routes as per your requirement, as shown in the following screenshot:
+By default, a new Microsoft peering doesn't advertise any prefixes until a route filter is attached to the circuit. When you create a route filter rule, you can specify the list of service communities for Azure regions that you want to consume for Azure PaaS services. This feature provides you with the flexibility to filter the routes as per your requirement, as shown in the following screenshot:
![Merge public peering](./media/how-to-move-peering/routefilter.jpg)
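As a rough sketch only (the exact service community values and the step that attaches the filter to your peering are covered in the linked articles), a route filter and a rule might be created with the Azure CLI as follows; the names and the community value are placeholders.

```azurecli
# Sketch: create a route filter and a rule that allows a BGP service community.
# <service-community> is a placeholder, for example a 12076:xxxxx value for the region or service you consume.
az network route-filter create \
  --resource-group MyResourceGroup \
  --name MyRouteFilter \
  --location eastus

az network route-filter rule create \
  --resource-group MyResourceGroup \
  --filter-name MyRouteFilter \
  --name AllowServiceCommunity \
  --access Allow \
  --communities "<service-community>"
```

After the filter and its rules exist, attach the filter to the Microsoft peering of your circuit as described in the articles that follow.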
Configure route filters using any of the following articles:
## <a name="delete"></a>4. Delete the public peering
-After verifying that the Microsoft peering is configured and the prefixes you wish to consume are correctly advertised on Microsoft peering, you can then delete the public peering. To delete the public peering, use any of the following articles:
+After verifying Microsoft peering is configured and the prefixes you want to use are correctly advertised through Microsoft peering, you can then delete the public peering. To delete public peering, you can use Azure PowerShell or Azure CLI. For more information, see the following articles:
* [Delete Azure public peering using Azure PowerShell](about-public-peering.md#powershell) * [Delete Azure public peering using CLI](about-public-peering.md#cli)
expressroute Quickstart Create Expressroute Vnet Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/quickstart-create-expressroute-vnet-bicep.md
Last updated 03/24/2022 -+ # Quickstart: Create an ExpressRoute circuit with private peering using Bicep
expressroute Quickstart Create Expressroute Vnet Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/quickstart-create-expressroute-vnet-template.md
Last updated 10/12/2020 -+ # Quickstart: Create an ExpressRoute circuit with private peering using an ARM template
firewall-manager Ip Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall-manager/ip-groups.md
description: IP groups allow you to group and manage IP addresses for Azure Fire
+ Last updated 01/10/2023
firewall-manager Quick Firewall Policy Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall-manager/quick-firewall-policy-bicep.md
Last updated 07/05/2022 -+ # Quickstart: Create an Azure Firewall and a firewall policy - Bicep
firewall-manager Quick Firewall Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall-manager/quick-firewall-policy.md
Last updated 02/17/2021 -+ # Quickstart: Create an Azure Firewall and a firewall policy - ARM template
firewall-manager Quick Secure Virtual Hub Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall-manager/quick-secure-virtual-hub-bicep.md
Last updated 06/28/2022 -+ # Quickstart: Secure your virtual hub using Azure Firewall Manager - Bicep
firewall-manager Quick Secure Virtual Hub https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall-manager/quick-secure-virtual-hub.md
Last updated 08/28/2020 -+ # Quickstart: Secure your virtual hub using Azure Firewall Manager - ARM template
firewall Deploy Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/deploy-bicep.md
-+ Last updated 06/28/2022
firewall Deploy Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/deploy-template.md
-+ Last updated 05/10/2021
firewall Firewall Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/firewall-diagnostics.md
description: In this article, you learn how to enable and manage Azure Firewall
+ Last updated 11/15/2022
firewall Ftp Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/ftp-support.md
description: By default, Active FTP is disabled on Azure Firewall. You can enabl
+ Last updated 09/23/2021
firewall Ip Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/ip-groups.md
description: IP groups allow you to group and manage IP addresses for Azure Fire
+ Last updated 01/10/2023
firewall Protect Azure Kubernetes Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/protect-azure-kubernetes-service.md
See [virtual network route table documentation](../virtual-network/virtual-netwo
The following three network rules show how you can configure your firewall; you may need to adapt these rules based on your deployment. The first rule allows access to port 9000 via TCP. The second rule allows access to ports 1194 and 123 via UDP. Both of these rules only allow traffic destined to the Azure region CIDR that we're using, in this case East US.
-Finally, we'll add a third network rule opening port 123 to `ntp.ubuntu.com` FQDN via UDP (adding an FQDN as a network rule is one of the specific features of Azure Firewall, and you'll need to adapt it when using your own options).
+
+Finally, we'll add a third network rule opening port 123 to an Internet time server FQDN (for example, `ntp.ubuntu.com`) via UDP. Adding an FQDN as a network rule is one of the specific features of Azure Firewall, and you'll need to adapt it when using your own options.
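As an illustrative sketch only (the firewall name, resource group, source range, collection name, and priority are placeholders, and the `az network firewall` commands come from the `azure-firewall` CLI extension), such an FQDN-based network rule could look like the following. FQDN filtering in network rules requires DNS proxy to be enabled on the firewall.

```azurecli
# Sketch: network rule allowing UDP 123 from the AKS subnet to an NTP server FQDN.
# Placeholder names and ranges; adapt them to your deployment.
az network firewall network-rule create \
  --resource-group MyResourceGroup \
  --firewall-name MyFirewall \
  --collection-name aks-network-rules \
  --name ntp \
  --priority 200 \
  --action Allow \
  --protocols UDP \
  --source-addresses 10.240.0.0/16 \
  --destination-fqdns ntp.ubuntu.com \
  --destination-ports 123
```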
After setting the network rules, we'll also add an application rule using the `AzureKubernetesService` that covers all needed FQDNs accessible through TCP port 443 and port 80.
firewall Quick Create Ipgroup Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/quick-create-ipgroup-bicep.md
-+ Last updated 08/25/2022
firewall Quick Create Ipgroup Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/quick-create-ipgroup-template.md
-+ Last updated 05/10/2021
firewall Quick Create Multiple Ip Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/quick-create-multiple-ip-bicep.md
-+ Last updated 08/11/2022
firewall Quick Create Multiple Ip Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/quick-create-multiple-ip-template.md
-+ Last updated 08/28/2020
firewall Sample Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/sample-powershell.md
Last updated 11/19/2019 -+ # Azure Firewall PowerShell samples
The following table includes links to Azure PowerShell script samples that creat
| Sample | Description | | | -- | |[Create an Azure Firewall and test infrastructure](scripts/sample-create-firewall-test.md)|Creates an Azure Firewall and a test network infrastructure.|-----
firewall Snat Private Range https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/snat-private-range.md
Last updated 10/27/2022 -+ # Azure Firewall SNAT private IP address ranges
firewall Sql Fqdn Filtering https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/sql-fqdn-filtering.md
description: In this article, you learn how to configure SQL FQDNs in Azure Fire
+ Last updated 10/31/2022
If you use non-default ports for SQL IaaS traffic, you can configure those ports
## Next steps
-To learn about SQL proxy and redirect modes, see [Azure SQL Database connectivity architecture](/azure/azure-sql/database/connectivity-architecture).
+To learn about SQL proxy and redirect modes, see [Azure SQL Database connectivity architecture](/azure/azure-sql/database/connectivity-architecture).
frontdoor Create Front Door Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/create-front-door-bicep.md
Last updated 07/08/2022
-+ #Customer intent: As an IT admin, I want to direct user traffic to ensure high availability of web applications.
frontdoor Create Front Door Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/create-front-door-template.md
Last updated 07/12/2022
-+ #Customer intent: As an IT admin, I want to direct user traffic to ensure high availability of web applications.
frontdoor Create Front Door Terraform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/create-front-door-terraform.md
Last updated 10/25/2022
+ # Create a Front Door Standard/Premium profile using Terraform
frontdoor Front Door Quickstart Template Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/front-door-quickstart-template-samples.md
+ Last updated 03/10/2022 zone_pivot_groups: front-door-tiers
frontdoor Front Door Rules Engine https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/front-door-rules-engine.md
+ Last updated 03/22/2022 zone_pivot_groups: front-door-tiers
frontdoor Quickstart Create Front Door Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/quickstart-create-front-door-bicep.md
Last updated 03/30/2022
-+ #Customer intent: As an IT admin, I want to direct user traffic to ensure high availability of web applications.
frontdoor Quickstart Create Front Door Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/quickstart-create-front-door-template.md
Last updated 09/14/2020
-+ #Customer intent: As an IT admin, I want to direct user traffic to ensure high availability of web applications.
frontdoor Quickstart Create Front Door Terraform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/quickstart-create-front-door-terraform.md
Last updated 10/25/2022
+ # Create a Front Door (classic) using Terraform
frontdoor How To Cache Purge Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/standard-premium/how-to-cache-purge-cli.md
+ Last updated 09/20/2022
frontdoor How To Cache Purge Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/standard-premium/how-to-cache-purge-powershell.md
+ Last updated 09/20/2022
frontdoor How To Enable Private Link Storage Account Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/standard-premium/how-to-enable-private-link-storage-account-cli.md
description: Learn how to connect your Azure Front Door Premium to a Storage Acc
+ Last updated 10/04/2022
frontdoor How To Enable Private Link Web App Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/standard-premium/how-to-enable-private-link-web-app-cli.md
description: Learn how to connect your Azure Front Door Premium to a webapp priv
+ Last updated 10/04/2022
frontdoor Terraform Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/terraform-samples.md
+ Last updated 11/22/2022 zone_pivot_groups: front-door-tiers
governance Machine Configuration Create Assignment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/machine-configuration/machine-configuration-create-assignment.md
+ # How to create a machine configuration assignment using templates
governance Machine Configuration Custom https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/machine-configuration/machine-configuration-custom.md
Last updated 07/15/2022
+ # Changes to behavior in PowerShell Desired State Configuration for machine configuration
governance Create Management Group Azure Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/management-groups/create-management-group-azure-cli.md
Title: "Quickstart: Create a management group with the Azure CLI"
description: In this quickstart, you use the Azure CLI to create a management group to organize your resources into a resource hierarchy. Last updated 08/17/2021 + ms.tool: azure-cli # Quickstart: Create a management group with the Azure CLI
governance Assign Policy Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/assign-policy-bicep.md
Title: "Quickstart: New policy assignment with Bicep file"
description: In this quickstart, you use a Bicep file to create a policy assignment to identify non-compliant resources. Last updated 03/24/2022 -+ # Quickstart: Create a policy assignment to identify non-compliant resources by using a Bicep file
governance Assign Policy Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/assign-policy-template.md
Title: "Quickstart: New policy assignment with templates"
description: In this quickstart, you use an Azure Resource Manager template (ARM template) to create a policy assignment to identify non-compliant resources. Last updated 08/17/2021 -+ # Quickstart: Create a policy assignment to identify non-compliant resources by using an ARM template
governance Assign Policy Terraform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/assign-policy-terraform.md
Title: "Quickstart: New policy assignment with Terraform"
description: In this quickstart, you use Terraform and HCL syntax to create a policy assignment to identify non-compliant resources. Last updated 03/01/2023 + ms.tool: terraform # Quickstart: Create a policy assignment to identify non-compliant resources using Terraform
governance Scope https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/concepts/scope.md
Title: Understand scope in Azure Policy
description: Describes the concept of scope in Azure Resource Manager and how it applies to Azure Policy to control which resources Azure Policy evaluates. Last updated 08/17/2021 + # Understand scope in Azure Policy
governance Extension For Vscode https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/how-to/extension-for-vscode.md
Title: Azure Policy extension for Visual Studio Code
description: Learn how to use the Azure Policy extension for Visual Studio Code to look up Azure Resource Manager aliases. Last updated 04/12/2022 -+ # Use Azure Policy extension for Visual Studio Code
governance First Query Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/resource-graph/first-query-powershell.md
Title: 'Quickstart: Your first PowerShell query'
description: In this quickstart, you follow the steps to enable the Resource Graph module for Azure PowerShell and run your first query. Last updated 06/15/2022 -+
governance Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/resource-graph/overview.md
Title: Overview of Azure Resource Graph
description: Understand how the Azure Resource Graph service enables complex querying of resources at scale across subscriptions and tenants. Last updated 06/15/2022 +
governance Paginate Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/resource-graph/paginate-powershell.md
Last updated 11/11/2022
-+ # Quickstart: Paginate Azure Resource Graph query results using Azure PowerShell
governance Shared Query Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/resource-graph/shared-query-bicep.md
Last updated 05/17/2022 -+ # Quickstart: Create a shared query using Bicep
governance Shared Query Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/resource-graph/shared-query-template.md
Title: 'Quickstart: Create a shared query with templates'
description: In this quickstart, you use an Azure Resource Manager template (ARM template) to create a Resource Graph shared query that counts virtual machines by OS. Last updated 08/17/2021 -+ # Quickstart: Create a shared query by using an ARM template
guides Azure Operations Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/guides/operations/azure-operations-guide.md
description: Get started guide for Azure IT operators
tags: azure-resource-manager+
You can help secure Azure virtual networks by using a network security group. NS
## Next steps - [Create a Windows VM](../../virtual-machines/windows/quick-create-portal.md)-- [Create a Linux VM](../../virtual-machines/linux/quick-create-portal.md)
+- [Create a Linux VM](../../virtual-machines/linux/quick-create-portal.md)
hdinsight Apache Hadoop Linux Tutorial Get Started Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hadoop/apache-hadoop-linux-tutorial-get-started-bicep.md
-+ Last updated 11/17/2022 #Customer intent: As a data analyst, I need to create a Hadoop cluster in Azure HDInsight using Bicep
hdinsight Apache Hadoop Linux Tutorial Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hadoop/apache-hadoop-linux-tutorial-get-started.md
Title: 'Quickstart: Create Apache Hadoop cluster in Azure HDInsight using Resour
description: In this quickstart, you create Apache Hadoop cluster in Azure HDInsight using Resource Manager template -+ Last updated 08/21/2022 #Customer intent: As a data analyst, I need to create a Hadoop cluster in Azure HDInsight using Resource Manager template
hdinsight Hdinsight Use Sqoop https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hadoop/hdinsight-use-sqoop.md
Title: Run Apache Sqoop jobs with Azure HDInsight (Apache Hadoop) description: Learn how to use Azure PowerShell from a workstation to run Sqoop import and export between a Hadoop cluster and an Azure SQL database. + Last updated 08/28/2022
hdinsight Quickstart Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hbase/quickstart-bicep.md
Title: 'Quickstart: Create Apache HBase cluster using Bicep - Azure HDInsight'
description: This quickstart shows how to use Bicep to create an Apache HBase cluster in Azure HDInsight. -+ Last updated 04/14/2022 #Customer intent: As a developer new to Apache HBase on Azure, I need to see how to create an HBase cluster.
hdinsight Quickstart Resource Manager Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hbase/quickstart-resource-manager-template.md
Title: 'Quickstart: Create Apache HBase cluster using template - Azure HDInsight
description: This quickstart shows how to use Resource Manager template to create an Apache HBase cluster in Azure HDInsight. -+ Last updated 12/28/2022 #Customer intent: As a developer new to Apache HBase on Azure, I need to see how to create an HBase cluster.
hdinsight Hdinsight Autoscale Clusters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-autoscale-clusters.md
description: Use the Autoscale feature to automatically scale Azure HDInsight cl
Previously updated : 11/30/2022 Last updated : 04/13/2023 # Automatically scale Azure HDInsight clusters
Azure HDInsight's free Autoscale feature can automatically increase or decrease
The Autoscale feature uses two types of conditions to trigger scaling events: thresholds for various cluster performance metrics (called *load-based scaling*) and time-based triggers (called *schedule-based scaling*). Load-based scaling changes the number of nodes in your cluster, within a range that you set, to ensure optimal CPU usage and minimize running cost. Schedule-based scaling changes the number of nodes in your cluster based on a schedule of scale-up and scale-down operations.
-The following video provides an overview of the challenges which Autoscale solves and how it can help you to control costs with HDInsight.
+The following video provides an overview of the challenges that Autoscale solves and how it can help you control costs with HDInsight.
> [!VIDEO https://www.youtube.com/embed/UlZcDGGFlZ0?WT.mc_id=dataexposed-c9-niner]
Schedule-based scaling can be used:
* When your jobs are expected to run on fixed schedules and for a predictable duration, or when you anticipate low usage during specific times of the day. For example, test and dev environments in post-work hours, or end-of-day jobs.
-Load based scaling can be used :
+Load-based scaling can be used:
-* When the load patterns fluctuate substantially and unpredictably during the day. For example, Order data processing with random fluctuations in load patterns based on a variety of factors
+* When the load patterns fluctuate substantially and unpredictably during the day. For example, order data processing with random fluctuations in load patterns based on various factors.
### Cluster metrics
The above metrics are checked every 60 seconds. Autoscale makes scale-up and sca
### Load-based scale conditions
-When the following conditions are detected, Autoscale will issue a scale request:
+When the following conditions are detected, Autoscale issues a scale request:
|Scale-up|Scale-down| |||
-|Total pending CPU is greater than total free CPU for more than 3-5 minutes.|Total pending CPU is less than total free CPU for more than 5-10 minutes.|
-|Total pending memory is greater than total free memory for more than 3-5 minutes.|Total pending memory is less than total free memory for more than 5-10 minutes.|
+|Total pending CPU is greater than total free CPU for more than 3-5 minutes.|Total pending CPU is less than total free CPU for more than 3-5 minutes.|
+|Total pending memory is greater than total free memory for more than 3-5 minutes.|Total pending memory is less than total free memory for more than 3-5 minutes.|
For scale-up, Autoscale issues a scale-up request to add the required number of nodes. The scale-up is based on how many new worker nodes are needed to meet the current CPU and memory requirements.
-For scale-down, Autoscale issues a request to remove a certain number of nodes. The scale-down is based on the number of Application Master (AM) containers per node. And the current CPU and memory requirements. The service also detects which nodes are candidates for removal based on current job execution. The scale down operation first decommissions the nodes, and then removes them from the cluster.
+For scale-down, Autoscale issues a request to remove some nodes. The scale-down is based on the number of Application Master (AM) containers per node, and the current CPU and memory requirements. The service also detects which nodes are candidates for removal based on current job execution. The scale down operation first decommissions the nodes, and then removes them from the cluster.
+
+### Ambari DB sizing considerations for autoscaling
+
+We recommend sizing the Ambari DB correctly to reap the benefits of autoscale. Use the correct DB tier, and use a custom Ambari DB for large clusters. For more information, see the [Database and Headnode sizing recommendations](./hdinsight-custom-ambari-db.md#database-and-headnode-sizing).
### Cluster compatibility
The following table describes the cluster types and versions that are compatible
| Version | Spark | Hive | Interactive Query | HBase | Kafka | ||||||||
-| HDInsight 3.6 without ESP | Yes | Yes | Yes* | No | No |
| HDInsight 4.0 without ESP | Yes | Yes | Yes* | No | No |
-| HDInsight 3.6 with ESP | Yes | Yes | Yes* | No | No |
| HDInsight 4.0 with ESP | Yes | Yes | Yes* | No | No |
+| HDInsight 5.0 without ESP | Yes | Yes | Yes* | No | No |
+| HDInsight 5.0 with ESP | Yes | Yes | Yes* | No | No |
\* Interactive Query clusters can only be configured for schedule-based scaling, not load-based.
For more information on HDInsight cluster creation using the Azure portal, see [
#### Load-based autoscaling
-You can create an HDInsight cluster with load-based Autoscaling an Azure Resource Manager template, by adding an `autoscale` node to the `computeProfile` > `workernode` section with the properties `minInstanceCount` and `maxInstanceCount` as shown in the json snippet below. For a complete Resource Manager template see [Quickstart template: Deploy Spark Cluster with load-based autoscale enabled](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.hdinsight/hdinsight-autoscale-loadbased).
+You can create an HDInsight cluster with load-based Autoscale by using an Azure Resource Manager template, adding an `autoscale` node to the `computeProfile` > `workernode` section with the properties `minInstanceCount` and `maxInstanceCount`, as shown in the JSON snippet. For a complete Resource Manager template, see [Quickstart template: Deploy Spark Cluster with load-based autoscale enabled](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.hdinsight/hdinsight-autoscale-loadbased).
```json {
You can create an HDInsight cluster with load-based Autoscaling an Azure Resourc
#### Schedule-based autoscaling
-You can create an HDInsight cluster with schedule-based Autoscaling an Azure Resource Manager template, by adding an `autoscale` node to the `computeProfile` > `workernode` section. The `autoscale` node contains a `recurrence` that has a `timezone` and `schedule` that describes when the change will take place. For a complete Resource Manager template, see [Deploy Spark Cluster with schedule-based Autoscale Enabled](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.hdinsight/hdinsight-autoscale-schedulebased).
+You can create an HDInsight cluster with schedule-based Autoscale by using an Azure Resource Manager template, adding an `autoscale` node to the `computeProfile` > `workernode` section. The `autoscale` node contains a `recurrence` that has a `timezone` and `schedule` that describes when the change takes place. For a complete Resource Manager template, see [Deploy Spark Cluster with schedule-based Autoscale Enabled](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.hdinsight/hdinsight-autoscale-schedulebased).
```json {
To enable or disable Autoscale on a running cluster using the REST API, make a P
https://management.azure.com/subscriptions/{subscription Id}/resourceGroups/{resourceGroup Name}/providers/Microsoft.HDInsight/clusters/{CLUSTERNAME}/roles/workernode/autoscale?api-version=2018-06-01-preview ```
-Use the appropriate parameters in the request payload. The json payload below could be used to enable Autoscale. Use the payload `{autoscale: null}` to disable Autoscale.
+Use the appropriate parameters in the request payload. The following json payload could be used to enable Autoscale. Use the payload `{autoscale: null}` to disable Autoscale.
```json { "autoscale": { "capacity": { "minInstanceCount": 3, "maxInstanceCount": 5 } } }
The cluster status listed in the Azure portal can help you monitor Autoscale act
:::image type="content" source="./media/hdinsight-autoscale-clusters/hdinsight-autoscale-clusters-cluster-status.png" alt-text="Enable worker node load-based autoscale cluster status":::
-All of the cluster status messages that you might see are explained in the list below.
+All of the cluster status messages that you might see are explained in the following list.
| Cluster status | Description | |||
It can take 10 to 20 minutes for the overall scaling operation to complete. When
### Prepare for scaling down
-During the cluster scaling down process, Autoscale decommissions the nodes to meet the target size. In case of load based autoscaling, If tasks are running on those nodes, Autoscale waits until the tasks are completed for Spark and Hadoop clusters. Since each worker node also serves a role in HDFS, the temporary data is shifted to the remaining worker nodes. Make sure there's enough space on the remaining nodes to host all temporary data.
+During the cluster scaling down process, Autoscale decommissions the nodes to meet the target size. In load-based autoscaling, if tasks are running on those nodes, Autoscale waits until the tasks are completed for Spark and Hadoop clusters. Since each worker node also serves a role in HDFS, the temporary data is shifted to the remaining worker nodes. Make sure there's enough space on the remaining nodes to host all temporary data.
> [!Note] > In case of schedule-based Autoscale scale-down, graceful decommission is not supported. This can cause job failures during a scale down operation, and it is recommended to plan schedules based on the anticipated job schedule patterns to include sufficient time for the ongoing jobs to conclude. You can set the schedules looking at historical spread of completion times so as to avoid job failures.
During the cluster scaling down process, Autoscale decommissions the nodes to me
You need to understand your cluster usage pattern when you configure schedule based Autoscale. [Grafana dashboard](./interactive-query/hdinsight-grafana.md) can help you understand your query load and execution slots. You can get the available executor slots and total executor slots from the dashboard.
-Here is a way you can estimate how many worker nodes will be needed. We recommend giving additional 10% buffer to handle the variation of the workload.
+Here's a way you can estimate how many worker nodes are needed. We recommend adding a 10% buffer to handle variation in the workload.
-Number of executor slots actually used = Total executor slots ΓÇô Total available executor slots.
+Number of executor slots used = Total executor slots – Total available executor slots.
Number of worker nodes required = Number of executor slots actually used / (hive.llap.daemon.num.executors + hive.llap.daemon.task.scheduler.wait.queue.size)
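For example (illustrative numbers only): with 100 total executor slots, 44 available slots, `hive.llap.daemon.num.executors` = 4, and a wait queue size of 10 per node, 56 slots are in use, so 56 / (4 + 10) = 4 worker nodes are required, or about 5 once the ~10% buffer is added.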
Number of worker nodes required = Number of executor slots actually used / (hive
### Custom Script Actions
-Custom Script Actions are mostly used for customizing the nodes (i.e HeadNode / WorkerNodes) which enable our customers to configure certain libraries and tools which are being used by them. One common use case is the job(s) that run on the cluster might have some dependencies on the 3rd party library which is owned by the Customer, and it should be available on nodes for the job to succeed. For Autoscale we currently support custom script actions which are persisted, hence every time the new nodes get added to the cluster as part of scale up operation, these persisted script actions would get executed and post that the containers or jobs would be allocated on them. Although have custom script actions helps bootstrapping the new nodes it's advisable to keep it minimal as it would add up to the overall scale up latency and can cause impact to the scheduled jobs.
+Custom Script Actions are mostly used for customizing the nodes (that is, head nodes and worker nodes), which lets customers configure the libraries and tools they use. One common use case is that the jobs running on the cluster depend on a third-party library owned by the customer, which must be available on the nodes for the jobs to succeed. For Autoscale, we currently support custom script actions that are persisted: every time new nodes are added to the cluster as part of a scale-up operation, these persisted script actions are executed before containers or jobs are allocated on them. Although custom script actions help bootstrap the new nodes, it's advisable to keep them minimal, because they add to the overall scale-up latency and can impact scheduled jobs.
### Be aware of the minimum cluster size
Don't scale your cluster down to fewer than three nodes. Scaling your cluster to
### Azure Active Directory Domain Services (Azure AD DS) & Scaling Operations
-If you use an HDInsight cluster with Enterprise Security Package (ESP) that is joined to an Azure Active Directory Domain Services (Azure AD DS) managed domain, we recommend to throttle load on the Azure AD DS. In case of complex directory structures [scoped sync](../active-directory-domain-services/scoped-synchronization.md) we recommend to avoid impact to scaling operations.
+If you use an HDInsight cluster with Enterprise Security Package (ESP) that is joined to an Azure Active Directory Domain Services (Azure AD DS) managed domain, we recommend throttling load on Azure AD DS. For complex directory structures, we recommend using [scoped sync](../active-directory-domain-services/scoped-synchronization.md) so that scaling operations aren't impacted.
### Set the Hive configuration Maximum Total Concurrent Queries for the peak usage scenario
-Autoscale events don't change the Hive configuration *Maximum Total Concurrent Queries* in Ambari. This means that the Hive Server 2 Interactive Service can handle only the given number of concurrent queries at any point of time even if the Interactive Query daemons count are scaled up and down based on load and schedule. The general recommendation is to set this configuration for the peak usage scenario to avoid manual intervention.
+Autoscale events don't change the Hive configuration *Maximum Total Concurrent Queries* in Ambari. This means that the Hive Server 2 Interactive Service can handle only the given number of concurrent queries at any point of time even if the Interactive Query daemons count is scaled up and down based on load and schedule. The general recommendation is to set this configuration for the peak usage scenario to avoid manual intervention.
-However, you may experience a Hive Server 2 restart failure if there are only a small number of worker nodes and the value for maximum total concurrent queries is configured too high. At a minimum, you need the minimum number of worker nodes that can accommodate the given number of Tez Ams (equal to the Maximum Total Concurrent Queries configuration).
+However, you may experience a Hive Server 2 restart failure if there are only a few worker nodes and the value for maximum total concurrent queries is configured too high. At a minimum, you need enough worker nodes to accommodate the given number of Tez AMs (equal to the Maximum Total Concurrent Queries configuration).
## Limitations - ### Interactive Query Daemons count
-In case of autoscale-enabled Interactive Query clusters, an autoscale up/down event also scales up/down the number of Interactive Query daemons to the number of active worker nodes. The change in the number of daemons is not persisted in the `num_llap_nodes` configuration in Ambari. If Hive services are restarted manually, the number of Interactive Query daemons is reset as per the configuration in Ambari.
+In autoscale-enabled Interactive Query clusters, an autoscale up/down event also scales up or down the number of Interactive Query daemons to the number of active worker nodes. The change in the number of daemons is not persisted in the `num_llap_nodes` configuration in Ambari. If Hive services are restarted manually, the number of Interactive Query daemons is reset as per the configuration in Ambari.
If the Interactive Query service is manually restarted, you need to manually change the `num_llap_node` configuration (the number of node(s) needed to run the Hive Interactive Query daemon) under *Advanced hive-interactive-env* to match the current active worker node count. Interactive Query Cluster supports only Schedule-Based Autoscale
hdinsight Hdinsight Delete Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-delete-cluster.md
Title: How to delete an HDInsight cluster - Azure
description: Information on the various ways that you can delete an Azure HDInsight cluster -+ Last updated 08/26/2022
hdinsight Hdinsight Hadoop Create Linux Clusters Azure Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-hadoop-create-linux-clusters-azure-powershell.md
description: Learn how to create Apache Hadoop, Apache HBase, or Apache Spark cl
ms.tool: azure-powershell-+ Last updated 08/05/2022
hdinsight Hdinsight Hadoop Create Linux Clusters Curl Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-hadoop-create-linux-clusters-curl-rest.md
Title: Create Apache Hadoop clusters using Azure REST API - Azure
description: Learn how to create HDInsight clusters by submitting Azure Resource Manager templates to the Azure REST API. -+ Last updated 11/17/2022
hdinsight Hdinsight Hadoop Development Using Azure Resource Manager https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-hadoop-development-using-azure-resource-manager.md
Title: Migrate to Azure Resource Manager tools for HDInsight description: How to migrate to Azure Resource Manager development tools for HDInsight clusters -+ Last updated 12/23/2022
hdinsight Hdinsight Hadoop Oms Log Analytics Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-hadoop-oms-log-analytics-tutorial.md
If you don't have an Azure subscription, [create a free account](https://azure.m
## Prerequisites
-* A Log Analytics workspace. You can think of this workspace as a unique Azure Monitor logs environment with its own data repository, data sources, and solutions. For the instructions, see [Create a Log Analytics workspace](../azure-monitor/vm/monitor-virtual-machine.md).
+* A Log Analytics workspace. You can think of this workspace as a unique Azure Monitor logs environment with its own data repository, data sources, and solutions. For the instructions, see [Create a Log Analytics workspace](../azure-monitor/logs/quick-create-workspace.md).
* An Azure HDInsight cluster. Currently, you can use Azure Monitor logs with the following HDInsight cluster types:
hdinsight Hdinsight Hadoop Provision Linux Clusters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-hadoop-provision-linux-clusters.md
Title: Set up clusters in HDInsight with Apache Hadoop, Apache Spark, Apache Kaf
description: Set up Hadoop, Kafka, Spark, or HBase clusters for HDInsight from a browser, the Azure classic CLI, Azure PowerShell, REST, or SDK. -+ Last updated 03/16/2023
hdinsight Hdinsight Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-release-notes.md
For more information, see [HDInsight 5.1.0 version](./hdinsight-51-component-ver
Support for Azure HDInsight clusters on Spark 2.4 ends on February 10, 2024. For more information, see [Spark versions supported in Azure HDInsight](./hdinsight-40-component-versioning.md#spark-versions-supported-in-azure-hdinsight).
-## Upcoming Changes
+## Coming soon
+* Autoscale
+ * Autoscale with improved latency and other enhancements
* Cluster name change limitation
  * The max length of cluster name will be changed to 45 from 59 in Public, Mooncake and Fairfax.
* Cluster permissions for secure storage
hdinsight Hdinsight Upload Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-upload-data.md
Title: Upload data for Apache Hadoop jobs in HDInsight
description: Learn how to upload and access data for Apache Hadoop jobs in HDInsight. Use Azure classic CLI, Azure Storage Explorer, Azure PowerShell, the Hadoop command line, or Sqoop. -+ Last updated 04/27/2020
hdinsight Quickstart Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/interactive-query/quickstart-bicep.md
-+ Last updated 07/19/2022 #Customer intent: As a developer new to Interactive Query on Azure, I need to see how to create an Interactive Query cluster.
hdinsight Quickstart Resource Manager Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/interactive-query/quickstart-resource-manager-template.md
Title: 'Quickstart: Create Interactive Query cluster using template - Azure HDIn
description: This quickstart shows how to use Resource Manager template to create an Interactive Query cluster in Azure HDInsight. -+ Last updated 12/28/2022 #Customer intent: As a developer new to Interactive Query on Azure, I need to see how to create an Interactive Query cluster.
hdinsight Apache Kafka Quickstart Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/kafka/apache-kafka-quickstart-bicep.md
-+ Last updated 07/20/2022 #Customer intent: I need to create a Kafka cluster so that I can use it to process streaming data
hdinsight Apache Kafka Quickstart Resource Manager Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/kafka/apache-kafka-quickstart-resource-manager-template.md
Title: 'Quickstart: Apache Kafka using Azure Resource Manager - HDInsight'
description: In this quickstart, you learn how to create an Apache Kafka cluster on Azure HDInsight using Azure Resource Manager template. You also learn about Kafka topics, subscribers, and consumers. -+ Last updated 08/26/2022 #Customer intent: I need to create a Kafka cluster so that I can use it to process streaming data
hdinsight Apache Kafka Streams Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/kafka/apache-kafka-streams-api.md
To build and deploy the project to your Kafka on HDInsight cluster, use the foll
3. Set up password variable. Replace `PASSWORD` with the cluster login password, then enter the command: ```bash
- export password='PASSWORD'
+ export PASSWORD='PASSWORD'
``` 4. Extract the correctly cased cluster name. The actual casing of the cluster name may be different from what you expect, depending on how the cluster was created. This command obtains the actual casing and stores it in a variable. Enter the following command: ```bash
- export clusterName=$(curl -u admin:$password -sS -G "http://headnodehost:8080/api/v1/clusters" | jq -r '.items[].Clusters.cluster_name')
+ export CLUSTER_NAME=$(curl -u admin:$PASSWORD -sS -G "http://headnodehost:8080/api/v1/clusters" | jq -r '.items[].Clusters.cluster_name')
``` > [!Note]
To build and deploy the project to your Kafka on HDInsight cluster, use the foll
5. To get the Kafka broker hosts and the Apache Zookeeper hosts, use the following commands. When prompted, enter the password for the cluster login (admin) account. ```bash
- export KAFKAZKHOSTS=$(curl -sS -u admin:$password -G https://$clusterName.azurehdinsight.net/api/v1/clusters/$clusterName/services/ZOOKEEPER/components/ZOOKEEPER_SERVER | jq -r '["\(.host_components[].HostRoles.host_name):2181"] | join(",")' | cut -d',' -f1,2);
+ export KAFKAZKHOSTS=$(curl -sS -u admin:$PASSWORD -G https://$CLUSTER_NAME.azurehdinsight.net/api/v1/clusters/$CLUSTER_NAME/services/ZOOKEEPER/components/ZOOKEEPER_SERVER | jq -r '["\(.host_components[].HostRoles.host_name):2181"] | join(",")' | cut -d',' -f1,2);
- export KAFKABROKERS=$(curl -sS -u admin:$password -G https://$clusterName.azurehdinsight.net/api/v1/clusters/$clusterName/services/KAFKA/components/KAFKA_BROKER | jq -r '["\(.host_components[].HostRoles.host_name):9092"] | join(",")' | cut -d',' -f1,2);
+ export KAFKABROKERS=$(curl -sS -u admin:$PASSWORD -G https://$CLUSTER_NAME.azurehdinsight.net/api/v1/clusters/$CLUSTER_NAME/services/KAFKA/components/KAFKA_BROKER | jq -r '["\(.host_components[].HostRoles.host_name):9092"] | join(",")' | cut -d',' -f1,2);
``` > [!Note]
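As a quick, optional sanity check (not part of the original procedure), you can print the variables to confirm they were populated before continuing:

```bash
# Values vary per cluster; empty output means the previous commands need to be rerun.
echo "Zookeeper hosts: $KAFKAZKHOSTS"
echo "Kafka brokers:   $KAFKABROKERS"
```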
hdinsight Apache Spark Jupyter Spark Sql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/spark/apache-spark-jupyter-spark-sql.md
description: This quickstart shows how to use Resource Manager template to creat
Last updated 08/21/2022 -+ #Customer intent: As a developer new to Apache Spark on Azure, I need to see how to create a Spark cluster and query some data.
hdinsight Apache Spark Jupyter Spark Use Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/spark/apache-spark-jupyter-spark-use-bicep.md
Last updated 07/22/2022 -+ #Customer intent: As a developer new to Apache Spark on Azure, I need to see how to create a Spark cluster and query some data.
healthcare-apis Azure Api Fhir Resource Manager Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/azure-api-fhir-resource-manager-template.md
-+ Last updated 06/03/2022
healthcare-apis Customer Managed Key https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/customer-managed-key.md
Last updated 06/03/2022 -+ ms.devlang: azurecli
healthcare-apis Get Healthcare Apis Access Token Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/get-healthcare-apis-access-token-cli.md
+ Last updated 06/03/2022
healthcare-apis Smart On Fhir https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/smart-on-fhir.md
Below tutorials describe steps to enable SMART on FHIR applications with FHIR Se
Follow the steps listed under section [Manage Users: Assign Users to Role](https://learn.microsoft.com/azure/active-directory/fundamentals/active-directory-users-assign-role-azure-portal). Any user added to the "FHIR SMART User" role can access the FHIR Service if their requests comply with the SMART on FHIR Implementation Guide, such as the request having an access token that includes a fhirUser claim and a clinical scopes claim. The access granted to users in this role is then limited by the resources associated with their fhirUser compartment and the restrictions in the clinical scopes. ### Step 2: FHIR server integration with samples
-[Follow the steps](https://github.com/Azure-Samples/azure-health-data-services-samples/blob/main/samples/Patient%20and%20Population%20Services%20G10/docs/deployment.md) under Azure Health Data Service Samples OSS. This will enable integration of FHIR server with other Azure Services (such as APIM, Azure functions and more).
+[Follow the steps](https://aka.ms/azure-health-data-services-smart-on-fhir-sample) under Azure Health Data Service Samples OSS. This will enable integration of FHIR server with other Azure Services (such as APIM, Azure functions and more).
> [!NOTE] > Samples are open-source code, and you should review the information and licensing terms on GitHub before using it. They are not part of the Azure Health Data Service and are not supported by Microsoft Support. These samples can be used to demonstrate how Azure Health Data Services and other open-source tools can be used together to demonstrate ONC (g)(10) compliance, using Azure Active Directory as the identity provider workflow.
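For the role assignment described in Step 1 above, a minimal Azure CLI sketch follows. The user principal name and the FHIR resource ID are placeholders for your own values, and the command assumes the built-in role name shown earlier.

```bash
# Assign the FHIR SMART User role to a user on your FHIR resource (placeholder values).
az role assignment create \
  --assignee "user@contoso.com" \
  --role "FHIR SMART User" \
  --scope "<fhir-resource-id>"
```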
healthcare-apis Configure Azure Rbac Using Scripts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/configure-azure-rbac-using-scripts.md
description: This article describes how to grant permissions to users and client
+ Last updated 06/06/2022
healthcare-apis Deploy Healthcare Apis Using Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/deploy-healthcare-apis-using-bicep.md
Last updated 06/06/2022 -+ # Deploy Azure Health Data Services using Azure Bicep
healthcare-apis Dicom Get Access Token Azure Cli Old https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/dicom-get-access-token-azure-cli-old.md
description: This article explains how to obtain an access token for the DICOM s
+ Last updated 03/02/2022
healthcare-apis Fhir Service Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/fhir-service-bicep.md
Title: Deploy Azure Health Data Services FHIR service using Bicep
description: Learn how to deploy FHIR service by using Bicep + Last updated 05/27/2022
Remove-AzResourceGroup -Name exampleRG
In this quickstart guide, you've deployed the FHIR service within Azure Health Data Services using Bicep. For more information about FHIR service supported features, proceed to the following article: >[!div class="nextstepaction"]
->[Supported FHIR Features](fhir-features-supported.md)
+>[Supported FHIR Features](fhir-features-supported.md)
healthcare-apis Fhir Service Resource Manager Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/fhir-service-resource-manager-template.md
Title: Deploy Azure Health Data Services FHIR service using ARM template
description: Learn how to deploy FHIR service by using an Azure Resource Manager template (ARM template) + Last updated 06/06/2022
In this quickstart guide, you've deployed the FHIR service within Azure Health D
>[!div class="nextstepaction"] >[Supported FHIR Features](fhir-features-supported.md)
-FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
+FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
healthcare-apis Smart On Fhir https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/smart-on-fhir.md
Below tutorials provide steps to enable SMART on FHIR applications with FHIR Ser
Follow the steps listed under section [Manage Users: Assign Users to Role](https://learn.microsoft.com/azure/active-directory/fundamentals/active-directory-users-assign-role-azure-portal). Any user added to this role can access the FHIR Service if their requests comply with the SMART on FHIR Implementation Guide, such as the request having an access token that includes a fhirUser claim and a clinical scopes claim. The access granted to users in this role is then limited by the resources associated with their fhirUser compartment and the restrictions in the clinical scopes. ### Step 2: FHIR server integration with samples
-[Follow the steps](https://github.com/Azure-Samples/azure-health-data-services-samples/blob/main/samples/Patient%20and%20Population%20Services%20G10/docs/deployment.md) under Azure Health Data Service Samples OSS. This will enable integration of FHIR server with other Azure Services (such as APIM, Azure functions and more).
+[Follow the steps](https://aka.ms/azure-health-data-services-smart-on-fhir-sample) under Azure Health Data Service Samples OSS. This will enable integration of FHIR server with other Azure Services (such as APIM, Azure functions and more).
> [!NOTE] > Samples are open-source code, and you should review the information and licensing terms on GitHub before using it. They are not part of the Azure Health Data Service and are not supported by Microsoft Support. These samples can be used to demonstrate how Azure Health Data Services and other open-source tools can be used together to demonstrate ONC (g)(10) compliance, using Azure Active Directory as the identity provider workflow.
healthcare-apis Deploy New Arm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/deploy-new-arm.md
description: In this article, you'll learn how to deploy the MedTech service usi
+ Last updated 03/10/2023
healthcare-apis Deploy New Bicep Powershell Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/deploy-new-bicep-powershell-cli.md
description: In this article, you'll learn how to deploy the MedTech service usi
+ Last updated 03/10/2023
healthcare-apis Deploy New Choose https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/deploy-new-choose.md
description: In this article, you'll learn about the different methods for deplo
+ Last updated 03/10/2023
healthcare-apis Deploy New Powershell Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/deploy-new-powershell-cli.md
description: In this article, you'll learn how to deploy the MedTech service usi
+ Last updated 03/10/2023
healthcare-apis Device Messages Through Iot Hub https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/device-messages-through-iot-hub.md
Last updated 04/07/2023+
healthcare-apis How To Use Mapping Debugger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/how-to-use-mapping-debugger.md
Previously updated : 04/06/2023 Last updated : 04/10/2023
In this article, learn how to use the MedTech service Mapping debugger. The Mapping debugger is a self-service tool that is used for creating, updating, and troubleshooting the MedTech service device and FHIR destination mappings. The Mapping debugger enables you to easily view and make inline adjustments in real-time, without ever having to leave the Azure portal. The Mapping debugger can also be used for uploading test device messages to see how they'll look after being processed into normalized messages and transformed into FHIR Observations. > [!TIP]
-> To learn about how the MedTech service transforms and persists device message data into the FHIR service see, [Overview of the MedTech service device message processing stages](overview-of-device-message-processing-stages.md).
+> To learn about how the MedTech service transforms and persists device message data into the FHIR service, see [Overview of the MedTech service device data processing stages](overview-of-device-message-processing-stages.md).
The following video presents an overview of the Mapping debugger:-
+>
> [!VIDEO https://youtube.com/embed/OEGuCSGnECY] ## Overview of the Mapping debugger
The following video presents an overview of the Mapping debugger:
:::image type="content" source="media\how-to-use-mapping-debugger\mapping-debugger-save-mappings.png" alt-text="Screenshot of the Mapping debugger and the Save button." lightbox="media\how-to-use-mapping-debugger\mapping-debugger-save-mappings.png"::: > [!NOTE]
- > The MedTech service only saves the mappings that have been changed/updated. For example: If you only made a change to the **device mapping**, only those changes are saved to your MedTech service and no changes would be saved to the FHIR destination mapping. This is by design and to help with performance of the MedTech service.
+ > The MedTech service only saves the mappings that have been changed/updated. For example: If you only made a change to the **device mapping**, only those changes are saved to your MedTech service and no changes would be saved to the **FHIR destination mapping**. This is by design and to help with performance of the MedTech service.
4. Once the device and FHIR destination mappings are successfully saved, a confirmation from **Notifications** is created within the Azure portal.
healthcare-apis Overview Of Device Message Processing Stages https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/overview-of-device-message-processing-stages.md
Previously updated : 04/04/2023 Last updated : 04/07/2023
Group is the next *optional* stage where the normalized messages available from
Device identity and measurement type grouping are optional and enabled by the use of the [SampledData](https://www.hl7.org/fhir/datatypes.html#SampledData) measurement type. The SampledData measurement type provides a concise way to represent a time-based series of measurements from a device message into FHIR Observations. When you use the SampledData measurement type, measurements can be grouped into a single FHIR Observation that represents a 1-hour period or a 24-hour period. ## Transform
-Transform is the next stage where normalized messages are processed using the user-selected/user-created conforming and valid [FHIR destination mapping](how-to-configure-fhir-mappings.md). Normalized messages get transformed into FHIR Observation resources if a matching FHIR destination mapping has been authored. At this point, the [Device](https://www.hl7.org/fhir/device.html) resource, along with its associated [Patient](https://www.hl7.org/fhir/patient.html) resource, is also retrieved from the FHIR service using the device identifier present in the device message. These resources are added as a reference to the FHIR Observation being created.
+Transform is the next stage where normalized messages are processed using the user-selected/user-created conforming and valid [FHIR destination mapping](how-to-configure-fhir-mappings.md). Normalized messages get transformed into FHIR Observations if a matching FHIR destination mapping has been authored. At this point, the [Device](https://www.hl7.org/fhir/device.html) resource, along with its associated [Patient](https://www.hl7.org/fhir/patient.html) resource, is also retrieved from the FHIR service using the device identifier present in the device message. These resources are added as a reference to the FHIR Observation being created.
> [!NOTE] > All identity lookups are cached once resolved to decrease load on the FHIR service. If you plan on reusing devices with multiple patients, it's advised that you create a virtual device resource that is specific to the patient and send the virtual device identifier in the device message payload. The virtual device can be linked to the actual device resource as a parent.
If no Device resource for a given device identifier exists in the FHIR service,
> [!NOTE] > The **Resolution type** can also be adjusted post deployment of the MedTech service if a different **Resolution type** is later required.
-The MedTech service provides near real-time processing and also attempts to reduce the number of requests made to the FHIR service by grouping requests into batches of 300 [normalized messages](#normalize). If there's a low volume of data, and 300 normalized messages haven't been added to the group, then the corresponding FHIR Observations in that group are persisted to the FHIR service after ~five minutes. This means that when there's fewer than 300 normalized messages to be processed, there may be a delay of ~five minutes before FHIR Observations are created or updated in the FHIR service.
+The MedTech service provides near real-time processing and also attempts to reduce the number of requests made to the FHIR service by grouping requests into batches of 300 [normalized messages](#normalize). If there's a low volume of data, and 300 normalized messages haven't been added to the group, then the corresponding FHIR Observations in that group are persisted to the FHIR service after approximately five minutes. When there are fewer than 300 normalized messages to be processed, there may be a delay of approximately five minutes before FHIR Observations are created or updated in the FHIR service.
> [!NOTE]
-> When multiple device messages contain data for the same FHIR Observation, have the same timestamp, and are sent within the same device message batch (for example, within the ~five minute window or in groups of 300 normalized messages), only the data corresponding to the latest device message for that FHIR Observation is persisted.
+> When multiple device messages contain data for the same FHIR Observation, have the same timestamp, and are sent within the same device message batch (for example, within the five minute window or in groups of 300 normalized messages), only the data corresponding to the latest device message for that FHIR Observation is persisted.
> > For example: >
The MedTech service provides near real-time processing and also attempts to redu
> } > ``` >
-> Assuming these device messages were ingested within the same ~five minute window or in the same group of 300 normalized messages, and since the `measurementdatetime` is the same for both device messages (indicating these contain data for the same FHIR Observation), only device message 2 is persisted to represent the latest/most recent data.
+> Assuming these device messages were ingested within the same five minute window or in the same group of 300 normalized messages, and since the `measurementdatetime` is the same for both device messages (indicating these contain data for the same FHIR Observation), only device message 2 is persisted to represent the latest/most recent data.
## Persist
-Persist is the final stage where the FHIR Observation resources from the transform stage are persisted in the [FHIR service](../fhir/overview.md). If the FHIR Observation is new, it's created in the FHIR service. If the FHIR Observation already existed, it gets updated in the FHIR service. The FHIR service uses the MedTech service's [system-assigned managed identity](../../active-directory/managed-identities-azure-resources/overview.md#managed-identity-types) and [Azure resource-based access control (Azure RBAC)](../../role-based-access-control/overview.md) for secure access to the FHIR service.
+Persist is the final stage where the FHIR Observations from the transform stage are persisted in the [FHIR service](../fhir/overview.md). If the FHIR Observation is new, it's created in the FHIR service. If the FHIR Observation already existed, it gets updated in the FHIR service. The FHIR service uses the MedTech service's [system-assigned managed identity](../../active-directory/managed-identities-azure-resources/overview.md#managed-identity-types) and [Azure resource-based access control (Azure RBAC)](../../role-based-access-control/overview.md) for secure access to the FHIR service.
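As a rough illustration of the access model described above, the following Azure CLI sketch grants the MedTech service's system-assigned managed identity a FHIR data writer role on the FHIR service. The principal ID, FHIR resource ID, and role name are placeholders or assumptions; check the built-in roles available in your environment.

```bash
# Grant the MedTech service's system-assigned managed identity write access to the FHIR service.
# <medtech-principal-id> and <fhir-resource-id> are placeholders for your own values.
az role assignment create \
  --assignee-object-id "<medtech-principal-id>" \
  --assignee-principal-type ServicePrincipal \
  --role "FHIR Data Writer" \
  --scope "<fhir-resource-id>"
```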
## Next steps
healthcare-apis Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/overview.md
Previously updated : 04/06/2023 Last updated : 04/13/2023
> [!NOTE] > [Fast Healthcare Interoperability Resources (FHIR&#174;)](https://www.hl7.org/fhir/) is an open healthcare specification.
-This article provides an introductory overview of the MedTech service. The MedTech service in Azure Health Data Services is a Platform as a service (PaaS) that enables you to gather data from diverse devices and convert it into a FHIR service format. The MedTech service's device data translation capabilities make it possible to transform a wide variety of data into a unified FHIR format that provides secure data management in a cloud environment.
+This article provides an introductory overview of the MedTech service. The MedTech service is a Platform as a Service (PaaS) within Azure Health Data Services. The MedTech service enables you to ingest device data, transform it into a unified FHIR format, and store it in an enterprise-scale, secure, and compliant cloud environment.
-The MedTech service is important because data can be difficult to access or lost when it comes from diverse or incompatible devices, systems, or formats. If this information isn't easy to access, it may have a negative effect on gaining key insights and capturing trends. The ability to transform many types of device data into a unified FHIR format enables the MedTech service to successfully link device data with other datasets to support the end user. As a result, this capability can facilitate the discovery of important clinical insights and trend capture. It can also help make connections to new device applications and enable advanced research projects.
+The MedTech service was built to help customers who deal with the challenge of gaining relevant insights from device data coming from multiple and diverse sources. No matter the device or structure, the MedTech service normalizes that device data into a common format, allowing the end user to easily capture trends, run analytics, and build Artificial Intelligence (AI) models. In the enterprise healthcare setting, the MedTech service is used in the context of remote patient monitoring, virtual health, and clinical trials.
The following video presents an overview of the MedTech service:-
+>
> [!VIDEO https://youtube.com/embed/_nMirYYU0pg] ## How the MedTech service works
-The following diagram outlines the basic elements of how the MedTech service transforms device data into a standardized FHIR resource in the cloud.
+The following diagram outlines the basic elements of how the MedTech service transforms device data into standardized [FHIR Observations](https://www.hl7.org/fhir/R4/observation.html) for persistence in the FHIR service.
:::image type="content" source="media/overview/what-is-simple-diagram.png" alt-text="Simple diagram showing the MedTech service." lightbox="media/overview/what-is-simple-diagram.png":::
-These elements are:
-
-### Deployment
-
-In order to implement the MedTech service, you need to have an Azure subscription, set up a workspace, and set up a namespace to deploy three Azure
-
-### Devices
-
-After the PaaS deployment is completed, high-velocity and low-velocity data can be collected from a wide range of JSON-compatible IoMT devices, systems, and formats.
-
-### Event Hubs service
+The MedTech service processes device data in five stages:
- IoT data is then sent from a device over the Internet to Event Hubs service to hold it temporarily in the cloud. The event hub can asynchronously process millions of data points per second, eliminating data traffic jams, making it possible to easily handle huge amounts of information in real-time.
+1. **Ingest** - The MedTech service asynchronously loads the device messages from the event hub at high speed.
-### The MedTech service
-
-When the device data has been loaded into Event Hubs service, the MedTech service can then process it in five stages to convert the data into a unified FHIR format.
-
-These stages are:
-
-1. **Ingest** - The MedTech service asynchronously loads the device data from the event hub at high speed.
-
-2. **Normalize** - After the data has been ingested, the MedTech service uses device mapping to streamline and translate it into a normalized schema format.
+2. **Normalize** - After the device message has been ingested, the MedTech service uses the device mapping to streamline and convert the device data into a normalized schema format.
3. **Group** - The normalized data is then grouped by parameters to prepare it for the next stage of processing. The parameters are: device identity, measurement type, time period, and (optionally) correlation ID.
-4. **Transform** - When the normalized data is grouped, it's transformed through FHIR destination mapping templates and is ready to become FHIR Observation resources.
-
-5. **Persist** - After the transformation is done, the new data is sent to FHIR service and persisted as an Observation resource.
-
-### FHIR service
+4. **Transform** - When the normalized data is grouped, it's transformed through the FHIR destination mapping and is ready to become FHIR Observations.
-The MedTech service data processing is complete when the new FHIR Observation resource is successfully persisted, saved into the FHIR service, and ready for use.
+5. **Persist** - After the transformation is done, the new data is sent to the FHIR service and persisted as FHIR Observations.
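After device data has been persisted, you can confirm the resulting FHIR Observations directly against the FHIR service. The following sketch uses the Azure CLI to obtain an access token and a plain FHIR search; the FHIR service URL is a placeholder for your own endpoint.

```bash
# Placeholder FHIR service URL; replace with your own endpoint.
FHIR_URL="https://<workspace>-<fhir-service>.fhir.azurehealthcareapis.com"

# Get an access token for the FHIR service audience.
TOKEN=$(az account get-access-token --resource "$FHIR_URL" --query accessToken -o tsv)

# List the most recently updated Observations.
curl -sS -H "Authorization: Bearer $TOKEN" \
  "$FHIR_URL/Observation?_sort=-_lastUpdated&_count=10"
```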
## Key features of the MedTech service
-The MedTech service has many features that make it secure, configurable, scalable, and extensible.
+The MedTech service has many features that make it secure, configurable, scalable, and extensible.
### Secure
-The MedTech service delivers your data to FHIR service in Azure Health Data Services, ensuring that your data has unparalleled security and advanced threat protection. The FHIR service isolates your data in a unique database per API instance and protects it with multi-region failover. In addition, the MedTech service uses [Azure role-based access control (Azure RBAC)](../../role-based-access-control/overview.md) and a [system-assigned managed identity](../../active-directory/managed-identities-azure-resources/overview.md) for extra security and control of your MedTech service assets.
+The MedTech service delivers your device data into FHIR service, ensuring that your data has unparalleled security and advanced threat protection. The FHIR service isolates your data in a unique database per API instance and protects it with multi-region failover. In addition, the MedTech service uses [Azure role-based access control (Azure RBAC)](../../role-based-access-control/overview.md) and a [system-assigned managed identity](../../active-directory/managed-identities-azure-resources/overview.md) for extra security and control of your MedTech service assets.
### Configurable
-The MedTech service can be customized and configured by using [device](how-to-configure-device-mappings.md) and [FHIR destination](how-to-configure-fhir-mappings.md) mappings to define the filtering and transformation of your data into FHIR Observation resources.
+The MedTech service can be customized and configured by using [device](how-to-configure-device-mappings.md) and [FHIR destination](how-to-configure-fhir-mappings.md) mappings to define the filtering and transformation of your data into FHIR Observations.
Useful options could include: -- Linking devices and consumers together for enhanced insights, trend capture, interoperability between systems, and proactive and remote monitoring.
+- Link devices and consumers together for enhanced insights, trend captures, interoperability between systems, and proactive and remote monitoring.
-- FHIR observation resources that can be created or updated according to existing or new templates.
+- Update or create FHIR Observations according to existing or new mapping template types.
-- Being able to choose data terms that work best for your organization and provide consistency in device data ingestion.
+- Choose data terms that work best for your organization and provide consistency in device data ingestion.
-- Facilitating customization, editing, testing, and troubleshooting MedTech service Device and FHIR destination mappings with The [IoMT Connector Data Mapper](https://github.com/microsoft/iomt-fhir/tree/master/tools/data-mapper) open-source tool.
+- Customize, edit, test, and troubleshoot MedTech service device and FHIR destination mappings with the [Mapping debugger](how-to-use-mapping-debugger.md) tool.
### Scalable
-The MedTech service enables developers to easily modify and extend the capabilities of the MedTech service to support new device mapping template types and FHIR resources.
+The MedTech service enables you to easily modify and extend the capabilities of the MedTech service to support new device mapping template types and FHIR resources.
### Integration
-The MedTech service may also be integrated into our [open-source projects](git-projects.md) for ingesting IoMT device data from these wearables:
+The MedTech service may also be integrated for ingesting device data from these wearables using our [open-source projects](git-projects.md):
- Fitbit&#174;
The following Microsoft solutions can use MedTech service for extra functionalit
In this article, you learned about the MedTech service and its capabilities.
-To learn about how the MedTech service processes device messages, see
+To learn about how the MedTech service processes device data, see
> [!div class="nextstepaction"]
-> [Overview of the MedTech service device message processing stages](overview-of-device-message-processing-stages.md)
+> [Overview of the MedTech service device data processing stages](overview-of-device-message-processing-stages.md)
To learn about the different deployment methods for the MedTech service, see
hpc-cache Az Cli Prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hpc-cache/az-cli-prerequisites.md
Title: Azure CLI prerequisites for Azure HPC Cache
description: Setup steps before you can use Azure CLI to create or modify an Azure HPC Cache + Last updated 07/08/2020
hpc-cache Hpc Cache Prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hpc-cache/hpc-cache-prerequisites.md
Title: Azure HPC Cache prerequisites
description: Prerequisites for using Azure HPC Cache + Last updated 2/15/2023
iot-central Howto Monitor Devices Azure Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-monitor-devices-azure-cli.md
Last updated 06/16/2022
ms.tool: azure-cli-+ # This topic applies to device developers and solution builders.
az iot central device twin show --app-id <app-id> --device-id <device-id>
## Next steps
-A suggested next step is to learn [how to connect Azure IoT Edge for Linux on Windows (EFLOW)](./howto-connect-eflow.md).
+A suggested next step is to learn [how to connect Azure IoT Edge for Linux on Windows (EFLOW)](./howto-connect-eflow.md).
iot-dps Concepts Control Access Dps Azure Ad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/concepts-control-access-dps-azure-ad.md
Title: Access control and security for DPS with Azure AD
description: Control access to Azure IoT Hub Device Provisioning Service (DPS) (DPS) for back-end apps. Includes information about Azure Active Directory and RBAC. - Last updated 02/07/2022-+ # Control access to Azure IoT Hub Device Provisioning Service (DPS) by using Azure Active Directory (preview)
iot-dps Iot Dps Ip Filtering https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/iot-dps-ip-filtering.md
-
+ Title: Microsoft Azure IoT DPS IP connection filters description: How to use IP filtering to block connections from specific IP addresses to your Azure IoT DPS instance. + Last updated 11/12/2021
iot-dps Quick Setup Auto Provision Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/quick-setup-auto-provision-bicep.md
-+ # Quickstart: Set up the IoT Hub Device Provisioning Service (DPS) with Bicep
iot-dps Quick Setup Auto Provision Rm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/quick-setup-auto-provision-rm.md
Last updated 01/27/2021
-+ # Quickstart: Set up the IoT Hub Device Provisioning Service (DPS) with an ARM template
iot-edge How To Install Iot Edge Ubuntuvm Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-install-iot-edge-ubuntuvm-bicep.md
Title: Run Azure IoT Edge on Ubuntu Virtual Machines by using Bicep | Microsoft
description: Azure IoT Edge setup instructions for Ubuntu LTS Virtual Machines by using Bicep + Last updated 01/05/2023 - # Run Azure IoT Edge on Ubuntu Virtual Machines by using Bicep
iot-edge How To Install Iot Edge Ubuntuvm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-install-iot-edge-ubuntuvm.md
# this is the PM responsible + Last updated 01/20/2022 - # Run Azure IoT Edge on Ubuntu Virtual Machines
iot-edge How To Monitor Iot Edge Deployments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-monitor-iot-edge-deployments.md
Last updated 9/22/2022
+ # Monitor IoT Edge deployments
iot-hub Device Management Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/device-management-cli.md
Use a direct method to initiate device management actions (such as reboot, facto
* Providing status updates through *reported properties* to IoT Hub.
-You can use Azure CLI to run device twin queries to report on the progress of your device management actions.
+You can use Azure CLI to run device twin queries to report on the progress of your device management actions. For more information about using direct methods, see [Cloud-to-device communication guidance](iot-hub-devguide-c2d-guidance.md).
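As a concrete sketch of this pattern, the following Azure CLI commands invoke a direct method on a device and then query device twins for progress. The hub name, device ID, method name, and reported property are placeholders for whatever your devices actually implement.

```bash
# Invoke a direct method named "reboot" on a device (names are placeholders).
az iot hub invoke-device-method \
  --hub-name myIoTHub \
  --device-id myDevice \
  --method-name reboot

# Query device twins for a reported property your device updates during the operation.
az iot hub query \
  --hub-name myIoTHub \
  --query-command "SELECT deviceId, properties.reported.lastReboot FROM devices"
```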
This article shows you how to create two Azure CLI sessions:
iot-hub Device Twins Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/device-twins-cli.md
This article shows you how to:
-* Use a simulated device to report its connectivity channel as a reported property on the device twin.
+* Use a simulated device to report its connectivity channel as a *reported property* on the device twin.
* Query devices using filters on the tags and properties previously created.
+For more information about using device twin reported properties, see [Device-to-cloud communication guidance](iot-hub-devguide-d2c-guidance.md).
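For a quick look at the kind of query used later in this article, the following Azure CLI sketch filters devices on a tag and a reported property; the hub name, tag, and property names are placeholders.

```bash
# Query device twins by a tag and a reported property (names are placeholders).
az iot hub query \
  --hub-name myIoTHub \
  --query-command "SELECT deviceId FROM devices WHERE tags.location = 'building9' AND properties.reported.connectivity = 'cellular'"
```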
+ This article shows you how to create two Azure CLI sessions: * A session that creates a simulated device. The simulated device reports its connectivity channel as a reported property on the device's corresponding device twin when initialized.
iot-hub Horizontal Arm Route Messages https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/horizontal-arm-route-messages.md
Last updated 08/24/2020-+ # Quickstart: Deploy an Azure IoT Hub and a storage account using an ARM template
iot-hub How To Routing Arm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/how-to-routing-arm.md
Title: Create and delete routes and endpoints by using Azure Resource Manager
description: Learn how to create and delete routes and endpoints in Azure IoT Hub by using an Azure Resource Manager template in the Azure portal. + Last updated 12/15/2022
To view your new route in the Azure portal, go to your IoT Hub resource. On the
In this how-to article, you learned how to create a route and endpoint for Event Hubs, Service Bus queues and topics, and Azure Storage.
-To learn more about message routing, see [Tutorial: Send device data to Azure Storage by using IoT Hub message routing](./tutorial-routing.md?tabs=portal). In the tutorial, you create a storage route and test it with a device in your IoT hub.
+To learn more about message routing, see [Tutorial: Send device data to Azure Storage by using IoT Hub message routing](./tutorial-routing.md?tabs=portal). In the tutorial, you create a storage route and test it with a device in your IoT hub.
iot-hub Iot Hub Create Using Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-create-using-cli.md
+ Last updated 08/23/2018
az iot hub delete --name {your iot hub name} -\
Learn more about the commands available in the Microsoft Azure IoT extension for Azure CLI: * [IoT Hub-specific commands (az iot hub)](/cli/azure/iot/hub)
-* [All commands (az iot)](/cli/azure/iot)
+* [All commands (az iot)](/cli/azure/iot)
iot-hub Iot Hub Device Management Iot Extension Azure Cli 2 0 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-device-management-iot-extension-azure-cli-2-0.md
+ Last updated 01/16/2018+ # Use the IoT extension for Azure CLI for Azure IoT Hub device management
az iot hub query --hub-name <your hub name> \
You've learned how to monitor device-to-cloud messages and send cloud-to-device messages between your IoT device and Azure IoT Hub.
iot-hub Iot Hub How To Order Connection State Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-how-to-order-connection-state-events.md
-+ Last updated 04/11/2019
To remove an Azure Cosmos DB account from the Azure portal, go to your resource
* Learn about what else you can do with [Event Grid](../event-grid/overview.md)
-* Learn how to use Event Grid and Azure Monitor to [Monitor, diagnose, and troubleshoot device connectivity to IoT Hub](iot-hub-troubleshoot-connectivity.md)
+* Learn how to use Event Grid and Azure Monitor to [Monitor, diagnose, and troubleshoot device connectivity to IoT Hub](iot-hub-troubleshoot-connectivity.md)
iot-hub Iot Hub Rm Template Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-rm-template-powershell.md
Last updated 04/02/2019 -+ # Create an IoT hub using Azure Resource Manager template (PowerShell)
To explore more capabilities of IoT Hub, see:
[lnk-sdks]: iot-hub-devguide-sdks.md
-[lnk-iotedge]: ../iot-edge/quickstart-linux.md
+[lnk-iotedge]: ../iot-edge/quickstart-linux.md
iot-hub Quickstart Bicep Route Messages https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/quickstart-bicep-route-messages.md
Last updated 05/11/2022-+ # Quickstart: Deploy an Azure IoT Hub and a storage account using Bicep
key-vault Quick Create Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/quick-create-powershell.md
Last updated 01/27/2021 -+ #Customer intent: As a security admin who is new to Azure, I want to use Key Vault to securely store keys and passwords in Azure # Quickstart: Create a key vault using PowerShell
key-vault Vault Create Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/vault-create-template.md
description: This article shows how to create Azure key vaults and vault access
tags: azure-resource-manager+
key-vault How To Configure Key Rotation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/keys/how-to-configure-key-rotation.md
tags: 'rotation'+
key-vault Quick Create Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/keys/quick-create-bicep.md
Title: Azure Quickstart - Create an Azure key vault and a key by using Bicep | M
description: Quickstart showing how to create Azure key vaults, and add key to the vaults by using Bicep. tags: azure-resource-manager+ Last updated 06/29/2022- #Customer intent: As a security admin who is new to Azure, I want to use Key Vault to securely store keys and passwords in Azure.- # Quickstart: Create an Azure key vault and a key by using Bicep
In this quickstart, you created a key vault and a key using a Bicep file, and va
- Read an [Overview of Azure Key Vault](../general/overview.md) - Learn more about [Azure Resource Manager](../../azure-resource-manager/management/overview.md)-- Review the [Key Vault security overview](../general/security-features.md)
+- Review the [Key Vault security overview](../general/security-features.md)
key-vault Quick Create Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/keys/quick-create-template.md
tags: azure-resource-manager
-+ Last updated 06/28/2022 #Customer intent: As a security admin who is new to Azure, I want to use Key Vault to securely store keys and passwords in Azure.
key-vault Authorize Azure Resource Manager https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/managed-hsm/authorize-azure-resource-manager.md
description: Learn how to allow key management operations through ARM
tags: azure-resource-manager+ Last updated 11/14/2022 - # Customer intent: As a managed HSM administrator, I want to authorize Azure Resource Manager to perform key management operations via Azure Managed HSM
key-vault Azure Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/managed-hsm/azure-policy.md
Last updated 03/31/2021 + - # Integrate Azure Managed HSM with Azure Policy (preview)
key-vault Quick Create Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/managed-hsm/quick-create-template.md
tags: azure-resource-manager-+ #Customer intent: As a security admin who is new to Azure, I want to create a managed HSM using an Azure Resource Manager template.
key-vault Quick Create Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/secrets/quick-create-bicep.md
tags: azure-resource-manager
-+ Last updated 04/08/2022 #Customer intent: As a security admin who is new to Azure, I want to use Key Vault to securely store keys and passwords in Azure.
key-vault Quick Create Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/secrets/quick-create-template.md
tags: azure-resource-manager
-+ Last updated 04/27/2021 #Customer intent: As a security admin who is new to Azure, I want to use Key Vault to securely store keys and passwords in Azure.
lighthouse Create Eligible Authorizations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lighthouse/how-to/create-eligible-authorizations.md
Title: Create eligible authorizations
description: When onboarding customers to Azure Lighthouse, you can let users in your managing tenant elevate their role on a just-in-time basis. Last updated 11/28/2022 + # Create eligible authorizations
lighthouse Onboard Customer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lighthouse/how-to/onboard-customer.md
Title: Onboard a customer to Azure Lighthouse
description: Learn how to onboard a customer to Azure Lighthouse, allowing their resources to be accessed and managed by users in your tenant. Last updated 11/28/2022 -+ ms.devlang: azurecli
lighthouse Update Delegation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lighthouse/how-to/update-delegation.md
Title: Update a delegation
description: Learn how to update a delegation for a customer previously onboarded to Azure Lighthouse. Last updated 06/22/2022 + # Update a delegation
load-balancer Ipv6 Configure Template Json https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/basic/ipv6-configure-template-json.md
+ Last updated 03/31/2020
load-balancer Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/basic/overview.md
Previously updated : 04/14/2022- Last updated : 04/11/2023+ # What is Basic Azure Load Balancer?
load-balancer Configure Vm Scale Set Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/configure-vm-scale-set-cli.md
Last updated 12/15/2022-+ # Configure a Virtual Machine Scale Set with an existing Azure Load Balancer using the Azure CLI
load-balancer Configure Vm Scale Set Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/configure-vm-scale-set-powershell.md
Last updated 12/15/2022-+ ms.devlang: azurecli
load-balancer Ipv6 Configure Standard Load Balancer Template Json https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/ipv6-configure-standard-load-balancer-template-json.md
Last updated 03/31/2020 -+ # Deploy an IPv6 dual stack application in Azure virtual network - Template
load-balancer Move Across Regions External Load Balancer Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/move-across-regions-external-load-balancer-portal.md
Last updated 09/17/2019 -+ # Move an external load balancer to another region using the Azure portal
load-balancer Move Across Regions External Load Balancer Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/move-across-regions-external-load-balancer-powershell.md
Last updated 09/17/2019 -+ # Move Azure external Load Balancer to another region using Azure PowerShell
load-balancer Move Across Regions Internal Load Balancer Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/move-across-regions-internal-load-balancer-portal.md
Last updated 09/18/2019 -+ # Move Azure internal Load Balancer to another region using the Azure portal
load-balancer Move Across Regions Internal Load Balancer Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/move-across-regions-internal-load-balancer-powershell.md
Last updated 09/17/2019 -+ # Move Azure internal Load Balancer to another region using PowerShell
load-balancer Powershell Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/powershell-samples.md
Last updated 02/28/2023 -+ # Azure PowerShell Samples for Load Balancer
load-balancer Quickstart Load Balancer Standard Internal Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/quickstart-load-balancer-standard-internal-bicep.md
Last updated 04/29/2022-+ # Quickstart: Create an internal load balancer to load balance VMs using Bicep
load-balancer Quickstart Load Balancer Standard Internal Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/quickstart-load-balancer-standard-internal-template.md
Last updated 12/15/2022-+ # Quickstart: Create an internal load balancer to load balance VMs using an ARM template
load-balancer Quickstart Load Balancer Standard Public Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/quickstart-load-balancer-standard-public-bicep.md
Last updated 08/17/2022 -+ #Customer intent: I want to create a load balancer by using a Bicep file so that I can load balance internet traffic to VMs.
load-balancer Quickstart Load Balancer Standard Public Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/quickstart-load-balancer-standard-public-template.md
Last updated 12/13/2022 -+ #Customer intent: I want to create a load balancer by using an Azure Resource Manager template so that I can load balance internet traffic to VMs.
Azure PowerShell is used to deploy the template. You can also use the Azure port
1. Select **Resource groups** from the left pane.
-1. Select the resource group that you created in the previous section. The default resource group name is the project name with **`rg`** appended.
+1. Select the resource group that you created in the previous section. The default resource group name is the project name with **-rg** appended.
1. Select the load balancer. Its default name is the project name with **-lb** appended.
load-balancer Upgrade Basic Standard Virtual Machine Scale Sets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/upgrade-basic-standard-virtual-machine-scale-sets.md
# Upgrade a basic load balancer used with Virtual Machine Scale Sets >[!Important]
->On September 30, 2025, Basic Load Balancer will be retired. For more information, see the [official announcement](https://azure.microsoft.com/updates/azure-basic-load-balancer-will-be-retired-on-30-september-2025-upgrade-to-standard-load-balancer/). If you are currently using Basic Load Balancer, make sure to upgrade to Standard Load Balancer prior to the retirement date. This article will help guide you through the upgrade process.
+>On September 30, 2025, Basic Load Balancer will be retired. For more information, see the [official announcement](https://azure.microsoft.com/updates/azure-basic-load-balancer-will-be-retired-on-30-september-2025-upgrade-to-standard-load-balancer/). If you are currently using Basic Load Balancer, make sure to upgrade to Standard Load Balancer prior to the retirement date.
[Azure Standard Load Balancer](load-balancer-overview.md) offers a rich set of functionality and high availability through zone redundancy. To learn more about Load Balancer SKU, see [comparison table](./skus.md#skus).
The module is designed to accommodate failures, either due to unhandled errors o
## Next steps
-[Learn about Azure Load Balancer](load-balancer-overview.md)
+[Learn about Azure Load Balancer](load-balancer-overview.md)
load-balancer Upgrade Basic Standard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/upgrade-basic-standard.md
# Upgrade from a basic public to standard public load balancer >[!Important]
->On September 30, 2025, Basic Load Balancer will be retired. For more information, see the [official announcement](https://azure.microsoft.com/updates/azure-basic-load-balancer-will-be-retired-on-30-september-2025-upgrade-to-standard-load-balancer/). If you are currently using Basic Load Balancer, make sure to upgrade to Standard Load Balancer prior to the retirement date. This article will help guide you through the upgrade process.
+>On September 30, 2025, Basic Load Balancer will be retired. For more information, see the [official announcement](https://azure.microsoft.com/updates/azure-basic-load-balancer-will-be-retired-on-30-september-2025-upgrade-to-standard-load-balancer/). If you are currently using Basic Load Balancer, make sure to upgrade to Standard Load Balancer prior to the retirement date.
[Azure Standard Load Balancer](load-balancer-overview.md) offers a rich set of functionality and high availability through zone redundancy. To learn more about Azure Load Balancer SKUs, see [comparison table](./skus.md#skus).
load-balancer Upgrade Basicinternal Standard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/upgrade-basicInternal-standard.md
# Upgrade an internal basic load balancer - No outbound connections required >[!Important]
->On September 30, 2025, Basic Load Balancer will be retired. For more information, see the [official announcement](https://azure.microsoft.com/updates/azure-basic-load-balancer-will-be-retired-on-30-september-2025-upgrade-to-standard-load-balancer/). If you are currently using Basic Load Balancer, make sure to upgrade to Standard Load Balancer prior to the retirement date. This article will help guide you through the upgrade process.
+>On September 30, 2025, Basic Load Balancer will be retired. For more information, see the [official announcement](https://azure.microsoft.com/updates/azure-basic-load-balancer-will-be-retired-on-30-september-2025-upgrade-to-standard-load-balancer/). If you are currently using Basic Load Balancer, make sure to upgrade to Standard Load Balancer prior to the retirement date.
[Azure Standard Load Balancer](load-balancer-overview.md) offers a rich set of functionality and high availability through zone redundancy. To learn more about Load Balancer SKU, see [comparison table](./skus.md#skus).
load-balancer Upgrade Internalbasic To Publicstandard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/upgrade-internalbasic-to-publicstandard.md
# Upgrade an internal basic load balancer - Outbound connections required >[!Important]
->On September 30, 2025, Basic Load Balancer will be retired. For more information, see the [official announcement](https://azure.microsoft.com/updates/azure-basic-load-balancer-will-be-retired-on-30-september-2025-upgrade-to-standard-load-balancer/). If you are currently using Basic Load Balancer, make sure to upgrade to Standard Load Balancer prior to the retirement date. This article will help guide you through the upgrade process.
+>On September 30, 2025, Basic Load Balancer will be retired. For more information, see the [official announcement](https://azure.microsoft.com/updates/azure-basic-load-balancer-will-be-retired-on-30-september-2025-upgrade-to-standard-load-balancer/). If you are currently using Basic Load Balancer, make sure to upgrade to Standard Load Balancer prior to the retirement date.
A standard [Azure Load Balancer](load-balancer-overview.md) offers increased functionality and high availability through zone redundancy. For more information about Azure Load Balancer SKUs, see [Azure Load Balancer SKUs](./skus.md#skus). A standard internal Azure Load Balancer doesn't provide outbound connectivity. The PowerShell script in this article migrates the basic load balancer configuration to a standard public load balancer.
logic-apps Create Managed Service Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/create-managed-service-identity.md
ms.suite: integration Previously updated : 10/12/2022- Last updated : 04/13/2023+ # Authenticate access to Azure resources with managed identities in Azure Logic Apps
The following table lists the connectors that support using a managed identity i
| Connector type | Supported connectors |
|-|-|
| Built-in | - Azure API Management <br>- Azure App Services <br>- Azure Functions <br>- HTTP <br>- HTTP + Webhook <p>**Note**: HTTP operations can authenticate connections to Azure Storage accounts behind Azure firewalls with the system-assigned identity. However, they don't support the user-assigned managed identity for authenticating the same connections. |
-| Managed | - Azure AD Identity Protection <br>- Azure App Service <br>- Azure Automation <br>- Azure Blob Storage <br>- Azure Container Instance <br>- Azure Cosmos DB <br>- Azure Data Explorer <br>- Azure Data Factory <br>- Azure Data Lake <br>- Azure Event Grid <br>- Azure Event Hubs <br>- Azure IoT Central V2 <br>- Azure IoT Central V3 <br>- Azure Key Vault <br>- Azure Log Analytics <br>- Azure Queues <br>- Azure Resource Manager <br>- Azure Service Bus <br>- Azure Sentinel <br>- Azure VM <br>- HTTP with Azure AD <br>- SQL Server |
+| Managed | - Azure AD Identity Protection <br>- Azure App Service <br>- Azure Automation <br>- Azure Blob Storage <br>- Azure Container Instance <br>- Azure Cosmos DB <br>- Azure Data Explorer <br>- Azure Data Factory <br>- Azure Data Lake <br>- Azure Event Grid <br>- Azure Event Hubs <br>- Azure IoT Central V2 <br>- Azure IoT Central V3 <br>- Azure Key Vault <br>- Azure Log Analytics <br>- Azure Queues <br>- Azure Resource Manager <br>- Azure Service Bus <br>- Azure Sentinel <br>- Azure Table Storage <br>- Azure VM <br>- HTTP with Azure AD <br>- SQL Server |
### [Standard](#tab/standard)
The following table lists the connectors that support using a managed identity i
| Connector type | Supported connectors |
|-|-|
| Built-in | - Azure Automation <br>- Azure Blob Storage <br>- Azure Event Hubs <br>- Azure Service Bus <br>- Azure Queues <br>- Azure Tables <br>- HTTP <br>- HTTP + Webhook <br>- SQL Server <br><br>**Note**: HTTP operations can authenticate connections to Azure Storage accounts behind Azure firewalls with the system-assigned identity. |
-| Managed connector | - Azure AD Identity Protection <br>- Azure App Service <br>- Azure Automation <br>- Azure Blob Storage <br>- Azure Container Instance <br>- Azure Cosmos DB <br>- Azure Data Explorer <br>- Azure Data Factory <br>- Azure Data Lake <br>- Azure Event Grid <br>- Azure Event Hubs <br>- Azure IoT Central V2 <br>- Azure IoT Central V3 <br>- Azure Key Vault <br>- Azure Log Analytics <br>- Azure Queues <br>- Azure Resource Manager <br>- Azure Service Bus <br>- Azure Sentinel <br>- Azure VM <br>- HTTP with Azure AD <br>- SQL Server |
+| Managed | - Azure AD Identity Protection <br>- Azure App Service <br>- Azure Automation <br>- Azure Blob Storage <br>- Azure Container Instance <br>- Azure Cosmos DB <br>- Azure Data Explorer <br>- Azure Data Factory <br>- Azure Data Lake <br>- Azure Event Grid <br>- Azure Event Hubs <br>- Azure IoT Central V2 <br>- Azure IoT Central V3 <br>- Azure Key Vault <br>- Azure Log Analytics <br>- Azure Queues <br>- Azure Resource Manager <br>- Azure Service Bus <br>- Azure Sentinel <br>- Azure Table Storage <br>- Azure VM <br>- HTTP with Azure AD <br>- SQL Server |
logic-apps Deploy Single Tenant Logic Apps Private Storage Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/deploy-single-tenant-logic-apps-private-storage-account.md
ms.suite: integration -+ Last updated 10/18/2022- # As a developer, I want to deploy Standard logic apps to Azure storage accounts that use private endpoints.
As the logic app isn't running when these errors occur, you can't use the Kudu c
## Next steps -- [Logic Apps Anywhere: Networking possibilities with Logic Apps (single-tenant)](https://techcommunity.microsoft.com/t5/integrations-on-azure/logic-apps-anywhere-networking-possibilities-with-logic-app/ba-p/2105047)
+- [Logic Apps Anywhere: Networking possibilities with Logic Apps (single-tenant)](https://techcommunity.microsoft.com/t5/integrations-on-azure/logic-apps-anywhere-networking-possibilities-with-logic-app/ba-p/2105047)
logic-apps Export From Consumption To Standard Logic App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/export-from-consumption-to-standard-logic-app.md
This article provides information about the export process and shows how to expo
## Known issues and limitations -- The following logic apps and scenarios are currently ineligible for export:-
- - Logic apps that use custom connectors
- - Logic apps that use the Azure API Management connector
- - The export tool doesn't export any infrastructure information, such as integration account settings. - The export tool can export logic app workflows with triggers that have concurrency settings. However, single-tenant Azure Logic Apps ignores these settings. - Logic apps must exist in the same region if you want to export them within the same Standard logic app project. -- For now, connectors deploy as their *managed* versions, which appear in the designer under the **Azure** tab. The export tool will have the capability to export connectors that have a built-in, service provider counterpart, when the latter gain parity with their Azure versions. The export tool automatically makes the conversion when Azure connector is available to export as a built-in, service provider connector.- - By default, connection credentials aren't cloned from source logic app workflows. Before your logic app workflows can run, you'll have to reauthenticate these connections after export.
+- By default, if an Azure connector has a built-in connector version, the export tool automatically converts the Azure connector to the built-in connector. No option exists to opt out from this behavior.
++ ## Exportable operation types | Operation | JSON type |
This article provides information about the export process and shows how to expo
- One or more logic apps to deploy to the same subscription and Azure region, for example, East US 2. -- Azure contributor subscription-level access to the subscription where the logic apps are currently deployed, not just resource group-level access.
+- Azure reader subscription-level access to the subscription where the logic apps are currently deployed.
+
+- Azure contributor resource group-level access, if you select the option for **Deploy managed connectors**.
+ - Review and meet the requirements for [how to set up Visual Studio Code with the Azure Logic Apps (Standard) extension](create-single-tenant-workflows-visual-studio-code.md#prerequisites).
+> [!NOTE]
+>
+ > Make sure to install version 2.0.16 or higher for the Azure Logic Apps (Standard) extension for Visual Studio Code.
+ > Some conversion scenarios require the latest workflow designer, which is available starting with this version.
+ ## Group logic apps for export With the Azure Logic Apps (Standard) extension, you can combine multiple Consumption logic app workflows into a single Standard logic app project. In single-tenant Azure Logic Apps, one Standard logic app resource can have multiple workflows. With this approach, you can pre-validate your workflows so that you don't miss any dependencies when you select logic apps for export.
Some exported logic app workflows require post-export remediation steps to run o
If you export actions that depend on an integration account, you have to manually set up your Standard logic app with a reference link to the integration account that contains the required artifacts. For more information, review [Link integration account to a Standard logic app](logic-apps-enterprise-integration-create-integration-account.md#link-account).
+### Batch actions and settings
+
+If you export actions that use Batch actions with multiple configurations stored in an integration account, you have to manually configure your Batch actions with the correct values after export. For more information, review [Send, receive, and batch process messages in Azure Logic Apps](logic-apps-batch-process-send-receive-messages.md#create-batch-receiver).
++ ## Project folder structure After the export process finishes, your Standard logic app project contains new folders and files alongside most others in a [typical Standard logic app project](create-single-tenant-workflows-visual-studio-code.md).
logic-apps Export From Ise To Standard Logic App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/export-from-ise-to-standard-logic-app.md
This article provides information about the export process and shows how to expo
## Known issues and limitations -- To run the export tool, you must be on the same network as your ISE. So, if your ISE is internal, you have to run the export tool from a Visual Studio Code instance that can access your ISE through the internal network. Otherwise, you can't download the exported package or files.--- The following logic apps and scenarios are currently ineligible for export:-
- - Consumption workflows in multi-tenant Azure Logic Apps
- - Logic apps that use custom connectors
- - Logic apps that use the Azure API Management connector
- - Logic apps that use the Azure Functions connector
--- The export tool doesn't export any infrastructure information, such as virtual network dependencies or integration account settings.
+- The export tool doesn't export any infrastructure information, such as integration account settings.
- The export tool can export logic app workflows with triggers that have concurrency settings. However, single-tenant Azure Logic Apps ignores these settings. -- For now, connectors with the **ISE** label deploy as their *managed* versions, which appear in the designer under the **Azure** tab. The export tool will have the capability to export **ISE** connectors as built-in, service provider connectors when the latter gain parity with their ISE versions. The export tool automatically makes the conversion when an **ISE** connector is available to export as a built-in, service provider connector.
+- Logic apps must exist in the same region if you want to export them within the same Standard logic app project.
+
+- By default, connection credentials aren't cloned from source logic app workflows. Before your logic app workflows can run, you'll have to reauthenticate these connections after export.
-- Currently, connection credentials aren't cloned from source logic app workflows. Before your logic app workflows can run, you'll have to reauthenticate these connections after export.
+- By default, if an Azure connector has a built-in connector version, the export tool automatically converts the Azure connector to the built-in connector. No option exists to opt out from this behavior.
## Exportable operation types
This article provides information about the export process and shows how to expo
## Prerequisites -- An existing ISE with the logic app workflows that you want to export.
+- One or more logic apps to deploy to the same subscription and Azure region, for example, East US 2.
-- Azure contributor subscription-level access to the ISE, not just resource group-level access.
+- Azure reader subscription-level access to the subscription where the logic apps are currently deployed.
-- To include and deploy managed connections in your workflows, you'll need an existing Azure resource group for deploying these connections. This option is recommended only for non-production environments.
+- Azure contributor resource group-level access, if you select the option for **Deploy managed connectors**.
-- Review and meet the requirements for [how to set up Visual Studio Code with the Azure Logic Apps (Standard) extension](create-single-tenant-workflows-visual-studio-code.md#prerequisites).
+- Review and meet the requirements for [how to set up Visual Studio Code with the Azure Logic Apps (Standard) extension](create-single-tenant-workflows-visual-studio-code.md#prerequisites).
+
+> [!NOTE]
+>
+ > Make sure to install version 2.0.16 or higher for the Azure Logic Apps (Standard) extension for Visual Studio Code.
+ > Some conversion scenarios require the latest workflow designer, which is available starting with this version.
## Group logic apps for export
Some exported logic app workflows require post-export remediation steps to run o
If you export actions that depend on an integration account, you have to manually set up your Standard logic app with a reference link to the integration account that contains the required artifacts. For more information, review [Link integration account to a Standard logic app](logic-apps-enterprise-integration-create-integration-account.md#link-account).
+### Batch actions and settings
+
+If you export actions that use Batch actions with multiple configurations stored in an integration account, you have to manually configure your Batch actions with the correct values after export. For more information, review [Send, receive, and batch process messages in Azure Logic Apps](logic-apps-batch-process-send-receive-messages.md#create-batch-receiver).
+ ## Project folder structure After the export process finishes, your Standard logic app project contains new folders and files alongside most others in a [typical Standard logic app project](create-single-tenant-workflows-visual-studio-code.md).
logic-apps Export From Microsoft Flow Logic App Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/export-from-microsoft-flow-logic-app-template.md
ms.suite: integration + Last updated 01/23/2023
logic-apps Logic Apps Azure Resource Manager Templates Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-azure-resource-manager-templates-overview.md
ms.suite: integration + Last updated 08/20/2022
logic-apps Logic Apps Create Azure Resource Manager Templates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-create-azure-resource-manager-templates.md
ms.suite: integration + Last updated 08/20/2022
logic-apps Logic Apps Deploy Azure Resource Manager Templates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-deploy-azure-resource-manager-templates.md
ms.suite: integration
Last updated 09/07/2022-+ ms.devlang: azurecli
logic-apps Logic Apps Securing A Logic App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-securing-a-logic-app.md
ms.suite: integration Previously updated : 10/12/2022 Last updated : 04/13/2023
The following table identifies the authentication types that are available on th
| [Client Certificate](#client-certificate-authentication) | Azure API Management, Azure App Services, HTTP, HTTP + Swagger, HTTP Webhook |
| [Active Directory OAuth](#azure-active-directory-oauth-authentication) | - **Consumption**: Azure API Management, Azure App Services, Azure Functions, HTTP, HTTP + Swagger, HTTP Webhook <br><br>- **Standard**: Azure Automation, Azure Blob Storage, Azure Event Hubs, Azure Queues, Azure Service Bus, Azure Tables, HTTP, HTTP Webhook, SQL Server |
| [Raw](#raw-authentication) | Azure API Management, Azure App Services, Azure Functions, HTTP, HTTP + Swagger, HTTP Webhook |
-| [Managed identity](#managed-identity-authentication) | **Built-in connectors**: <br><br>- **Consumption**: Azure API Management, Azure App Services, Azure Functions, HTTP, HTTP Webhook <br><br>- **Standard**: Azure Automation, Azure Blob Storage, Azure Event Hubs, Azure Queues, Azure Service Bus, Azure Tables, HTTP, HTTP Webhook, SQL Server <br><br>**Managed connectors**: Azure AD Identity Protection, Azure App Service, Azure Automation, Azure Blob Storage, Azure Container Instance, Azure Cosmos DB, Azure Data Explorer, Azure Data Factory, Azure Data Lake, Azure Event Grid, Azure Event Hubs, Azure IoT Central V2, Azure IoT Central V3, Azure Key Vault, Azure Log Analytics, Azure Queues, Azure Resource Manager, Azure Service Bus, Azure Sentinel, Azure VM, HTTP with Azure AD, SQL Server |
+| [Managed identity](#managed-identity-authentication) | **Built-in connectors**: <br><br>- **Consumption**: Azure API Management, Azure App Services, Azure Functions, HTTP, HTTP Webhook <br><br>- **Standard**: Azure Automation, Azure Blob Storage, Azure Event Hubs, Azure Queues, Azure Service Bus, Azure Tables, HTTP, HTTP Webhook, SQL Server <br><br>**Managed connectors**: Azure AD Identity Protection, Azure App Service, Azure Automation, Azure Blob Storage, Azure Container Instance, Azure Cosmos DB, Azure Data Explorer, Azure Data Factory, Azure Data Lake, Azure Event Grid, Azure Event Hubs, Azure IoT Central V2, Azure IoT Central V3, Azure Key Vault, Azure Log Analytics, Azure Queues, Azure Resource Manager, Azure Service Bus, Azure Sentinel, Azure Table Storage, Azure VM, HTTP with Azure AD, SQL Server |
<a name="secure-inbound-requests"></a>
logic-apps Quickstart Create Deploy Azure Resource Manager Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/quickstart-create-deploy-azure-resource-manager-template.md
ms.suite: integration -+ Last updated 08/20/2022 #Customer intent: As a developer, I want to create and deploy an automated workflow in multi-tenant Azure Logic Apps with Azure Resource Manager templates (ARM templates).
logic-apps Quickstart Create Deploy Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/quickstart-create-deploy-bicep.md
description: How to create and deploy a Consumption logic app workflow with Bice
ms.suite: integration -+ Last updated 08/20/2022 #Customer intent: As a developer, I want to create and deploy an automated workflow in multi-tenant Azure Logic Apps with Bicep.
logic-apps Quickstart Logic Apps Azure Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/quickstart-logic-apps-azure-powershell.md
ms.suite: integration
ms.tool: azure-powershell-+ Last updated 08/20/2022
machine-learning Image Classification Multilabel https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/component-reference-v2/image-classification-multilabel.md
Multi-label image classification is a computer vision task where the goal is to
[Follow this link](/azure/machine-learning/reference-automl-images-cli-multilabel-classification) for a full list of configurable parameters of this component.
-[Follow this link](/machine-learning/reference-automl-images-cli-multilabel-classification) for a full list of configurable parameters of this component.
- This component creates a classification model. Because classification is a supervised learning method, you need a *labeled dataset* that includes a label column with a value for all rows. This model requires a training dataset. Validation and test datasets are optional. Follow this link to get more information on [how to prepare your dataset.](../how-to-prepare-datasets-for-automl-images.md) The dataset will need a *labeled dataset* that includes a label column with a value for all rows.
-AutoML runs a number of trials (specified in `max_trials`) in parallel (`specified in max_concurrent_trial`) that try different algorithms and parameters for your model. The service iterates through ML algorithms paired with hyperparameter selections and each trial produces a model with a training score. You are able to choose the metric you want the model to optimize for. The better the score for the chosen metric the better the model is considered to "fit" your data. You are able to define an exit criteria (termination policy) for the experiment. The exit criteria will be model with a specific training score you want AutoML to find. It will stop once it hits the exit criteria defined. This component will then output the best model that has been generated at the end of the run for your dataset. Visit this link for more information on [exit criteria (termination policy)](/how-to-auto-train-image-models#early-termination-policies).
+AutoML runs a number of trials (specified in `max_trials`) in parallel (specified in `max_concurrent_trials`) that try different algorithms and parameters for your model. The service iterates through ML algorithms paired with hyperparameter selections, and each trial produces a model with a training score. You can choose the metric you want the model to optimize for; the better the score for the chosen metric, the better the model is considered to "fit" your data. You can also define exit criteria (a termination policy) for the experiment: a specific training score that you want AutoML to reach. AutoML stops once it meets the defined exit criteria, and this component then outputs the best model generated during the run for your dataset. Visit this link for more information on [exit criteria (termination policy)](/azure/machine-learning/how-to-auto-train-image-models#early-termination-policies).
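If you prefer to set the same trial budget from code rather than the designer, the following is a minimal, hedged sketch using the Azure Machine Learning Python SDK (`azure-ai-ml`); the subscription, workspace, compute, and MLTable paths are placeholders, and this snippet is illustrative rather than part of the component itself.

```python
from azure.ai.ml import MLClient, automl, Input
from azure.identity import DefaultAzureCredential

# Placeholder workspace details - replace with your own values.
ml_client = MLClient(
    DefaultAzureCredential(),
    subscription_id="<subscription-id>",
    resource_group_name="<resource-group>",
    workspace_name="<workspace-name>",
)

# Configure a multi-label image classification AutoML job.
image_job = automl.image_classification_multilabel(
    experiment_name="image-classification-multilabel-example",
    compute="<gpu-cluster-name>",
    training_data=Input(type="mltable", path="<path-to-training-mltable>"),
    validation_data=Input(type="mltable", path="<path-to-validation-mltable>"),
    target_column_name="label",
)

# max_trials and max_concurrent_trials bound how many configurations AutoML explores,
# and timeout_minutes caps how long the experiment may run.
image_job.set_limits(max_trials=10, max_concurrent_trials=2, timeout_minutes=60)

returned_job = ml_client.jobs.create_or_update(image_job)
```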
1. Add the **AutoML Image Classification Multi-label** component to your pipeline.
machine-learning Concept Vulnerability Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-vulnerability-management.md
It's a shared responsibility between you and Microsoft to ensure that your envir
### Compute instance
-Compute instances get the latest VM images at the time of provisioning. Microsoft releases new VM images on a monthly basis. Once a compute instance is deployed, it does not get actively updated. You could [query an instance's operating system version](how-to-create-manage-compute-instance.md#audit-and-observe-compute-instance-version-preview). To keep current with the latest software updates and security patches, you could:
+Compute instances get the latest VM images at the time of provisioning. Microsoft releases new VM images on a monthly basis. Once a compute instance is deployed, it does not get actively updated. You could [query an instance's operating system version](how-to-create-manage-compute-instance.md#audit-and-observe-compute-instance-version). To keep current with the latest software updates and security patches, you could:
1. Recreate a compute instance to get the latest OS image (recommended)
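For the recreate option, a minimal sketch with the Azure Machine Learning Python SDK (`azure-ai-ml`) might look like the following; the instance name and VM size are hypothetical placeholders, and anything stored only on the instance's OS disk is lost when you delete it.

```python
from azure.ai.ml import MLClient
from azure.ai.ml.entities import ComputeInstance
from azure.identity import DefaultAzureCredential

ml_client = MLClient(
    DefaultAzureCredential(),
    subscription_id="<subscription-id>",
    resource_group_name="<resource-group>",
    workspace_name="<workspace-name>",
)

# Delete the outdated instance, then recreate it under the same name so that it
# provisions from the latest monthly VM image.
ml_client.compute.begin_delete(name="ci-example").result()
ml_client.compute.begin_create_or_update(
    ComputeInstance(name="ci-example", size="Standard_DS3_v2")
).result()
```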
machine-learning Concept Workspace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-workspace.md
Ready to get started? [Create a workspace](#create-a-workspace).
:::image type="content" source="./media/concept-workspace/workspace.png" alt-text="Screenshot of the Azure Machine Learning workspace.":::
-## Working with workspaces
+## Tasks performed within a workspace
+
+For machine learning teams, the workspace is a place to organize their work. Below are some of the tasks you can start from a workspace:
-For machine learning teams, the workspace is a place to organize their work. To administrators, workspaces serve as containers for access management, cost management and data isolation. Below are some tips for working with workspaces:
++ [Create jobs](how-to-train-model.md) - Jobs are training runs you use to build your models. You can group jobs into [experiments](how-to-log-view-metrics.md) to compare metrics.
++ [Author pipelines](concept-ml-pipelines.md) - Pipelines are reusable workflows for training and retraining your model.
++ [Register data assets](concept-data.md) - Data assets aid in management of the data you use for model training and pipeline creation.
++ [Register models](how-to-log-mlflow-models.md) - Once you have a model you want to deploy, you create a registered model.
++ [Create online endpoints](concept-endpoints.md) - Use a registered model and a scoring script to create an online endpoint.
++ [Deploy a model](./v1/how-to-deploy-and-where.md) - Use the registered model and a scoring script to deploy a model.
+
+Besides grouping your machine learning results, workspaces also host resource configurations:
+
++ [Compute targets](concept-compute-target.md) are used to run your experiments.
++ [Datastores](how-to-datastore.md) define how you and others can connect to data sources when using data assets.
++ [Security settings](tutorial-create-secure-workspace.md) - Networking, identity and access control, and encryption settings.
+
+## Organizing workspaces
+
+For machine learning team leads and administrators, workspaces serve as containers for access management, cost management and data isolation. Below are some tips for organizing workspaces:
+ **Use [user roles](how-to-assign-roles.md)** for permission management in the workspace between users. For example a data scientist, a machine learning engineer or an admin.
-+ **Assign access to user groups**: By using Azure Active Directory user groups, you don't have to add individual users to each workspace and other resources the same group of users requires access to.
++ **Assign access to user groups**: By using Azure Active Directory user groups, you don't have to add individual users to each workspace, and to other resources the same group of users requires access to.
+ **Create a workspace per project**: While a workspace can be used for multiple projects, limiting it to one project per workspace allows for cost reporting accrued to a project level. It also allows you to manage configurations like datastores in the scope of each project.
+ **Share Azure resources**: Workspaces require you to create several [associated resources](#associated-resources). Share these resources between workspaces to save repetitive setup steps.
+ **Enable self-serve**: Pre-create and secure [associated resources](#associated-resources) as an IT admin, and use [user roles](how-to-assign-roles.md) to let data scientists create workspaces on their own.
+ **Share assets**: You can share assets between workspaces using [Azure Machine Learning registries (preview)](how-to-share-models-pipelines-across-workspaces-with-registries.md).
-## What content is stored in a workspace?
+## How is my content stored in a workspace?
-Your workspace keeps a history of all training runs, with logs, metrics, output, lineage metadata, and a snapshot of your scripts. As you perform tasks in Azure Machine Learning, artifacts are generated. Their metadata and data are stored in the workspace and on its [associated resources](#associated-resources).
+Your workspace keeps a history of all training runs, with logs, metrics, output, lineage metadata, and a snapshot of your scripts. As you perform tasks in Azure Machine Learning, artifacts are generated. Their metadata and data are stored in the workspace and on its associated resources.
-## Tasks performed within a workspace
+## Associated resources
-The following constructs you can find and manage within the workspace boundary.
+When you create a new workspace, you're required to bring other Azure resources to store your data. If not provided by you, these resources will automatically be created by Azure Machine Learning.
-+ [Compute targets](concept-compute-target.md) are used to run your experiments.
-+ Jobs are training runs you use to build your models. You can organize your jobs into Experiments.
-+ [Pipelines](concept-ml-pipelines.md) are reusable workflows for training and retraining your model.
-+ [Data assets](concept-data.md) aid in management of the data you use for model training and pipeline creation.
-+ Once you have a model you want to deploy, you create a registered model.
++ [Azure Storage account](https://azure.microsoft.com/services/storage/). Stores machine learning artifacts such as job logs. By default, this storage account is used when you upload data to the workspace. Jupyter notebooks that are used with your Azure Machine Learning compute instances are stored here as well.
+
+ > [!IMPORTANT]
+ > To use an existing Azure Storage account, it can't be of type BlobStorage or a premium account (Premium_LRS and Premium_GRS), and it can't have a hierarchical namespace (used with Azure Data Lake Storage Gen2). You can use premium storage or a hierarchical namespace as additional storage by [creating a datastore](how-to-datastore.md).
+ > Do not enable hierarchical namespace on the storage account after upgrading to general-purpose v2.
+ > If you bring an existing general-purpose v1 storage account, you may [upgrade this to general-purpose v2](../storage/common/storage-account-upgrade.md) after the workspace has been created.
+
++ [Azure Container Registry](https://azure.microsoft.com/services/container-registry/). Stores created docker containers, when you build custom environments via Azure Machine Learning. Scenarios that trigger creation of custom environments include AutoML when deploying models and data profiling.+
+ > [!NOTE]
+ > Workspaces can be created without Azure Container Registry as a dependency if you do not have a need to build custom docker containers. To read container images, Azure Machine Learning also works with external container registries. Azure Container Registry is automatically provisioned when you build custom docker images. Use Azure RBAC to prevent custom docker containers from being built.
+
+ > [!NOTE]
+ > If your subscription setting requires adding tags to resources under it, Azure Container Registry (ACR) created by Azure Machine Learning will fail, since we cannot set tags to ACR.
+++ [Azure Application Insights](https://azure.microsoft.com/services/application-insights/). Helps you monitor and collect diagnostic information from your inference endpoints.
+ :::moniker range="azureml-api-2"
+ For more information, see [Monitor online endpoints](how-to-monitor-online-endpoints.md).
+ :::moniker-end
+++ [Azure Key Vault](https://azure.microsoft.com/services/key-vault/). Stores secrets that are used by compute targets and other sensitive information that's needed by the workspace.+
+## Create a workspace
+
+There are multiple ways to create a workspace. To get started use one of the following options:
+
+* The [Azure Machine Learning studio](quickstart-create-resources.md) lets you quickly create a workspace with default settings.
+* Use [Azure portal](how-to-manage-workspace.md?tabs=azure-portal#create-a-workspace) for a point-and-click interface with more security options.
+* Use the [VS Code extension](how-to-manage-resources-vscode.md#create-a-workspace) if you work in Visual Studio Code.
+
+To automate workspace creation using your preferred security settings:
+* [Azure Resource Manager / Bicep templates](how-to-create-workspace-template.md) provide a declarative syntax to deploy Azure resources. An alternative option is to use [Terraform](how-to-manage-workspace-terraform.md). Also see [How to create a secure workspace by using a template](tutorial-create-secure-workspace-template.md).
:::moniker range="azureml-api-2"
-+ Use the registered model and a scoring script to create an [online endpoint](concept-endpoints.md).
+* Use the [Azure Machine Learning CLI](how-to-configure-cli.md) or [Azure Machine Learning SDK for Python](how-to-manage-workspace.md?tabs=python#create-a-workspace) for prototyping and as part of your [MLOps workflows](concept-model-management-and-deployment.md).
:::moniker-end
:::moniker range="azureml-api-1"
-+ Use the registered model and a scoring script to [deploy the model](./v1/how-to-deploy-and-where.md)
+* Use the [Azure Machine Learning CLI](./v1/reference-azure-machine-learning-cli.md) or [Azure Machine Learning SDK for Python](how-to-manage-workspace.md?tabs=python#create-a-workspace) for prototyping and as part of your [MLOps workflows](concept-model-management-and-deployment.md).
:::moniker-end
+* Use [REST APIs](how-to-manage-rest.md) directly in a scripting environment, for platform integration, or in MLOps workflows.
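As one concrete example of the SDK path, the following is a minimal sketch (assuming the `azure-ai-ml` package and placeholder subscription, resource group, and workspace names) that creates a workspace and lets dependent resources be created automatically:

```python
from azure.ai.ml import MLClient
from azure.ai.ml.entities import Workspace
from azure.identity import DefaultAzureCredential

# Connect at the subscription/resource group scope - placeholder values.
ml_client = MLClient(
    DefaultAzureCredential(),
    subscription_id="<subscription-id>",
    resource_group_name="<resource-group>",
)

# Create the workspace; associated resources (storage account, key vault, and so on)
# are created automatically when you don't supply existing ones.
ws = Workspace(name="my-workspace", location="eastus2", display_name="My workspace")
ml_client.workspaces.begin_create(ws).result()
```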
-## Tools for workspace interaction
+## Tools for workspace interaction and management
-You can interact with your workspace in the following ways:
+Once your workspace is set up, you can interact with it in the following ways:
+ On the web:
    + [Azure Machine Learning studio](https://ml.azure.com)
    + [Azure Machine Learning designer](concept-designer.md)
:::moniker range="azureml-api-2"
-+ In any Python environment with the [Azure Machine Learning SDK v2 for Python](https://aka.ms/sdk-v2-install).
++ In any Python environment with the [Azure Machine Learning SDK](https://aka.ms/sdk-v2-install).
+ On the command line using the Azure Machine Learning [CLI extension v2](how-to-configure-cli.md)
:::moniker-end
:::moniker range="azureml-api-1"
-+ In any Python environment with the [Azure Machine Learning SDK v1 for Python](/python/api/overview/azure/ml/)
++ In any Python environment with the [Azure Machine Learning SDK](/python/api/overview/azure/ml/)
+ On the command line using the Azure Machine Learning [CLI extension v1](./v1/reference-azure-machine-learning-cli.md)
:::moniker-end
+ [Azure Machine Learning VS Code Extension](how-to-manage-resources-vscode.md#workspaces)
-## Workspace management
-
-You can also perform the following workspace management tasks:
+The following workspace management tasks are available in each interface.
| Workspace management task | Portal | Studio | Python SDK | Azure CLI | VS Code |
|-|-|-|-|-|-|
You can also perform the following workspace management tasks:
> [!WARNING] > Moving your Azure Machine Learning workspace to a different subscription, or moving the owning subscription to a new tenant, is not supported. Doing so may cause errors.
-## Create a workspace
-
-There are multiple ways to create a workspace:
-
-* Use [Azure Machine Learning studio](quickstart-create-resources.md) to quickly create a workspace with default settings.
-* Use the [Azure portal](how-to-manage-workspace.md?tabs=azure-portal#create-a-workspace) for a point-and-click interface with more options.
-* Use the [Azure Machine Learning SDK for Python](how-to-manage-workspace.md?tabs=python#create-a-workspace) to create a workspace on the fly from Python scripts or Jupyter notebooks.
-* Use an [Azure Resource Manager template](how-to-create-workspace-template.md) or the [Azure Machine Learning CLI](how-to-configure-cli.md) when you need to automate or customize the creation with corporate security standards.
-* Use an [Azure Resource Manager template](how-to-create-workspace-template.md) or the [Azure Machine Learning CLI](./v1/reference-azure-machine-learning-cli.md) when you need to automate or customize the creation with corporate security standards.
-* If you work in Visual Studio Code, use the [VS Code extension](how-to-manage-resources-vscode.md#create-a-workspace).
-
-> [!NOTE]
-> The workspace name is case-insensitive.
- ## Sub resources
-These sub resources are the main resources that are made in the Azure Machine Learning workspace.
+When you create compute clusters and compute instances in Azure Machine Learning, sub resources are created.
-* VMs: provide computing power for your Azure Machine Learning workspace and are an integral part in deploying and training models.
+* VMs: provide computing power for compute instances and compute clusters, which you use to run jobs.
* Load Balancer: a network load balancer is created for each compute instance and compute cluster to manage traffic even while the compute instance/cluster is stopped. * Virtual Network: these help Azure resources communicate with one another, the internet, and other on-premises networks. * Bandwidth: encapsulates all outbound data transfers across regions.
-## Associated resources
-
-When you create a new workspace, you're required to bring other Azure resources to store your data:
-
-+ [Azure Storage account](https://azure.microsoft.com/services/storage/): Is used as the default datastore for the workspace. Jupyter notebooks that are used with your Azure Machine Learning compute instances are stored here as well.
-
- > [!IMPORTANT]
- > By default, the storage account is a general-purpose v1 account. You can [upgrade this to general-purpose v2](../storage/common/storage-account-upgrade.md) after the workspace has been created.
- > Do not enable hierarchical namespace on the storage account after upgrading to general-purpose v2.
-
- To use an existing Azure Storage account, it can't be of type BlobStorage or a premium account (Premium_LRS and Premium_GRS). It also can't have a hierarchical namespace (used with Azure Data Lake Storage Gen2). Neither premium storage nor hierarchical namespaces are supported with the _default_ storage account of the workspace. You can use premium storage or hierarchical namespace with _non-default_ storage accounts.
-
-+ [Azure Container Registry](https://azure.microsoft.com/services/container-registry/) (ACR): When you build custom docker containers via Azure Machine Learning. For example, in the following scenarios:
- * [Azure Machine Learning environments](concept-environments.md) when training and deploying models
- :::moniker range="azureml-api-2"
- * [AutoML](concept-automated-ml.md) when deploying
- :::moniker-end
- :::moniker range="azureml-api-1"
- * [AutoML](./v1/concept-automated-ml-v1.md) when deploying
- * [Data profiling](v1/how-to-connect-data-ui.md#data-preview-and-profile)
- :::moniker-end
-
- > [!NOTE]
- > Workspaces can be created without Azure Container Registry as a dependency if you do not have a need to build custom docker containers. To read container images, Azure Machine Learning also works with external container registries. Azure Container Registry is automatically provisioned when you build custom docker images. Use Azure RBAC to prevent customer docker containers from being build.
-
- > [!NOTE]
- > If your subscription setting requires adding tags to resources under it, Azure Container Registry (ACR) created by Azure Machine Learning will fail, since we cannot set tags to ACR.
-
-+ [Azure Application Insights](https://azure.microsoft.com/services/application-insights/): Stores monitoring and diagnostics information.
- :::moniker range="azureml-api-2"
- For more information, see [Monitor online endpoints](how-to-monitor-online-endpoints.md).
- :::moniker-end
-
- > [!NOTE]
- > You can delete the Application Insights instance after cluster creation if you want. Deleting it limits the information gathered from the workspace, and may make it more difficult to troubleshoot problems. __If you delete the Application Insights instance created by the workspace, you cannot re-create it without deleting and recreating the workspace__.
-
-+ [Azure Key Vault](https://azure.microsoft.com/services/key-vault/): Stores secrets that are used by compute targets and other sensitive information that's needed by the workspace.
-
-> [!NOTE]
-> You can instead use existing Azure resource instances when you create the workspace with the [Python SDK](how-to-manage-workspace.md?tabs=python#create-a-workspace) or the Azure Machine Learning CLI [using an ARM template](how-to-create-workspace-template.md).
- ## Next steps To learn more about planning a workspace for your organization's requirements, see [Organize and set up Azure Machine Learning](/azure/cloud-adoption-framework/ready/azure-best-practices/ai-machine-learning-resource-organization).
machine-learning Dsvm Tools Ingestion https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/data-science-virtual-machine/dsvm-tools-ingestion.md
description: Learn about the data ingestion tools and utilities that are preinst
keywords: data science tools, data science virtual machine, tools for data science, linux data science -+
machine-learning Dsvm Tutorial Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/data-science-virtual-machine/dsvm-tutorial-bicep.md
Last updated 05/02/2022 -+ # Quickstart: Create an Ubuntu Data Science Virtual Machine using Bicep
machine-learning Dsvm Tutorial Resource Manager https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/data-science-virtual-machine/dsvm-tutorial-resource-manager.md
Last updated 06/10/2020 -+ # Quickstart: Create an Ubuntu Data Science Virtual Machine using an ARM template
machine-learning How To Access Data Interactive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-access-data-interactive.md
df.head()
> 1. Find the file/folder you want to read into pandas, and select the ellipsis (**...**) next to it. Select **Copy URI** from the menu. You can select the **Datastore URI** to copy into your notebook/script. > :::image type="content" source="media/how-to-access-data-ci/datastore_uri_copy.png" alt-text="Screenshot highlighting the copy of the datastore URI.":::
-You can also instantiate an Azure Machine Learning filesystem and do filesystem-like commands like `ls`, `glob`, `exists`, `open`, etc. The `open()` method will return a file-like object, which can be passed to any other library that expects to work with python files, or used by your own code as you would a normal python file object. These file-like objects respect the use of `with` contexts, for example:
+You can also instantiate an Azure Machine Learning filesystem and do filesystem-like commands like `ls`, `glob`, `exists`, `open`.
+- The `ls()` method lists files in the corresponding directory. You can use `ls()`, `ls('.')`, or `ls('<folder_level_1>/<folder_level_2>')` to list files. Both '.' and '..' are supported in relative paths.
+- The `glob()` method supports '*' and '**' globbing.
+- The `exists()` method returns a Boolean value that indicates whether a specified file exists in the current root directory.
+- The `open()` method will return a file-like object, which can be passed to any other library that expects to work with python files, or used by your own code as you would a normal python file object. These file-like objects respect the use of `with` contexts, for example:
```python from azureml.fsspec import AzureMachineLearningFileSystem
-# instantiate file system using datastore URI
-fs = AzureMachineLearningFileSystem('azureml://subscriptions/<subid>/resourcegroups/<rgname>/workspaces/<workspace_name>/datastores/<datastore_name>/paths/<folder>')
+# instantiate file system using the following URI
+fs = AzureMachineLearningFileSystem('azureml://subscriptions/<subid>/resourcegroups/<rgname>/workspaces/<workspace_name>/datastores/datastorename')
+
+fs.ls() # list folders/files in datastore 'datastorename'
-# list files in the path
-fs.ls()
# output example:
-# /datastore_name/folder/file1.csv
-# /datastore_name/folder/file2.csv
+# folder1
+# folder2
+# file3.csv
# use an open context
-with fs.open('/datastore_name/folder/file1.csv') as f:
+with fs.open('./folder1/file1.csv') as f:
    # do some process
    process_file(f)
```
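A few more filesystem-style calls on the same `fs` instance might look like the following sketch; the folder and file names are placeholders.

```python
# check whether a file exists under the current root directory
fs.exists('./folder1/file1.csv')

# glob with '*' matches within one folder; '**' matches recursively across subfolders
csv_files = fs.glob('./folder1/*.csv')
all_csv_files = fs.glob('**/*.csv')
```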
+### Upload files via AzureMachineLearningFileSystem
+
+```python
+from azureml.fsspec import AzureMachineLearningFileSystem
+# instantiate file system using the following URI
+fs = AzureMachineLearningFileSystem('azureml://subscriptions/<subid>/resourcegroups/<rgname>/workspaces/<workspace_name>/datastores/datastorename')
+
+# you can specify recursive as False to upload a file
+fs.upload(lpath='data/upload_files/crime-spring.csv', rpath='data/fsspec', recursive=False, **{'overwrite': 'MERGE_WITH_OVERWRITE'})
+
+# you need to specify recursive as True to upload a folder
+fs.upload(lpath='data/upload_folder/', rpath='data/fsspec_folder', recursive=True, **{'overwrite': 'MERGE_WITH_OVERWRITE'})
+```
+`lpath` is the local path, and `rpath` is the remote path.
+If the folders you specify in `rpath` do not exist yet, we will create the folders for you.
+
+We support three modes for `overwrite`:
+- APPEND: if a file with the same name already exists in the destination path, the original file is kept
+- FAIL_ON_FILE_CONFLICT: if a file with the same name already exists in the destination path, an error is thrown
+- MERGE_WITH_OVERWRITE: if a file with the same name already exists in the destination path, it's overwritten with the new file
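For example, a hedged sketch of the FAIL_ON_FILE_CONFLICT mode, reusing the `fs` instance and paths from the upload snippet above; the exact exception type raised on a conflict isn't documented here, so a broad catch is used for illustration.

```python
# Fail instead of merging when the destination file already exists.
try:
    fs.upload(
        lpath='data/upload_files/crime-spring.csv',
        rpath='data/fsspec',
        recursive=False,
        **{'overwrite': 'FAIL_ON_FILE_CONFLICT'},
    )
except Exception as err:  # the concrete exception type depends on the fsspec implementation
    print(f'Upload aborted, destination already has this file: {err}')
```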
+
+### Download files via AzureMachineLearningFileSystem
+```python
+# you can specify recursive as False to download a file
+# for downloads, the overwrite behavior is determined by the local file system and is MERGE_WITH_OVERWRITE
+fs.download(rpath='data/fsspec/crime-spring.csv', lpath='data/download_files/', recursive=False)
+
+# you need to specify recursive as True to download a folder
+fs.download(rpath='data/fsspec_folder', lpath='data/download_folder/', recursive=True)
+```
+ ### Examples In this section we provide some examples of how to use Filesystem spec, for some common scenarios.
import pandas as pd
from azureml.fsspec import AzureMachineLearningFileSystem

# define the URI - update <> placeholders
-uri = 'azureml://subscriptions/<subid>/resourcegroups/<rgname>/workspaces/<workspace_name>/datastores/<datastore_name>/paths/<folder>/*.csv'
+uri = 'azureml://subscriptions/<subid>/resourcegroups/<rgname>/workspaces/<workspace_name>/datastores/<datastore_name>'
# create the filesystem
fs = AzureMachineLearningFileSystem(uri)

# append csv files in folder to a list
dflist = []
-for path in fs.ls():
+for path in fs.glob('/<folder>/*.csv'):
    with fs.open(path) as f:
        dflist.append(pd.read_csv(f))
import pandas as pd
from azureml.fsspec import AzureMachineLearningFileSystem

# define the URI - update <> placeholders
-uri = 'azureml://subscriptions/<subid>/resourcegroups/<rgname>/workspaces/<workspace_name>/datastores/<datastore_name>/paths/<folder>/*.parquet'
+uri = 'azureml://subscriptions/<subid>/resourcegroups/<rgname>/workspaces/<workspace_name>/datastores/<datastore_name>'
# create the filesystem
fs = AzureMachineLearningFileSystem(uri)
-# append csv files in folder to a list
+# append parquet files in folder to a list
dflist = []
-for path in fs.ls():
+for path in fs.glob('/<folder>/*.parquet'):
    with fs.open(path) as f:
        dflist.append(pd.read_parquet(f))
from PIL import Image
from azureml.fsspec import AzureMachineLearningFileSystem

# define the URI - update <> placeholders
-uri = 'azureml://subscriptions/<subid>/resourcegroups/<rgname>/workspaces/<workspace_name>/datastores/<datastore_name>/paths/<folder>/<image.jpeg>'
+uri = 'azureml://subscriptions/<subid>/resourcegroups/<rgname>/workspaces/<workspace_name>/datastores/<datastore_name>'
# create the filesystem
fs = AzureMachineLearningFileSystem(uri)
-with fs.open() as f:
+with fs.open('/<folder>/<image.jpeg>') as f:
img = Image.open(f)
- img.show()
+ img.show()
``` #### PyTorch custom dataset example
from azureml.fsspec import AzureMachineLearningFileSystem
from torch.utils.data import DataLoader # define the URI - update <> placeholders
-uri = 'azureml://subscriptions/<subid>/resourcegroups/<rgname>/workspaces/<workspace_name>/datastores/<datastore_name>/paths/<folder>/'
+uri = 'azureml://subscriptions/<subid>/resourcegroups/<rgname>/workspaces/<workspace_name>/datastores/<datastore_name>'
# create the filesystem fs = AzureMachineLearningFileSystem(uri)
fs = AzureMachineLearningFileSystem(uri)
# create the dataset training_data = CustomImageDataset( filesystem=fs,
- annotations_file='<datastore_name>/<path>/annotations.csv',
- img_dir='<datastore_name>/<path_to_images>/'
+ annotations_file='/annotations.csv',
+ img_dir='/<path_to_images>/'
) # Preparing your data for training with DataLoaders
machine-learning How To Assign Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-assign-roles.md
Last updated 08/01/2022-+
machine-learning How To Configure Network Isolation With V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-configure-network-isolation-with-v2.md
description: 'Explain network isolation changes with our new API platform on Azu
+
machine-learning How To Connection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-connection.md
Title: Use connections
+ Title: Use connections (preview)
description: Learn how to use connections to connect to External data sources for training with Azure Machine Learning.
# Customer intent: As an experienced data scientist with Python skills, I have data located in external sources outside of Azure. I need to make that data available to the Azure Machine Learning platform, to train my machine learning models.
-# Create connections
+# Create connections (preview)
[!INCLUDE [dev v2](../../includes/machine-learning-dev-v2.md)]
In this article, learn how to connect to data sources located outside of Azure,
- Amazon S3 - Azure SQL DB + ## Prerequisites - An Azure subscription. If you don't have an Azure subscription, create a free account before you begin. Try the [free or paid version of Azure Machine Learning](https://azure.microsoft.com/free/).
ml_client.connections.create_or_update(workspace_connection=wps_connection)
## Next steps -- [Import data assets](how-to-import-data-assets.md#import-data-assets)
+- [Import data assets](how-to-import-data-assets.md)
machine-learning How To Create Manage Compute Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-create-manage-compute-instance.md
To create a compute instance, you'll need permissions for the following actions:
* *Microsoft.MachineLearningServices/workspaces/computes/write* * *Microsoft.MachineLearningServices/workspaces/checkComputeNameAvailability/action*
-### Audit and observe compute instance version (preview)
+### Audit and observe compute instance version
Once a compute instance is deployed, it does not get automatically updated. Microsoft [releases](azure-machine-learning-ci-image-release-notes.md) new VM images on a monthly basis. To understand options for keeping recent with the latest version, see [vulnerability management](concept-vulnerability-management.md#compute-instance). To keep track of whether an instance's operating system version is current, you could query its version using the Studio UI. In your workspace in Azure Machine Learning studio, select Compute, then select compute instance on the top. Select a compute instance's compute name to see its properties including the current operating system. Enable 'audit and observe compute instance os version' under the previews management panel to see these preview properties.
-Administrators can use [Azure Policy](./../governance/policy/overview.md) definitions to audit instances that are running on outdated operating system versions across workspaces and subscriptions. The following is a sample policy:
+Administrators can use [Azure Policy](policy-reference.md) definitions to audit instances that are running on outdated operating system versions across workspaces and subscriptions. The following is a sample policy:
```json {
machine-learning How To Create Workspace Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-create-workspace-template.md
-+
machine-learning How To Devops Machine Learning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-devops-machine-learning.md
Last updated 11/11/2022 -+ # Use Azure Pipelines with Azure Machine Learning
steps:
## Clean up resources
-If you're not going to continue to use your pipeline, delete your Azure DevOps project. In Azure portal, delete your resource group and Azure Machine Learning instance.
+If you're not going to continue to use your pipeline, delete your Azure DevOps project. In Azure portal, delete your resource group and Azure Machine Learning instance.
machine-learning How To Import Data Assets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-import-data-assets.md
Title: Import Data
+ Title: Import Data (preview)
description: Learn how to import data from external sources on to Azure Machine Learning platform
Previously updated : 04/11/2023 Last updated : 04/12/2023
-# Import data assets
+# Import data assets (preview)
[!INCLUDE [dev v2](../../includes/machine-learning-dev-v2.md)] > [!div class="op_single_selector" title1="Select the version of Azure Machine Learning SDK you are using:"]
The caching feature involves upfront compute and storage costs. However, it pays
You can now import data from Snowflake, Amazon S3 and Azure SQL. + ## Prerequisites To create and work with data assets, you need:
machine-learning How To Manage Files https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-manage-files.md
To create a new file in a different folder:
> [!IMPORTANT] > Content in notebooks and scripts can potentially read data from your sessions and access data without your organization in Azure. Only load files from trusted sources. For more information, see [Secure code best practices](concept-secure-code-best-practice.md#azure-machine-learning-studio-notebooks).
+## Customize your file editing experience
+
+In the Azure Machine Learning studio file editor, you can customize your editing experience with Command Palette and relevant keyboard shortcuts. When you invoke the Command Palette, you will see a selectable list of many options to customize your editing experience.
++
+To invoke the Command Palette on a file, either use **F1** or right-select anywhere in the editing space and select **Command Palette** from the menu.
+
+For example, choose "Indent using spaces" if you want your editor to auto-indent with spaces instead of tabs. Take a few moments to explore the different options you have in the Command Palette.
++ ## Manage files with Git [Use a compute instance terminal](how-to-access-terminal.md#git) to clone and manage Git repositories. To integrate Git with your Azure Machine Learning workspace, see [Git integration for Azure Machine Learning](concept-train-model-git-integration.md).
machine-learning How To Manage Workspace Terraform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-manage-workspace-terraform.md
description: Learn how to manage Azure Machine Learning workspaces using Terrafo
+
There are several options to connect to your private link endpoint workspace. To
* To learn more about network configuration options, see [Secure Azure Machine Learning workspace resources using virtual networks (VNets)](./how-to-network-security-overview.md). * For alternative Azure Resource Manager template-based deployments, see [Deploy resources with Resource Manager templates and Resource Manager REST API](../azure-resource-manager/templates/deploy-rest.md).
-* For information on how to keep your Azure Machine Learning up to date with the latest security updates, see [Vulnerability management](concept-vulnerability-management.md).
+* For information on how to keep your Azure Machine Learning up to date with the latest security updates, see [Vulnerability management](concept-vulnerability-management.md).
machine-learning How To Train With Ui https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-train-with-ui.md
Title: Create a Training Job with the job creation UI
-description: Learn how to use the job creation UI in Azure Machine Learning studio to create a training job.
+description: Learn how to submit a training job in Azure Machine Learning studio
--++ Last updated 11/04/2022
-# Create a training job with the job creation UI (preview)
+# Submit a training job in Studio (preview)
-There are many ways to create a training job with Azure Machine Learning. You can use the CLI (see [Train models (create jobs)](how-to-train-model.md)), the REST API (see [Train models with REST (preview)](how-to-train-with-rest.md)), or you can use the UI to directly create a training job. In this article, you'll learn how to use your own data and code to train a machine learning model with the job creation UI in Azure Machine Learning studio.
+There are many ways to create a training job with Azure Machine Learning. You can use the CLI (see [Train models (create jobs)](how-to-train-model.md)), the REST API (see [Train models with REST (preview)](how-to-train-with-rest.md)), or you can use the UI to directly create a training job. In this article, you'll learn how to use your own data and code to train a machine learning model with a guided experience for submitting training jobs in Azure Machine Learning studio.
[!INCLUDE [machine-learning-preview-generic-disclaimer](../../includes/machine-learning-preview-generic-disclaimer.md)]
There are many ways to create a training job with Azure Machine Learning. You ca
1. Sign in to [Azure Machine Learning studio](https://ml.azure.com).
1. Select your subscription and workspace.
+
+* Navigate to Azure Machine Learning studio and enable the feature by opening the preview panel.
+[![Azure Machine Learning studio preview panel allowing users to enable preview features.](media/how-to-train-with-ui/preview-panel.png)](media/how-to-train-with-ui/preview-panel.png)
+ * You may enter the job creation UI from the homepage. Click **Create new** and select **Job**. [![Azure Machine Learning studio homepage](media/how-to-train-with-ui/home-entry.png)](media/how-to-train-with-ui/home-entry.png)
-* Or, you may enter the job creation from the left pane. Click **+New** and select **Job**.
-[![Azure Machine Learning studio left navigation](media/how-to-train-with-ui/left-nav-entry.png)](media/how-to-train-with-ui/left-nav-entry.png)
+In this wizard, you can select your method of training, complete the rest of the submission wizard based on your selection, and submit the training job. Below, we walk through the wizard for running a custom script (command job).
+
+[![Azure Machine Learning studio wizard landing page for users to choose method of training.](media/how-to-train-with-ui/training-method.png)](media/how-to-train-with-ui/training-method.png)
+
+## Configure basic settings
+
+The first step is configuring basic information about your training job. You can proceed to the next step if you're satisfied with the defaults, or make changes to your preferred settings.
+
+[![Azure Machine Learning studio job submission wizard for users to configure their basic settings.](media/how-to-train-with-ui/basic-settings.png)](media/how-to-train-with-ui/basic-settings.png)
+
+These are the fields available:
+
+|Field| Description|
+|| |
+|Job name| The job name field is used to uniquely identify your job. It's also used as the display name for your job.|
+|Experiment name| This helps organize the job in Azure Machine Learning studio. Each job's run record will be organized under the corresponding experiment in the studio's "Experiment" tab. By default, Azure will put the job in the **Default** experiment.|
+|Description| Add some text describing your job, if desired.|
+|Timeout| Specify the number of hours the entire training job is allowed to run. Once this limit is reached, the system cancels the job, including any child jobs.|
+|Tags| Add tags to your job to help with organization.|
+
+## Training script
+
+The next step is to upload your source code, configure any inputs or outputs required to execute the training job, and specify the command to execute your training script.
+
+This can be a code file or a folder from your local machine or workspace's default blob storage. Azure will show the files to be uploaded after you make the selection.
+
+|Field| Description|
+|| |
+|Code| The training script: a file or a folder from your local machine or your workspace's default blob storage. Studio will show the files to be uploaded after you make the selection.|
+|Inputs| Specify as many inputs as needed of the following types (data, integer, number, boolean, string). |
+|Command| The command to execute. Command-line arguments can be explicitly written into the command or inferred from other sections, specifically **inputs** using curly braces notation, as discussed in the next section.|
++
+### Code
+The command is run from the root directory of the uploaded code folder. After you select your code file or folder, you can see the files to be uploaded. Copy the relative path to the code containing your entry point and paste it into the box labeled **Enter the command to start the job**.
+
+If the code is in the root directory, you can directly refer to it in the command. For instance, `python main.py`.
+
+If the code isn't in the root directory, you should use the relative path. For example, the structure of the [word language model](https://github.com/Azure/azureml-examples/tree/main/cli/jobs/single-step/pytorch/word-language-model) is:
+
+```tree
+.
+├── job.yml
+├── data
+└── src
+    └── main.py
+```
+Here, the source code is in the `src` subdirectory. The command would be `python ./src/main.py` (plus other command-line arguments).
+
+[![Image of referencing your code in the command in the training job submission wizard.](media/how-to-train-with-ui/training-script-code.png)](media/how-to-train-with-ui/training-script-code.png)
+
+### Inputs
+When you use an input in the command, you need to specify the input name. To indicate an input variable, use the form `${{inputs.input_name}}`. For instance, `${{inputs.wiki}}`. You can then refer to it in the command, for instance, `--data ${{inputs.wiki}}`.
-These options will all take you to the job creation panel, which has a wizard for configuring and creating a training job.
+[![Image of referencing your inputs in the command in the training job submission wizard.](media/how-to-train-with-ui/training-script-inputs.png)](media/how-to-train-with-ui/training-script-inputs.png)
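Putting the two together, a full command that uses the relative code path from the example above along with the `wiki` input might look like this (any additional arguments are placeholders):

```
python ./src/main.py --data ${{inputs.wiki}}
```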
## Select compute resources
-The first step in the job creation UI is to select the compute target on which you'd like your job to run. The job creation UI supports several compute types:
+The next step is to select the compute target on which you'd like your job to run. The job creation UI supports several compute types:
| Compute Type | Introduction | | | |
The first step in the job creation UI is to select the compute target on which y
1. Select an existing compute resource. The dropdown shows the node information and SKU type to help your choice. 1. For a compute cluster or a Kubernetes cluster, you may also specify how many nodes you want for the job in **Instance count**. The default number of instances is 1. 1. When you're satisfied with your choices, choose **Next**.
- [![Select a compute cluster](media/how-to-train-with-ui/compute-cluster.png)](media/how-to-train-with-ui/compute-cluster.png)
-
-If you're using Azure Machine Learning for the first time, you'll see an empty list and a link to create a new compute.
-
- [![Create a new compute instance](media/how-to-train-with-ui/create-new-compute.png)](media/how-to-train-with-ui/create-new-compute.png)
+ [![Select a compute cluster dropdown selector image.](media/how-to-train-with-ui/compute.png)](media/how-to-train-with-ui/compute.png)
-For more information on creating the various types, see:
+If you're using Azure Machine Learning for the first time, you'll see an empty list and a link to create a new compute. For more information on creating the various types, see:
| Compute Type | How to | | | |
After selecting a compute target, you need to specify the runtime environment fo
Curated environments are Azure-defined collections of Python packages used in common ML workloads. Curated environments are available in your workspace by default. These environments are backed by cached Docker images, which reduce the job preparation overhead. The cards displayed in the "Curated environments" page show details of each environment. To learn more, see [curated environments in Azure Machine Learning](resource-curated-environments.md).
- [![Curated environments](media/how-to-train-with-ui/curated-env.png)](media/how-to-train-with-ui/curated-env.png)
+ [![Image of curated environments selector page showing various environment cards.](media/how-to-train-with-ui/curated-environments.png)](media/how-to-train-with-ui/curated-environments.png)
### Custom environments
Custom environments are environments you've specified yourself. You can specify
### Container registry image
-If you don't want to use the Azure Machine Learning curated environments or specify your own custom environment, you can use a docker image from a public container registry such as [Docker Hub](https://hub.docker.com/). If the image is in a private container, toggle **This is a private container registry**. For private registries, you will need to enter a valid username and password so Azure can get the image.
-[![Container registry image](media/how-to-train-with-ui/container-registry-image.png)](media/how-to-train-with-ui/container-registry-image.png)
-
-## Configure your job
-
-After specifying the environment, you can configure your job with more settings.
-
-|Field| Description|
-|| |
-|Job name| The job name field is used to uniquely identify your job. It's also used as the display name for your job. Setting this field is optional; Azure will generate a GUID name for the job if you don't enter anything. Note: the job name must be unique.|
-|Experiment name| This helps organize the job in Azure Machine Learning studio. Each job's run record will be organized under the corresponding experiment in the studio's "Experiment" tab. By default, Azure will put the job in the **Default** experiment.|
-|Code| You can upload a code file or a folder from your machine, or upload a code file from the workspace's default blob storage. Azure will show the files to be uploaded after you make the selection. |
-|Command| The command to execute. Command-line arguments can be explicitly written into the command or inferred from other sections, specifically **inputs** using curly braces notation, as discussed in the next section.|
-|Inputs| Specify the input binding. We support three types of inputs: 1) Azure Machine Learning registered dataset; 2) workspace default blob storage; 3) upload local file. You can add multiple inputs. |
-|Environment variables| Setting environment variables allows you to provide dynamic configuration of the job. You can add the variable and value here.|
-|Tags| Add tags to your job to help with organization.|
-
-### Specify code and inputs in the command box
+If you don't want to use the Azure Machine Learning curated environments or specify your own custom environment, you can use a Docker image from a public container registry such as [Docker Hub](https://hub.docker.com/).
-#### Code
-
-The command is run from the root directory of the uploaded code folder. After you select your code file or folder, you can see the files to be uploaded. Copy the relative path to the code containing your entry point and paste it into the box labeled **Enter the command to start the job**.
-
-If the code is in the root directory, you can directly refer to it in the command. For instance, `python main.py`.
-
-If the code isn't in the root directory, you should use the relative path. For example, the structure of the [word language model](https://github.com/Azure/azureml-examples/tree/main/cli/jobs/single-step/pytorch/word-language-model) is:
-
-```tree
-.
-├── job.yml
-├── data
-└── src
-    └── main.py
-```
-Here, the source code is in the `src` subdirectory. The command would be `python ./src/main.py` (plus other command-line arguments).
-
-[![Refer code in the command](media/how-to-train-with-ui/code-command.png)](media/how-to-train-with-ui/code-command.png)
-
-#### Inputs
-
-When you use an input in the command, you need to specify the input name. To indicate an input variable, use the form `${{inputs.input_name}}`. For instance, `${{inputs.wiki}}`. You can then refer to it in the command, for instance, `--data ${{inputs.wiki}}`.
-
-[![Refer input name in the command](media/how-to-train-with-ui/input-command-name.png)](media/how-to-train-with-ui/input-command-name.png)
## Review and Create Once you've configured your job, choose **Next** to go to the **Review** page. To modify a setting, choose the pencil icon and make the change.
+ [![Azure Machine Learning studio job submission review pane image to validate selections before submission.](media/how-to-train-with-ui/review.png)](media/how-to-train-with-ui/review.png)
-You may choose **view the YAML spec** to review and download the yaml file generated by this job configuration. This job yaml file can be used to submit the job from the CLI (v2). (See [Train models (create jobs) with the CLI (v2)](how-to-train-cli.md).)
-[![view yaml spec](media/how-to-train-with-ui/view-yaml.png)](media/how-to-train-with-ui/view-yaml.png)
-[![Yaml spec](media/how-to-train-with-ui/yaml-spec.png)](media/how-to-train-with-ui/yaml-spec.png)
-
-To launch the job, choose **Create**. Once the job is created, Azure will show you the job details page, where you can monitor and manage your training job.
+To launch the job, choose **Submit training job**. Once the job is created, Azure will show you the job details page, where you can monitor and manage your training job.
[!INCLUDE [Email Notification Include](../../includes/machine-learning-email-notifications.md)]
machine-learning How To Use Event Grid https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-event-grid.md
You can either install the latest [Azure CLI](/cli/azure/install-azure-cli), or
To install the Event Grid extension, use the following command from the CLI: ```azurecli-interactive
-az add extension --name eventgrid
+az extension add --name eventgrid
``` The following example demonstrates how to select an Azure subscription and create a new event subscription for Azure Machine Learning:
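As an illustrative sketch only (the subscription ID, resource group, workspace name, event subscription name, and endpoint are placeholders, not taken from the original article), the steps might look like this:

```azurecli-interactive
# Select the Azure subscription that contains the Azure Machine Learning workspace
az account set --subscription "<subscription-id>"

# Create an event subscription scoped to the workspace; the endpoint receives the events
az eventgrid event-subscription create \
  --name "<event-subscription-name>" \
  --source-resource-id "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.MachineLearningServices/workspaces/<workspace-name>" \
  --endpoint "<webhook-endpoint-url>"
```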
machine-learning Reference Kubernetes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-kubernetes.md
+ Last updated 06/06/2022
machine-learning How To Convert Ml Experiment To Production https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-convert-ml-experiment-to-production.md
The `train_aml.py` file found in the `diabetes_regression/training` directory in
### Create Python file for the Diabetes Ridge Regression Scoring notebook
-Covert your notebook to an executable script by running the following statement in a command prompt that which uses the `nbconvert` package and the path of `experimentation/Diabetes Ridge Regression Scoring.ipynb`:
+Convert your notebook to an executable script by running the following statement in a command prompt, which uses the `nbconvert` package and the path of `experimentation/Diabetes Ridge Regression Scoring.ipynb`:
``` jupyter nbconvert "Diabetes Ridge Regression Scoring.ipynb" --to script --output score
machine-learning How To Create Register Datasets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-create-register-datasets.md
-+
managed-grafana Quickstart Managed Grafana Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-grafana/quickstart-managed-grafana-cli.md
Last updated 12/13/2022 ms.devlang: azurecli-+ # Quickstart: Create an Azure Managed Grafana instance using the Azure CLI
managed-instance-apache-cassandra Manage Resources Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-instance-apache-cassandra/manage-resources-cli.md
Last updated 11/02/2021 -+ keywords: azure resource manager cli
mariadb Quickstart Create Mariadb Server Database Arm Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/quickstart-create-mariadb-server-database-arm-template.md
-+ Last updated 06/24/2022
mariadb Quickstart Create Mariadb Server Database Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/quickstart-create-mariadb-server-database-bicep.md
-+ Last updated 06/24/2022
migrate Add Server Credentials https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/add-server-credentials.md
ms. Previously updated : 11/13/2022 Last updated : 04/13/2023
The table below lists the permissions required on the server credentials provide
Feature | Windows credentials | Linux credentials | | **Software inventory** | Guest user account | Regular/normal user account (non-sudo access permissions)
-**Discovery of SQL Server instances and databases** | User account that is member of the sysadmin server role. | _Not supported currently_
+**Discovery of SQL Server instances and databases** | User account that is a member of the sysadmin server role or has [these permissions](migrate-support-matrix-vmware.md#configure-the-custom-login-for-sql-server-discovery) for each SQL Server instance.| _Not supported currently_
**Discovery of ASP.NET web apps** | Domain or non-domain (local) account with administrative permissions | _Not supported currently_ **Agentless dependency analysis** | Domain or non-domain (local) account with administrative permissions | Sudo user account with permissions to execute ls and netstat commands. If you are providing a sudo user account, ensure that you have enabled **NOPASSWD** for the account to run the required commands without prompting for a password every time the sudo command is invoked. <br /><br /> Alternatively, you can create a user account that has the CAP_DAC_READ_SEARCH and CAP_SYS_PTRACE permissions on /bin/netstat and /bin/ls files, set using the following commands:<br /><code>sudo setcap CAP_DAC_READ_SEARCH,CAP_SYS_PTRACE=ep /bin/ls<br /> sudo setcap CAP_DAC_READ_SEARCH,CAP_SYS_PTRACE=ep /bin/netstat</code>
Feature | Windows credentials | Linux credentials
## Next steps
-Review the tutorials for discovery of servers running in your [VMware environment](tutorial-discover-vmware.md) or [Hyper-V environment](tutorial-discover-hyper-v.md) or for [discovery of physical servers](tutorial-discover-physical.md)
+Review the tutorials for discovery of servers running in your [VMware environment](tutorial-discover-vmware.md) or [Hyper-V environment](tutorial-discover-hyper-v.md) or for [discovery of physical servers](tutorial-discover-physical.md).
migrate How To Discover Sql Existing Project https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/how-to-discover-sql-existing-project.md
ms. Previously updated : 08/19/2022 Last updated : 04/13/2023
This discovery process is agentless; that is, nothing is installed on the target
- Created an [Azure Migrate project](./create-manage-projects.md) before the announcement of SQL and web apps assessment feature for your region - Added the [Azure Migrate: Discovery and assessment](./how-to-assess.md) tool to a project - Review [app-discovery support and requirements](./migrate-support-matrix-vmware.md#vmware-requirements).-- In case you're discovering assets on VMware environment then, Make sure servers where you're running app-discovery have PowerShell version 2.0 or later installed, and VMware Tools (later than 10.2.0) is installed.
+- In case you're discovering assets on a VMware environment, make sure the servers where you're running app discovery have PowerShell version 2.0 or later installed, and VMware Tools (later than 10.2.0) installed.
- Check the [requirements](./migrate-appliance.md) for deploying the Azure Migrate appliance. - Verify that you have the [required roles](./create-manage-projects.md#verify-permissions) in the subscription to create resources. - Ensure that your appliance has access to the internet > [!Note]
-> Even though the processes in this document are covered for VMware, the processes are similar for Microsoft Hyper-V and Physical environment.
-> Discovery and assessment for SQL Server instances and databases is available across the Microsoft Hyper-V and Physical environment also.
+> Though the procedure described in this article is for VMware, the processes are similar for Microsoft Hyper-V and Physical environments.
+> Discovery and assessment for SQL Server instances and databases is available across the Microsoft Hyper-V and Physical environments.
## Enable discovery of web apps and SQL Server instances and databases
This discovery process is agentless; that is, nothing is installed on the target
- Validate that the services running on the appliance are updated to the latest versions. To do so, launch the Appliance configuration manager from your appliance server and select view appliance services from the Setup prerequisites panel. - Appliance and its components are automatically updated :::image type="content" source="./media/how-to-discover-sql-existing-project/appliance-services-version.png" alt-text="Check the appliance version":::
- - In the manage credentials and discovery sources panel of the Appliance configuration manager, add Domain or SQL Server Authentication credentials that have Sysadmin access on the SQL Server instance and databases to be discovered.
+ - In the manage credentials and discovery sources panel of the Appliance configuration manager, add Domain or SQL Server Authentication credentials that have Sysadmin access on the SQL Server instance and databases to be discovered or have [these permissions](migrate-support-matrix-vmware.md#configure-the-custom-login-for-sql-server-discovery) for each SQL Server instance.
- Web apps discovery works with both domain and non-domain Windows OS credentials as long as the account used has local admin privileges on servers. You can leverage the automatic credential-mapping feature of the appliance, as highlighted [here](./tutorial-discover-vmware.md#start-continuous-discovery).
This discovery process is agentless; that is, nothing is installed on the target
## Next steps -- Learn how to create an [Azure SQL assessment](./how-to-create-azure-sql-assessment.md)-- Learn more about [Azure SQL assessments](./concepts-azure-sql-assessment-calculation.md)-- Learn how to create an [Azure App Service assessment](./how-to-create-azure-app-service-assessment.md)-- Learn more about [Azure App Service assessments](./concepts-azure-webapps-assessment-calculation.md)
+- Learn how to create an [Azure SQL assessment](./how-to-create-azure-sql-assessment.md).
+- Learn more about [Azure SQL assessments](./concepts-azure-sql-assessment-calculation.md).
+- Learn how to create an [Azure App Service assessment](./how-to-create-azure-app-service-assessment.md).
+- Learn more about [Azure App Service assessments](./concepts-azure-webapps-assessment-calculation.md).
migrate Migrate Support Matrix Hyper V https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/migrate-support-matrix-hyper-v.md
ms. Previously updated : 03/08/2023 Last updated : 04/13/2023 ms.cutom: engagement-fy23
Support | Details
**Windows servers** | Windows Server 2008 and later are supported. **Linux servers** | Currently not supported. **Authentication mechanism** | Both Windows and SQL Server authentication are supported. You can provide credentials of both authentication types in the appliance configuration manager.
-**SQL Server access** | Azure Migrate requires a Windows user account that is a member of the sysadmin server role.
+**SQL Server access** | To discover SQL Server instances and databases, the Windows or SQL Server account must be a member of the sysadmin server role or have [these permissions](#configure-the-custom-login-for-sql-server-discovery) for each SQL Server instance.
**SQL Server versions** | SQL Server 2008 and later are supported. **SQL Server editions** | Enterprise, Standard, Developer, and Express editions are supported. **Supported SQL configuration** | Discovery of standalone, highly available, and disaster protected SQL deployments is supported. Discovery of HADR SQL deployments powered by Always On Failover Cluster Instances and Always On Availability Groups is also supported.
Support | Details
> > However, you can modify the connection settings by selecting **Edit SQL Server connection properties** on the appliance. [Learn more](/sql/database-engine/configure-windows/enable-encrypted-connections-to-the-database-engine) to understand what to choose.
+### Configure the custom login for SQL Server discovery
+
+The following are sample scripts for creating a login and provisioning it with the necessary permissions.
+
+#### Windows Authentication
+
+ ```sql
+ -- Create a login to run the assessment
+ use master;
+ -- If a SID needs to be specified, add here
+ DECLARE @SID NVARCHAR(MAX) = N'';
+ CREATE LOGIN [MYDOMAIN\MYACCOUNT] FROM WINDOWS;
+ SELECT @SID = N'0x'+CONVERT(NVARCHAR, sid, 2) FROM sys.syslogins where name = 'MYDOMAIN\MYACCOUNT'
+ IF (ISNULL(@SID,'') != '')
+ PRINT N'Created login [MYDOMAIN\MYACCOUNT] with SID = ' + @SID
+ ELSE
+ PRINT N'Login creation failed'
+ GO
+
+ -- Create user in every database other than tempdb and model and provide minimal read-only permissions.
+ use master;
+ EXECUTE sp_MSforeachdb 'USE [?]; IF (''?'' NOT IN (''tempdb'',''model'')) BEGIN TRY CREATE USER [MYDOMAIN\MYACCOUNT] FOR LOGIN [MYDOMAIN\MYACCOUNT] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH'
+ EXECUTE sp_MSforeachdb 'USE [?]; IF (''?'' NOT IN (''tempdb'',''model'')) BEGIN TRY GRANT SELECT ON sys.sql_expression_dependencies TO [MYDOMAIN\MYACCOUNT] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH'
+ EXECUTE sp_MSforeachdb 'USE [?]; IF (''?'' NOT IN (''tempdb'',''model'')) BEGIN TRY GRANT VIEW DATABASE STATE TO [MYDOMAIN\MYACCOUNT] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH'
+ GO
+
+ -- Provide server level read-only permissions
+ use master;
+ BEGIN TRY GRANT SELECT ON sys.sql_expression_dependencies TO [MYDOMAIN\MYACCOUNT] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;
+ BEGIN TRY GRANT EXECUTE ON OBJECT::sys.xp_regenumkeys TO [MYDOMAIN\MYACCOUNT] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;
+ BEGIN TRY GRANT VIEW DATABASE STATE TO [MYDOMAIN\MYACCOUNT] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;
+ BEGIN TRY GRANT VIEW SERVER STATE TO [MYDOMAIN\MYACCOUNT] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;
+ BEGIN TRY GRANT VIEW ANY DEFINITION TO [MYDOMAIN\MYACCOUNT] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;
+ GO
+
+ -- Required from SQL 2014 onwards for database connectivity.
+ use master;
+ BEGIN TRY GRANT CONNECT ANY DATABASE TO [MYDOMAIN\MYACCOUNT] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;
+ GO
+
+ -- Provide msdb specific permissions
+ use msdb;
+ BEGIN TRY GRANT EXECUTE ON [msdb].[dbo].[agent_datetime] TO [MYDOMAIN\MYACCOUNT] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;
+ BEGIN TRY GRANT SELECT ON [msdb].[dbo].[sysjobsteps] TO [MYDOMAIN\MYACCOUNT] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;
+ BEGIN TRY GRANT SELECT ON [msdb].[dbo].[syssubsystems] TO [MYDOMAIN\MYACCOUNT] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;
+ BEGIN TRY GRANT SELECT ON [msdb].[dbo].[sysjobhistory] TO [MYDOMAIN\MYACCOUNT] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;
+ BEGIN TRY GRANT SELECT ON [msdb].[dbo].[syscategories] TO [MYDOMAIN\MYACCOUNT] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;
+ BEGIN TRY GRANT SELECT ON [msdb].[dbo].[sysjobs] TO [MYDOMAIN\MYACCOUNT] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;
+ BEGIN TRY GRANT SELECT ON [msdb].[dbo].[sysmaintplan_plans] TO [MYDOMAIN\MYACCOUNT] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;
+ BEGIN TRY GRANT SELECT ON [msdb].[dbo].[syscollector_collection_sets] TO [MYDOMAIN\MYACCOUNT] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;
+ BEGIN TRY GRANT SELECT ON [msdb].[dbo].[sysmail_profile] TO [MYDOMAIN\MYACCOUNT] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;
+ BEGIN TRY GRANT SELECT ON [msdb].[dbo].[sysmail_profileaccount] TO [MYDOMAIN\MYACCOUNT] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;
+ BEGIN TRY GRANT SELECT ON [msdb].[dbo].[sysmail_account] TO [MYDOMAIN\MYACCOUNT] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;
+ GO
+
+ -- Clean up
+ --use master;
+ -- EXECUTE sp_MSforeachdb 'USE [?]; BEGIN TRY DROP USER [MYDOMAIN\MYACCOUNT] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;'
+ -- BEGIN TRY DROP LOGIN [MYDOMAIN\MYACCOUNT] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;
+ --GO
+ ```
+
+#### SQL Server Authentication
+
+ ```sql
+ -- Create a login to run the assessment
+ use master;
+ -- If a SID needs to be specified, add here
+ DECLARE @SID NVARCHAR(MAX) = N'';
+ IF (@SID = N'')
+ BEGIN
+ CREATE LOGIN [evaluator]
+ WITH PASSWORD = '<provide a strong password>'
+ END
+ ELSE
+ BEGIN
+ CREATE LOGIN [evaluator]
+ WITH PASSWORD = '<provide a strong password>'
+ , SID = @SID
+ END
+ SELECT @SID = N'0x'+CONVERT(NVARCHAR, sid, 2) FROM sys.syslogins where name = 'evaluator'
+ IF (ISNULL(@SID,'') != '')
+ PRINT N'Created login [evaluator] with SID = '+@SID
+ ELSE
+ PRINT N'Login creation failed'
+ GO
+
+ -- Create user in every database other than tempdb and model and provide minimal read-only permissions.
+ use master;
+ EXECUTE sp_MSforeachdb 'USE [?]; IF (''?'' NOT IN (''tempdb'',''model'')) BEGIN TRY CREATE USER [evaluator] FOR LOGIN [evaluator]END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH'
+ EXECUTE sp_MSforeachdb 'USE [?]; IF (''?'' NOT IN (''tempdb'',''model'')) BEGIN TRY GRANT SELECT ON sys.sql_expression_dependencies TO [evaluator]END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH'
+ EXECUTE sp_MSforeachdb 'USE [?]; IF (''?'' NOT IN (''tempdb'',''model'')) BEGIN TRY GRANT VIEW DATABASE STATE TO [evaluator]END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH'
+ GO
+
+ -- Provide server level read-only permissions
+ use master;
+ BEGIN TRY GRANT SELECT ON sys.sql_expression_dependencies TO [evaluator] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;
+ BEGIN TRY GRANT EXECUTE ON OBJECT::sys.xp_regenumkeys TO [evaluator] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;
+ BEGIN TRY GRANT VIEW DATABASE STATE TO [evaluator] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;
+ BEGIN TRY GRANT VIEW SERVER STATE TO [evaluator] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;
+ BEGIN TRY GRANT VIEW ANY DEFINITION TO [evaluator] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;
+ GO
+
+ -- Required from SQL 2014 onwards for database connectivity.
+ use master;
+ BEGIN TRY GRANT CONNECT ANY DATABASE TO [evaluator] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;
+ GO
+
+ -- Provide msdb specific permissions
+ use msdb;
+ BEGIN TRY GRANT EXECUTE ON [msdb].[dbo].[agent_datetime] TO [evaluator] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;
+ BEGIN TRY GRANT SELECT ON [msdb].[dbo].[sysjobsteps] TO [evaluator] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;
+ BEGIN TRY GRANT SELECT ON [msdb].[dbo].[syssubsystems] TO [evaluator] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;
+ BEGIN TRY GRANT SELECT ON [msdb].[dbo].[sysjobhistory] TO [evaluator] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;
+ BEGIN TRY GRANT SELECT ON [msdb].[dbo].[syscategories] TO [evaluator] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;
+ BEGIN TRY GRANT SELECT ON [msdb].[dbo].[sysjobs] TO [evaluator] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;
+ BEGIN TRY GRANT SELECT ON [msdb].[dbo].[sysmaintplan_plans] TO [evaluator] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;
+ BEGIN TRY GRANT SELECT ON [msdb].[dbo].[syscollector_collection_sets] TO [evaluator] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;
+ BEGIN TRY GRANT SELECT ON [msdb].[dbo].[sysmail_profile] TO [evaluator] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;
+ BEGIN TRY GRANT SELECT ON [msdb].[dbo].[sysmail_profileaccount] TO [evaluator] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;
+ BEGIN TRY GRANT SELECT ON [msdb].[dbo].[sysmail_account] TO [evaluator] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;
+ GO
+
+ -- Clean up
+ --use master;
+ -- EXECUTE sp_MSforeachdb 'USE [?]; BEGIN TRY DROP USER [evaluator] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;'
+ -- BEGIN TRY DROP LOGIN [evaluator] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;
+ --GO
+ ```
+ ## Web apps discovery requirements [Software inventory](how-to-discover-applications.md) identifies the web server role existing on discovered servers. If a server is found to have a web server installed, Azure Migrate discovers web apps on the server.
migrate Migrate Support Matrix Physical https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/migrate-support-matrix-physical.md
ms. Previously updated : 03/21/2023 Last updated : 04/13/2023
Support | Details
**Windows servers** | Windows Server 2008 and later are supported. **Linux servers** | Currently not supported. **Authentication mechanism** | Both Windows and SQL Server authentication are supported. You can provide credentials of both authentication types in the appliance configuration manager.
-**SQL Server access** | Azure Migrate requires a Windows user account that is a member of the sysadmin server role.
+**SQL Server access** | To discover SQL Server instances and databases, the Windows or SQL Server account must be a member of the sysadmin server role or have [these permissions](#configure-the-custom-login-for-sql-server-discovery) for each SQL Server instance.
**SQL Server versions** | SQL Server 2008 and later are supported. **SQL Server editions** | Enterprise, Standard, Developer, and Express editions are supported. **Supported SQL configuration** | Discovery of standalone, highly available, and disaster protected SQL deployments is supported. Discovery of HADR SQL deployments powered by Always On Failover Cluster Instances and Always On Availability Groups is also supported.
Support | Details
> > However, you can modify the connection settings by selecting **Edit SQL Server connection properties** on the appliance. [Learn more](/sql/database-engine/configure-windows/enable-encrypted-connections-to-the-database-engine) to understand what to choose.
+### Configure the custom login for SQL Server discovery
+
+The following are sample scripts for creating a login and provisioning it with the necessary permissions.
+
+#### Windows Authentication
+
+ ```sql
+ -- Create a login to run the assessment
+ use master;
+ -- If a SID needs to be specified, add here
+ DECLARE @SID NVARCHAR(MAX) = N'';
+ CREATE LOGIN [MYDOMAIN\MYACCOUNT] FROM WINDOWS;
+ SELECT @SID = N'0x'+CONVERT(NVARCHAR, sid, 2) FROM sys.syslogins where name = 'MYDOMAIN\MYACCOUNT'
+ IF (ISNULL(@SID,'') != '')
+ PRINT N'Created login [MYDOMAIN\MYACCOUNT] with SID = ' + @SID
+ ELSE
+ PRINT N'Login creation failed'
+ GO
+
+ -- Create user in every database other than tempdb and model and provide minimal read-only permissions.
+ use master;
+ EXECUTE sp_MSforeachdb 'USE [?]; IF (''?'' NOT IN (''tempdb'',''model'')) BEGIN TRY CREATE USER [MYDOMAIN\MYACCOUNT] FOR LOGIN [MYDOMAIN\MYACCOUNT] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH'
+ EXECUTE sp_MSforeachdb 'USE [?]; IF (''?'' NOT IN (''tempdb'',''model'')) BEGIN TRY GRANT SELECT ON sys.sql_expression_dependencies TO [MYDOMAIN\MYACCOUNT] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH'
+ EXECUTE sp_MSforeachdb 'USE [?]; IF (''?'' NOT IN (''tempdb'',''model'')) BEGIN TRY GRANT VIEW DATABASE STATE TO [MYDOMAIN\MYACCOUNT] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH'
+ GO
+
+ -- Provide server level read-only permissions
+ use master;
+ BEGIN TRY GRANT SELECT ON sys.sql_expression_dependencies TO [MYDOMAIN\MYACCOUNT] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;
+ BEGIN TRY GRANT EXECUTE ON OBJECT::sys.xp_regenumkeys TO [MYDOMAIN\MYACCOUNT] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;
+ BEGIN TRY GRANT VIEW DATABASE STATE TO [MYDOMAIN\MYACCOUNT] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;
+ BEGIN TRY GRANT VIEW SERVER STATE TO [MYDOMAIN\MYACCOUNT] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;
+ BEGIN TRY GRANT VIEW ANY DEFINITION TO [MYDOMAIN\MYACCOUNT] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;
+ GO
+
+ -- Required from SQL 2014 onwards for database connectivity.
+ use master;
+ BEGIN TRY GRANT CONNECT ANY DATABASE TO [MYDOMAIN\MYACCOUNT] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;
+ GO
+
+ -- Provide msdb specific permissions
+ use msdb;
+ BEGIN TRY GRANT EXECUTE ON [msdb].[dbo].[agent_datetime] TO [MYDOMAIN\MYACCOUNT] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;
+ BEGIN TRY GRANT SELECT ON [msdb].[dbo].[sysjobsteps] TO [MYDOMAIN\MYACCOUNT] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;
+ BEGIN TRY GRANT SELECT ON [msdb].[dbo].[syssubsystems] TO [MYDOMAIN\MYACCOUNT] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;
+ BEGIN TRY GRANT SELECT ON [msdb].[dbo].[sysjobhistory] TO [MYDOMAIN\MYACCOUNT] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;
+ BEGIN TRY GRANT SELECT ON [msdb].[dbo].[syscategories] TO [MYDOMAIN\MYACCOUNT] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;
+ BEGIN TRY GRANT SELECT ON [msdb].[dbo].[sysjobs] TO [MYDOMAIN\MYACCOUNT] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;
+ BEGIN TRY GRANT SELECT ON [msdb].[dbo].[sysmaintplan_plans] TO [MYDOMAIN\MYACCOUNT] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;
+ BEGIN TRY GRANT SELECT ON [msdb].[dbo].[syscollector_collection_sets] TO [MYDOMAIN\MYACCOUNT] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;
+ BEGIN TRY GRANT SELECT ON [msdb].[dbo].[sysmail_profile] TO [MYDOMAIN\MYACCOUNT] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;
+ BEGIN TRY GRANT SELECT ON [msdb].[dbo].[sysmail_profileaccount] TO [MYDOMAIN\MYACCOUNT] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;
+ BEGIN TRY GRANT SELECT ON [msdb].[dbo].[sysmail_account] TO [MYDOMAIN\MYACCOUNT] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;
+ GO
+
+ -- Clean up
+ --use master;
+ -- EXECUTE sp_MSforeachdb 'USE [?]; BEGIN TRY DROP USER [MYDOMAIN\MYACCOUNT] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;'
+ -- BEGIN TRY DROP LOGIN [MYDOMAIN\MYACCOUNT] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;
+ --GO
+ ```
+
+#### SQL Server Authentication
+
+ ```sql
+ -- Create a login to run the assessment
+ use master;
+ -- If a SID needs to be specified, add here
+ DECLARE @SID NVARCHAR(MAX) = N'';
+ IF (@SID = N'')
+ BEGIN
+ CREATE LOGIN [evaluator]
+ WITH PASSWORD = '<provide a strong password>'
+ END
+ ELSE
+ BEGIN
+ CREATE LOGIN [evaluator]
+ WITH PASSWORD = '<provide a strong password>'
+ , SID = @SID
+ END
+ SELECT @SID = N'0x'+CONVERT(NVARCHAR, sid, 2) FROM sys.syslogins where name = 'evaluator'
+ IF (ISNULL(@SID,'') != '')
+ PRINT N'Created login [evaluator] with SID = '+@SID
+ ELSE
+ PRINT N'Login creation failed'
+ GO
+
+ -- Create user in every database other than tempdb and model and provide minimal read-only permissions.
+ use master;
+ EXECUTE sp_MSforeachdb 'USE [?]; IF (''?'' NOT IN (''tempdb'',''model'')) BEGIN TRY CREATE USER [evaluator] FOR LOGIN [evaluator]END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH'
+ EXECUTE sp_MSforeachdb 'USE [?]; IF (''?'' NOT IN (''tempdb'',''model'')) BEGIN TRY GRANT SELECT ON sys.sql_expression_dependencies TO [evaluator]END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH'
+ EXECUTE sp_MSforeachdb 'USE [?]; IF (''?'' NOT IN (''tempdb'',''model'')) BEGIN TRY GRANT VIEW DATABASE STATE TO [evaluator]END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH'
+ GO
+
+ -- Provide server level read-only permissions
+ use master;
+ BEGIN TRY GRANT SELECT ON sys.sql_expression_dependencies TO [evaluator] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;
+ BEGIN TRY GRANT EXECUTE ON OBJECT::sys.xp_regenumkeys TO [evaluator] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;
+ BEGIN TRY GRANT VIEW DATABASE STATE TO [evaluator] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;
+ BEGIN TRY GRANT VIEW SERVER STATE TO [evaluator] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;
+ BEGIN TRY GRANT VIEW ANY DEFINITION TO [evaluator] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;
+ GO
+
+ -- Required from SQL 2014 onwards for database connectivity.
+ use master;
+ BEGIN TRY GRANT CONNECT ANY DATABASE TO [evaluator] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;
+ GO
+
+ -- Provide msdb specific permissions
+ use msdb;
+ BEGIN TRY GRANT EXECUTE ON [msdb].[dbo].[agent_datetime] TO [evaluator] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;
+ BEGIN TRY GRANT SELECT ON [msdb].[dbo].[sysjobsteps] TO [evaluator] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;
+ BEGIN TRY GRANT SELECT ON [msdb].[dbo].[syssubsystems] TO [evaluator] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;
+ BEGIN TRY GRANT SELECT ON [msdb].[dbo].[sysjobhistory] TO [evaluator] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;
+ BEGIN TRY GRANT SELECT ON [msdb].[dbo].[syscategories] TO [evaluator] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;
+ BEGIN TRY GRANT SELECT ON [msdb].[dbo].[sysjobs] TO [evaluator] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;
+ BEGIN TRY GRANT SELECT ON [msdb].[dbo].[sysmaintplan_plans] TO [evaluator] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;
+ BEGIN TRY GRANT SELECT ON [msdb].[dbo].[syscollector_collection_sets] TO [evaluator] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;
+ BEGIN TRY GRANT SELECT ON [msdb].[dbo].[sysmail_profile] TO [evaluator] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;
+ BEGIN TRY GRANT SELECT ON [msdb].[dbo].[sysmail_profileaccount] TO [evaluator] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;
+ BEGIN TRY GRANT SELECT ON [msdb].[dbo].[sysmail_account] TO [evaluator] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;
+ GO
+
+ -- Clean up
+ --use master;
+ -- EXECUTE sp_MSforeachdb 'USE [?]; BEGIN TRY DROP USER [evaluator] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;'
+ -- BEGIN TRY DROP LOGIN [evaluator] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;
+ --GO
+ ```
+ ## Web apps discovery requirements [Software inventory](how-to-discover-applications.md) identifies the web server role existing on discovered servers. If a server is found to have a web server installed, Azure Migrate discovers web apps on the server.
migrate Migrate Support Matrix Vmware https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/migrate-support-matrix-vmware.md
ms. Previously updated : 03/08/2023 Last updated : 04/13/2023
Support | Details
**Windows servers** | Windows Server 2008 and later are supported. **Linux servers** | Currently not supported. **Authentication mechanism** | Both Windows and SQL Server authentication are supported. You can provide credentials of both authentication types in the appliance configuration manager.
-**SQL Server access** | Azure Migrate requires a Windows user account that is a member of the sysadmin server role.
+**SQL Server access** | To discover SQL Server instances and databases, the Windows or SQL Server account must be a member of the sysadmin server role or have [these permissions](#configure-the-custom-login-for-sql-server-discovery) for each SQL Server instance.
**SQL Server versions** | SQL Server 2008 and later are supported. **SQL Server editions** | Enterprise, Standard, Developer, and Express editions are supported. **Supported SQL configuration** | Discovery of standalone, highly available, and disaster protected SQL deployments is supported. Discovery of HADR SQL deployments powered by Always On Failover Cluster Instances and Always On Availability Groups is also supported.
Support | Details
> > However, you can modify the connection settings by selecting **Edit SQL Server connection properties** on the appliance. [Learn more](/sql/database-engine/configure-windows/enable-encrypted-connections-to-the-database-engine) to understand what to choose.
+### Configure the custom login for SQL Server discovery
+
+The following are sample scripts for creating a login and provisioning it with the necessary permissions.
+
+#### Windows Authentication
+
+ ```sql
+ -- Create a login to run the assessment
+ use master;
+ -- If a SID needs to be specified, add here
+ DECLARE @SID NVARCHAR(MAX) = N'';
+ CREATE LOGIN [MYDOMAIN\MYACCOUNT] FROM WINDOWS;
+ SELECT @SID = N'0x'+CONVERT(NVARCHAR, sid, 2) FROM sys.syslogins where name = 'MYDOMAIN\MYACCOUNT'
+ IF (ISNULL(@SID,'') != '')
+ PRINT N'Created login [MYDOMAIN\MYACCOUNT] with SID = ' + @SID
+ ELSE
+ PRINT N'Login creation failed'
+ GO
+
+ -- Create user in every database other than tempdb and model and provide minimal read-only permissions.
+ use master;
+ EXECUTE sp_MSforeachdb 'USE [?]; IF (''?'' NOT IN (''tempdb'',''model'')) BEGIN TRY CREATE USER [MYDOMAIN\MYACCOUNT] FOR LOGIN [MYDOMAIN\MYACCOUNT] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH'
+ EXECUTE sp_MSforeachdb 'USE [?]; IF (''?'' NOT IN (''tempdb'',''model'')) BEGIN TRY GRANT SELECT ON sys.sql_expression_dependencies TO [MYDOMAIN\MYACCOUNT] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH'
+ EXECUTE sp_MSforeachdb 'USE [?]; IF (''?'' NOT IN (''tempdb'',''model'')) BEGIN TRY GRANT VIEW DATABASE STATE TO [MYDOMAIN\MYACCOUNT] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH'
+ GO
+
+ -- Provide server level read-only permissions
+ use master;
+ BEGIN TRY GRANT SELECT ON sys.sql_expression_dependencies TO [MYDOMAIN\MYACCOUNT] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;
+ BEGIN TRY GRANT EXECUTE ON OBJECT::sys.xp_regenumkeys TO [MYDOMAIN\MYACCOUNT] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;
+ BEGIN TRY GRANT VIEW DATABASE STATE TO [MYDOMAIN\MYACCOUNT] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;
+ BEGIN TRY GRANT VIEW SERVER STATE TO [MYDOMAIN\MYACCOUNT] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;
+ BEGIN TRY GRANT VIEW ANY DEFINITION TO [MYDOMAIN\MYACCOUNT] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;
+ GO
+
+ -- Required from SQL 2014 onwards for database connectivity.
+ use master;
+ BEGIN TRY GRANT CONNECT ANY DATABASE TO [MYDOMAIN\MYACCOUNT] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;
+ GO
+
+ -- Provide msdb specific permissions
+ use msdb;
+ BEGIN TRY GRANT EXECUTE ON [msdb].[dbo].[agent_datetime] TO [MYDOMAIN\MYACCOUNT] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;
+ BEGIN TRY GRANT SELECT ON [msdb].[dbo].[sysjobsteps] TO [MYDOMAIN\MYACCOUNT] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;
+ BEGIN TRY GRANT SELECT ON [msdb].[dbo].[syssubsystems] TO [MYDOMAIN\MYACCOUNT] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;
+ BEGIN TRY GRANT SELECT ON [msdb].[dbo].[sysjobhistory] TO [MYDOMAIN\MYACCOUNT] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;
+ BEGIN TRY GRANT SELECT ON [msdb].[dbo].[syscategories] TO [MYDOMAIN\MYACCOUNT] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;
+ BEGIN TRY GRANT SELECT ON [msdb].[dbo].[sysjobs] TO [MYDOMAIN\MYACCOUNT] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;
+ BEGIN TRY GRANT SELECT ON [msdb].[dbo].[sysmaintplan_plans] TO [MYDOMAIN\MYACCOUNT] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;
+ BEGIN TRY GRANT SELECT ON [msdb].[dbo].[syscollector_collection_sets] TO [MYDOMAIN\MYACCOUNT] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;
+ BEGIN TRY GRANT SELECT ON [msdb].[dbo].[sysmail_profile] TO [MYDOMAIN\MYACCOUNT] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;
+ BEGIN TRY GRANT SELECT ON [msdb].[dbo].[sysmail_profileaccount] TO [MYDOMAIN\MYACCOUNT] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;
+ BEGIN TRY GRANT SELECT ON [msdb].[dbo].[sysmail_account] TO [MYDOMAIN\MYACCOUNT] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;
+ GO
+
+ -- Clean up
+ --use master;
+ -- EXECUTE sp_MSforeachdb 'USE [?]; BEGIN TRY DROP USER [MYDOMAIN\MYACCOUNT] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;'
+ -- BEGIN TRY DROP LOGIN [MYDOMAIN\MYACCOUNT] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;
+ --GO
+ ```
+
+#### SQL Server Authentication
+
+ ```sql
+ -- Create a login to run the assessment
+ use master;
+ -- If a SID needs to be specified, add here
+ DECLARE @SID NVARCHAR(MAX) = N'';
+ IF (@SID = N'')
+ BEGIN
+ CREATE LOGIN [evaluator]
+ WITH PASSWORD = '<provide a strong password>'
+ END
+ ELSE
+ BEGIN
+ CREATE LOGIN [evaluator]
+ WITH PASSWORD = '<provide a strong password>'
+ , SID = @SID
+ END
+ SELECT @SID = N'0x'+CONVERT(NVARCHAR, sid, 2) FROM sys.syslogins where name = 'evaluator'
+ IF (ISNULL(@SID,'') != '')
+ PRINT N'Created login [evaluator] with SID = '+@SID
+ ELSE
+ PRINT N'Login creation failed'
+ GO
+
+ -- Create user in every database other than tempdb and model and provide minimal read-only permissions.
+ use master;
+ EXECUTE sp_MSforeachdb 'USE [?]; IF (''?'' NOT IN (''tempdb'',''model'')) BEGIN TRY CREATE USER [evaluator] FOR LOGIN [evaluator]END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH'
+ EXECUTE sp_MSforeachdb 'USE [?]; IF (''?'' NOT IN (''tempdb'',''model'')) BEGIN TRY GRANT SELECT ON sys.sql_expression_dependencies TO [evaluator]END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH'
+ EXECUTE sp_MSforeachdb 'USE [?]; IF (''?'' NOT IN (''tempdb'',''model'')) BEGIN TRY GRANT VIEW DATABASE STATE TO [evaluator]END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH'
+ GO
+
+ -- Provide server level read-only permissions
+ use master;
+ BEGIN TRY GRANT SELECT ON sys.sql_expression_dependencies TO [evaluator] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;
+ BEGIN TRY GRANT EXECUTE ON OBJECT::sys.xp_regenumkeys TO [evaluator] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;
+ BEGIN TRY GRANT VIEW DATABASE STATE TO [evaluator] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;
+ BEGIN TRY GRANT VIEW SERVER STATE TO [evaluator] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;
+ BEGIN TRY GRANT VIEW ANY DEFINITION TO [evaluator] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;
+ GO
+
+ -- Required from SQL 2014 onwards for database connectivity.
+ use master;
+ BEGIN TRY GRANT CONNECT ANY DATABASE TO [evaluator] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;
+ GO
+
+ -- Provide msdb specific permissions
+ use msdb;
+ BEGIN TRY GRANT EXECUTE ON [msdb].[dbo].[agent_datetime] TO [evaluator] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;
+ BEGIN TRY GRANT SELECT ON [msdb].[dbo].[sysjobsteps] TO [evaluator] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;
+ BEGIN TRY GRANT SELECT ON [msdb].[dbo].[syssubsystems] TO [evaluator] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;
+ BEGIN TRY GRANT SELECT ON [msdb].[dbo].[sysjobhistory] TO [evaluator] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;
+ BEGIN TRY GRANT SELECT ON [msdb].[dbo].[syscategories] TO [evaluator] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;
+ BEGIN TRY GRANT SELECT ON [msdb].[dbo].[sysjobs] TO [evaluator] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;
+ BEGIN TRY GRANT SELECT ON [msdb].[dbo].[sysmaintplan_plans] TO [evaluator] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;
+ BEGIN TRY GRANT SELECT ON [msdb].[dbo].[syscollector_collection_sets] TO [evaluator] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;
+ BEGIN TRY GRANT SELECT ON [msdb].[dbo].[sysmail_profile] TO [evaluator] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;
+ BEGIN TRY GRANT SELECT ON [msdb].[dbo].[sysmail_profileaccount] TO [evaluator] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;
+ BEGIN TRY GRANT SELECT ON [msdb].[dbo].[sysmail_account] TO [evaluator] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;
+ GO
+
+ -- Clean up
+ --use master;
+ -- EXECUTE sp_MSforeachdb 'USE [?]; BEGIN TRY DROP USER [evaluator] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;'
+ -- BEGIN TRY DROP LOGIN [evaluator] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;
+ --GO
+ ```
+ ## Web apps discovery requirements [Software inventory](how-to-discover-applications.md) identifies the web server role existing on discovered servers. If a server has a web server installed, Azure Migrate discovers web apps on the server.
migrate Quickstart Create Migrate Project https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/quickstart-create-migrate-project.md
ms. -+ # Quickstart: Create an Azure Migrate project using an ARM template
migrate Tutorial Assess Sql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-assess-sql.md
In this tutorial, you learn how to:
## Prerequisites - If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/pricing/free-trial/) before you begin.- - Before you follow this tutorial to assess your SQL Server instances for migration to Azure SQL, make sure you've discovered the SQL instances you want to assess using the Azure Migrate appliance; to do so, [follow this tutorial](tutorial-discover-vmware.md). - If you want to try out this feature in an existing project, ensure that you have completed the [prerequisites](how-to-discover-sql-existing-project.md) in this article.
migrate Tutorial Discover Hyper V https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-discover-hyper-v.md
ms. Previously updated : 03/16/2023 Last updated : 04/13/2023 #Customer intent: As a Hyper-V admin, I want to discover my on-premises servers on Hyper-V.
Before you start this tutorial, check you have these prerequisites in place.
**Hyper-V host** | Hyper-V hosts on which servers are located can be standalone, or in a cluster.<br/><br/> The host must be running Windows Server 2019, Windows Server 2016, or Windows Server 2012 R2.<br/><br/> Verify inbound connections are allowed on WinRM port 5985 (HTTP), so that the appliance can connect to pull server metadata and performance data, using a Common Information Model (CIM) session. **Appliance deployment** | Hyper-V host needs resources to allocate a server for the appliance:<br/><br/> - 16 GB of RAM, 8 vCPUs, and around 80 GB of disk storage.<br/><br/> - An external virtual switch, and internet access on the appliance, directly or via a proxy. **Servers** | All Windows and Linux OS versions are supported for discovery of configuration and performance metadata. <br /><br /> For application discovery on servers, all Windows and Linux OS versions are supported. Check the [OS versions supported for agentless dependency analysis](migrate-support-matrix-hyper-v.md#dependency-analysis-requirements-agentless).<br /><br /> To discover ASP.NET web apps running on IIS web server, check [supported Windows OS and IIS versions](migrate-support-matrix-vmware.md#web-apps-discovery-requirements).For discovery of installed applications and for agentless dependency analysis, Windows servers must have PowerShell version 2.0 or later installed.<br /><br /> To discover Java web apps running on Apache Tomcat web server, check [supported Linux OS and Tomcat versions](migrate-support-matrix-vmware.md#web-apps-discovery-requirements).
+**SQL Server access** | To discover SQL Server instances and databases, the Windows or SQL Server account must be a member of the sysadmin server role or have [these permissions](migrate-support-matrix-hyper-v.md#configure-the-custom-login-for-sql-server-discovery) for each SQL Server instance.
## Prepare an Azure user account
migrate Tutorial Discover Physical https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-discover-physical.md
ms. Previously updated : 03/16/2023 Last updated : 04/13/2023 #Customer intent: As a server admin I want to discover my on-premises server inventory.
Before you start this tutorial, ensure you have these prerequisites in place.
**Appliance** | You need a server to run the Azure Migrate appliance. The server should have:<br/><br/> - Windows Server 2016 installed.<br/> _(Currently the deployment of appliance is only supported on Windows Server 2016.)_<br/><br/> - 16-GB RAM, 8 vCPUs, around 80 GB of disk storage<br/><br/> - A static or dynamic IP address, with internet access, either directly or through a proxy.<br/><br/> - Outbound internet connectivity to the required [URLs](migrate-appliance.md#url-access) from the appliance. **Windows servers** | Allow inbound connections on WinRM port 5985 (HTTP) for discovery of Windows servers.<br /><br /> To discover ASP.NET web apps running on IIS web server, check [supported Windows OS and IIS versions](migrate-support-matrix-vmware.md#web-apps-discovery-requirements). **Linux servers** | Allow inbound connections on port 22 (TCP) for discovery of Linux servers.<br /><br /> To discover Java web apps running on Apache Tomcat web server, check [supported Linux OS and Tomcat versions](migrate-support-matrix-vmware.md#web-apps-discovery-requirements).
+**SQL Server access** | To discover SQL Server instances and databases, the Windows or SQL Server account must be a member of the sysadmin server role or have [these permissions](migrate-support-matrix-physical.md#configure-the-custom-login-for-sql-server-discovery) for each SQL Server instance.
> [!NOTE] > It is unsupported to install the Azure Migrate Appliance on a server that has the [replication appliance](migrate-replication-appliance.md) or mobility service agent installed. Ensure that the appliance server has not been previously used to set up the replication appliance or has the mobility service agent installed on the server.
migrate Tutorial Discover Vmware https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-discover-vmware.md
ms. Previously updated : 03/16/2023 Last updated : 04/13/2023 #Customer intent: As an VMware admin, I want to discover my on-premises servers running in a VMware environment.
Requirement | Details
**vCenter Server/ESXi host** | You need a server running vCenter Server version 7.0, 6.7, 6.5, 6.0, or 5.5.<br /><br /> Servers must be hosted on an ESXi host running version 5.5 or later.<br /><br /> On the vCenter Server, allow inbound connections on TCP port 443 so that the appliance can collect configuration and performance metadata.<br /><br /> The appliance connects to vCenter Server on port 443 by default. If the server running vCenter Server listens on a different port, you can modify the port when you provide the vCenter Server details in the appliance configuration manager.<br /><br /> On the ESXi hosts, make sure that inbound access is allowed on TCP port 443 for discovery of installed applications and for agentless dependency analysis on servers. **Azure Migrate appliance** | vCenter Server must have these resources to allocate to a server that hosts the Azure Migrate appliance:<br /><br /> - 32 GB of RAM, 8 vCPUs, and approximately 80 GB of disk storage.<br /><br /> - An external virtual switch and internet access on the appliance server, directly or via a proxy. **Servers** | All Windows and Linux OS versions are supported for discovery of configuration and performance metadata. <br /><br /> For application discovery on servers, all Windows and Linux OS versions are supported. Check the [OS versions supported for agentless dependency analysis](migrate-support-matrix-vmware.md#dependency-analysis-requirements-agentless).<br /><br /> For discovery of installed applications and for agentless dependency analysis, VMware Tools (version 10.2.1 or later) must be installed and running on servers. Windows servers must have PowerShell version 2.0 or later installed.<br /><br /> To discover SQL Server instances and databases, check [supported SQL Server and Windows OS versions and editions](migrate-support-matrix-vmware.md#sql-server-instance-and-database-discovery-requirements) and Windows authentication mechanisms.<br /><br /> To discover ASP.NET web apps running on IIS web server, check [supported Windows OS and IIS versions](migrate-support-matrix-vmware.md#web-apps-discovery-requirements).<br /><br /> To discover Java web apps running on Apache Tomcat web server, check [supported Linux OS and Tomcat versions](migrate-support-matrix-vmware.md#web-apps-discovery-requirements).
+**SQL Server access** | To discover SQL Server instances and databases, the Windows or SQL Server account must be a member of the sysadmin server role or have [these permissions](migrate-support-matrix-vmware.md#configure-the-custom-login-for-sql-server-discovery) for each SQL Server instance.
## Prepare an Azure user account
migrate Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/whats-new.md
ms. Previously updated : 03/03/2023 Last updated : 04/13/2023
mysql How To Data Encryption Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/how-to-data-encryption-cli.md
Last updated 11/21/2022 -+
mysql How To Manage Server Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/how-to-manage-server-cli.md
+ Last updated 9/21/2020
az mysql flexible-server delete --resource-group myresourcegroup --name mydemose
- [Learn how to start or stop a server](how-to-stop-start-server-portal.md) - [Learn how to manage a virtual network](how-to-manage-virtual-network-cli.md) - [Troubleshoot connection issues](how-to-troubleshoot-common-connection-issues.md)-- [Create and manage firewall](how-to-manage-firewall-cli.md)
+- [Create and manage firewall](how-to-manage-firewall-cli.md)
mysql How To Manage Virtual Network Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/how-to-manage-virtual-network-cli.md
+ Last updated 9/21/2020
Refer to the Azure CLI [reference documentation](/cli/azure/mysql/flexible-serve
## Next steps - Learn more about [networking in Azure Database for MySQL - Flexible Server](./concepts-networking.md). - [Create and manage Azure Database for MySQL - Flexible Server virtual network using Azure portal](./how-to-manage-virtual-network-portal.md).-- Understand more about [Azure Database for MySQL - Flexible Server virtual network](./concepts-networking-vnet.md#private-access-vnet-integration).
+- Understand more about [Azure Database for MySQL - Flexible Server virtual network](./concepts-networking-vnet.md#private-access-vnet-integration).
mysql How To Restart Stop Start Server Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/how-to-restart-stop-start-server-cli.md
Title: Restart/Stop/start - Azure portal - Azure Database for MySQL - Flexible S
description: This article describes how to restart/stop/start operations in Azure Database for MySQL through the Azure CLI. +
az mysql flexible-server restart
## Next steps - Learn more about [networking in Azure Database for MySQL - Flexible Server](./concepts-networking.md) - [Create and manage Azure Database for MySQL - Flexible Server virtual network using Azure portal](./how-to-manage-virtual-network-portal.md).-
mysql How To Restore Server Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/how-to-restore-server-cli.md
Title: Restore Azure Database for MySQL - Flexible Server with Azure CLI
description: This article describes how to perform restore operations in Azure Database for MySQL through the Azure CLI. +
mysql How To Server Logs Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/how-to-server-logs-cli.md
Title: 'Monitoring - List and Download Server logs using Azure CLI'
description: This article describes how to download and list server logs using Azure CLI. +
mysql How To Troubleshoot Cli Errors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/how-to-troubleshoot-cli-errors.md
Title: Troubleshoot Azure Database for MySQL - Flexible Server CLI errors
description: This topic gives guidance on troubleshooting common issues with Azure CLI when using MySQL Flexible Server. +
mysql How To Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/how-to-upgrade.md
Title: Azure Database for MySQL - flexible server - major version upgrade
description: Learn how to upgrade major version for an Azure Database for MySQL - Flexible server. -+
mysql Quickstart Create Arm Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/quickstart-create-arm-template.md
-+ Last updated 02/16/2023
mysql Quickstart Create Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/quickstart-create-bicep.md
Title: 'Quickstart: Create an Azure Database for MySQL - Flexible Server - Bicep
description: In this Quickstart, learn how to create an Azure Database for MySQL - Flexible Server by using Bicep. +
mysql Tutorial Configure Audit https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/tutorial-configure-audit.md
Title: 'Tutorial: Configure audit logs by using Azure Database for MySQL - Flexi
description: 'This tutorial shows you how to configure audit logs by using Azure Database for MySQL - Flexible Server.' +
mysql Tutorial Query Performance Insights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/tutorial-query-performance-insights.md
Title: 'Tutorial: Query Performance Insight for Azure Database for MySQL - Flexi
description: 'This article shows you the tools to help visualize Query Performance Insight for Azure Database for MySQL - Flexible Server.' +
In some cases, a high execution count can lead to more network round trips. Roun
## Next steps - [Learn more about Azure Monitor workbooks](../../azure-monitor/visualize/workbooks-overview.md#visualizations) and their rich visualization options. - [Learn more about slow query logs](concepts-slow-query-logs.md).--
mysql 15 Appendix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/migrate/mysql-on-premises-azure-db/15-appendix.md
-+ Last updated 06/21/2021
The ARM template deploys resources using standard deployment where all resources
## Default server parameters MySQL 5.5 and Azure Database for MySQL
-You can find the [full listing of default server parameters of MySQL 5.5 and Azure Database for MySQL](https://github.com/Azure/azure-mysql/blob/master/MigrationGuide/MySQL%20Migration%20Guide_v1.1%20Appendix%20C.pdf) in our GitHub repository.
+You can find the [full listing of default server parameters of MySQL 5.5 and Azure Database for MySQL](https://github.com/Azure/azure-mysql/blob/master/MigrationGuide/MySQL%20Migration%20Guide_v1.1%20Appendix%20C.pdf) in our GitHub repository.
mysql Quickstart Create Mysql Server Database Using Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/quickstart-create-mysql-server-database-using-bicep.md
-+ Last updated 05/02/2022
mysql How To Data Encryption Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-data-encryption-cli.md
Last updated 06/20/2022-+ # Data encryption for Azure Database for MySQL by using the Azure CLI
Additionally, you can use Azure Resource Manager templates to enable data encryp
* [Validating data encryption for Azure Database for MySQL](how-to-data-encryption-validation.md) * [Troubleshoot data encryption in Azure Database for MySQL](how-to-data-encryption-troubleshoot.md) * [Data encryption with customer-managed key concepts](concepts-data-encryption-mysql.md).-
mysql How To Major Version Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-major-version-upgrade.md
Title: Major version upgrade in Azure Database for MySQL - Single Server
description: This article describes how you can upgrade major version for Azure Database for MySQL - Single Server +
You can still continue running your MySQL v5.6 server as before. Azure **will ne
## Next steps
-Learn about [Azure Database for MySQL versioning policy](../concepts-version-policy.md).
+Learn about [Azure Database for MySQL versioning policy](../concepts-version-policy.md).
mysql How To Manage Single Server Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-manage-single-server-cli.md
Title: Manage server - Azure CLI - Azure Database for MySQL
description: Learn how to manage an Azure Database for MySQL server from the Azure CLI. +
mysql Quickstart Create Mysql Server Database Using Arm Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/quickstart-create-mysql-server-database-using-arm-template.md
-+ Last updated 06/20/2022
mysql Tutorial Provision Mysql Server Using Azure Resource Manager Templates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/tutorial-provision-mysql-server-using-azure-resource-manager-templates.md
Last updated 06/20/2022-+ # Tutorial: Provision an Azure Database for MySQL server using Azure Resource Manager template
network-watcher Connection Monitor Create Using Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/connection-monitor-create-using-template.md
Last updated 02/08/2021-+ #Customer intent: I need to create a connection monitor to monitor communication between one VM and another.
network-watcher Diagnose Vm Network Routing Problem Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/diagnose-vm-network-routing-problem-cli.md
network-watcher
Last updated 03/18/2022 -+ # Customer intent: I need to diagnose virtual machine (VM) network routing problem that prevents communication to different destinations.
network-watcher Network Watcher Connectivity Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-connectivity-cli.md
+ Last updated 01/07/2021
The following json is the example response from running the previous cmdlet. As
Learn how to automate packet captures with Virtual machine alerts by viewing [Create an alert triggered packet capture](network-watcher-alert-triggered-packet-capture.md)
-Find if certain traffic is allowed in or out of your VM by visiting [Check IP flow verify](diagnose-vm-network-traffic-filtering-problem.md)
+Find if certain traffic is allowed in or out of your VM by visiting [Check IP flow verify](diagnose-vm-network-traffic-filtering-problem.md)
network-watcher Network Watcher Nsg Flow Logging Azure Resource Manager https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-nsg-flow-logging-azure-resource-manager.md
Last updated 02/09/2022 -+ # Manage network security group flow logs using an Azure Resource Manager template
network-watcher Network Watcher Nsg Flow Logging Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-nsg-flow-logging-cli.md
Last updated 12/09/2021 -+ # Manage network security group flow logs using the Azure CLI
network-watcher Network Watcher Security Group View Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-security-group-view-cli.md
Last updated 12/09/2021 -+ # Analyze your Virtual Machine security with Security Group View using Azure CLI
network-watcher Quickstart Configure Network Security Group Flow Logs From Arm Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/quickstart-configure-network-security-group-flow-logs-from-arm-template.md
Last updated 09/01/2022 -+ #Customer intent: I need to enable the network security group flow logs by using an Azure Resource Manager template.
network-watcher Quickstart Configure Network Security Group Flow Logs From Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/quickstart-configure-network-security-group-flow-logs-from-bicep.md
Last updated 08/26/2022 -+ #Customer intent: I need to enable the network security group flow logs by using a Bicep file.
network-watcher Traffic Analytics Schema https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/traffic-analytics-schema.md
Title: Traffic analytics schema
+ Title: Traffic analytics schema and data aggregation
-description: Understand schema of Traffic Analytics to analyze Azure network security group flow logs.
+description: Learn about schema and data aggregation in Azure Network Watcher traffic analytics to analyze flow logs.
Previously updated : 03/29/2022- Last updated : 04/11/2023 + # Schema and data aggregation in Azure Network Watcher traffic analytics
-Traffic Analytics is a cloud-based solution that provides visibility into user and application activity in cloud networks. Traffic Analytics analyzes Network Watcher network security group (NSG) flow logs to provide insights into traffic flow in your Azure cloud. With traffic analytics, you can:
+Traffic analytics is a cloud-based solution that provides visibility into user and application activity in cloud networks. Traffic analytics analyzes Azure Network Watcher flow logs to provide insights into traffic flow in your Azure cloud. With traffic analytics, you can:
- Visualize network activity across your Azure subscriptions and identify hot spots.-- Identify security threats to, and secure your network, with information such as open-ports, applications attempting internet access, and virtual machines (VM) connecting to rogue networks.
+- Identify security threats and secure your network with information such as open ports, applications attempting internet access, and virtual machines (VMs) connecting to rogue networks.
- Understand traffic flow patterns across Azure regions and the internet to optimize your network deployment for performance and capacity.
- Pinpoint network misconfigurations leading to failed connections in your network.
- Know network usage in bytes, packets, or flows.
-### Data aggregation
+## Data aggregation
-1. All flow logs at an NSG between ΓÇ£FlowIntervalStartTime_tΓÇ¥ and ΓÇ£FlowIntervalEndTime_tΓÇ¥ are captured at one-minute intervals in the storage account as blobs before being processed by Traffic Analytics.
-2. Default processing interval of Traffic Analytics is 60 minutes. This means that every 60 mins Traffic Analytics picks blobs from storage for aggregation. If processing interval chosen is 10 mins, Traffic Analytics will pick blobs from storage account after every 10 mins.
-3. Flows that have the same Source IP, Destination IP, Destination port, NSG name, NSG rule, Flow Direction, and Transport layer protocol (TCP or UDP) (Note: Source port is excluded for aggregation) are clubbed into a single flow by Traffic Analytics
-4. This single record is decorated (Details in the section below) and ingested in Log Analytics by Traffic Analytics.This process can take upto 1 hour max.
-5. FlowStartTime_t field indicates the first occurrence of such an aggregated flow (same four-tuple) in the flow log processing interval between ΓÇ£FlowIntervalStartTime_tΓÇ¥ and ΓÇ£FlowIntervalEndTime_tΓÇ¥.
-6. For any resource in TA, the flows indicated in the UI are total flows seen by the NSG, but in Log Analytics user will see only the single, reduced record. To see all the flows, use the blob_id field, which can be referenced from Storage. The total flow count for that record will match the individual flows seen in the blob.
+- All flow logs at a network security group between `FlowIntervalStartTime_t` and `FlowIntervalEndTime_t` are captured at one-minute intervals as blobs in a storage account.
+- The default processing interval of traffic analytics is 60 minutes, meaning that every hour, traffic analytics picks blobs from the storage account for aggregation. If a processing interval of 10 minutes is selected, traffic analytics instead picks blobs from the storage account every 10 minutes.
+- Flows that have the same `Source IP`, `Destination IP`, `Destination port`, `NSG name`, `NSG rule`, `Flow Direction`, and `Transport layer protocol` (TCP or UDP) (Note: source port is excluded for aggregation) are clubbed into a single flow by traffic analytics.
+- This single record is decorated (see details in the following section) and ingested into Log Analytics by traffic analytics. This process can take up to one hour.
+- The `FlowStartTime_t` field indicates the first occurrence of such an aggregated flow (same four-tuple) in the flow log processing interval between `FlowIntervalStartTime_t` and `FlowIntervalEndTime_t`.
+- For any resource in traffic analytics, the flows shown in the Azure portal are the total flows seen by the network security group, but in Log Analytics the user sees only the single, reduced record. To see all the flows, use the `blob_id` field, which can be referenced from storage. The total flow count for that record matches the individual flows seen in the blob. An example query illustrating the aggregated records follows this list.
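
For example, a minimal query sketch (assuming traffic analytics is already populating the `AzureNetworkAnalytics_CL` table in your Log Analytics workspace) lists recent aggregated flow records together with the number of raw flows each record represents:

```
// List recent aggregated flow records and show how many
// individual flows each aggregated record represents.
AzureNetworkAnalytics_CL
| where SubType_s == "FlowLog" and FlowStartTime_t >= ago(1d)
| project FlowStartTime_t, SrcIP_s, DestIP_s, DestPort_d, L4Protocol_s,
    InboundFlows = AllowedInFlows_d + DeniedInFlows_d,
    OutboundFlows = AllowedOutFlows_d + DeniedOutFlows_d
| top 10 by OutboundFlows desc
```
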
-The below query helps you look at all subnets interacting with non-Azure public IPs in the last 30 days.
+The following query helps you look at all subnets interacting with non-Azure public IPs in the last 30 days.
``` AzureNetworkAnalytics_CL
AzureNetworkAnalytics_CL
| project Subnet1_s, Subnet2_s ```
-To view the blob path for the flows in the above mentioned query, use the query below:
+To view the blob path for the flows in the previous query, use the following query:
``` let TableWithBlobId =
TableWithBlobId
| project Subnet_s , BlobPath ```
-The above query constructs a URL to access the blob directly. The URL with placeholders is below:
+The previous query constructs a URL to access the blob directly. The URL with placeholders is as follows:
``` https://{saName}@insights-logs-networksecuritygroupflowevent/resoureId=/SUBSCRIPTIONS/{subscriptionId}/RESOURCEGROUPS/{resourceGroup}/PROVIDERS/MICROSOFT.NETWORK/NETWORKSECURITYGROUPS/{nsgName}/y={year}/m={month}/d={day}/h={hour}/m=00/macAddress={macAddress}/PT1H.json ```
-### Fields used in Traffic Analytics schema
+## Traffic analytics schema
> [!IMPORTANT]
-> The Traffic Analytics schema was updated August 22, 2019. The new schema provides source and destination IPs separately, removing need to parse the FlowDirection field so that queries are simpler. These changes were made:
+> The traffic analytics schema was updated on August 22, 2019. The new schema provides source and destination IPs separately, removing the need to parse the `FlowDirection` field so that queries are simpler. The updated schema includes the following changes:
>
-> - FASchemaVersion_s updated from 1 to 2.
-> - Deprecated fields: VMIP_s, Subscription_s, Region_s, NSGRules_s, Subnet_s, VM_s, NIC_s, PublicIPs_s, FlowCount_d
-> - New fields: SrcPublicIPs_s, DestPublicIPs_s, NSGRule_s
+> - `FASchemaVersion_s` updated from 1 to 2.
+> - Deprecated fields: `VMIP_s`, `Subscription_s`, `Region_s`, `NSGRules_s`, `Subnet_s`, `VM_s`, `NIC_s`, `PublicIPs_s`, `FlowCount_d`
+> - New fields: `SrcPublicIPs_s`, `DestPublicIPs_s`, `NSGRule_s`
> > Deprecated fields are available until November 2022. >
-Traffic Analytics is built on top of Log Analytics, so you can run custom queries on data decorated by Traffic Analytics and set alerts on the same.
+Traffic analytics is built on top of Log Analytics, so you can run custom queries on data decorated by traffic analytics and set alerts on the same.
-Listed below are the fields in the schema and what they signify
+The following table lists the fields in the schema and what they signify.
| Field | Format | Comments |
-|: |: |: |
-| TableName | AzureNetworkAnalytics_CL | Table for Traffic Analytics data
-| SubType_s | FlowLog | Subtype for the flow logs. Use only "FlowLog", other values of SubType_s are for internal workings of the product |
-| FASchemaVersion_s | 2 | Schema version. Does not reflect NSG Flow Log version |
-| TimeProcessed_t | Date and Time in UTC | Time at which the Traffic Analytics processed the raw flow logs from the storage account |
-| FlowIntervalStartTime_t | Date and Time in UTC | Starting time of the flow log processing interval. This is time from which flow interval is measured |
-| FlowIntervalEndTime_t | Date and Time in UTC | Ending time of the flow log processing interval |
-| FlowStartTime_t | Date and Time in UTC | First occurrence of the flow (which will get aggregated) in the flow log processing interval between ΓÇ£FlowIntervalStartTime_tΓÇ¥ and ΓÇ£FlowIntervalEndTime_tΓÇ¥. This flow gets aggregated based on aggregation logic |
-| FlowEndTime_t | Date and Time in UTC | Last occurrence of the flow (which will get aggregated) in the flow log processing interval between ΓÇ£FlowIntervalStartTime_tΓÇ¥ and ΓÇ£FlowIntervalEndTime_tΓÇ¥. In terms of flow log v2, this field contains the time when the last flow with the same four-tuple started (marked as ΓÇ£BΓÇ¥ in the raw flow record) |
-| FlowType_s | * IntraVNet <br> * InterVNet <br> * S2S <br> * P2S <br> * AzurePublic <br> * ExternalPublic <br> * MaliciousFlow <br> * Unknown Private <br> * Unknown | Definition in notes below the table |
-| SrcIP_s | Source IP address | Will be blank in case of AzurePublic and ExternalPublic flows |
-| DestIP_s | Destination IP address | Will be blank in case of AzurePublic and ExternalPublic flows |
-| VMIP_s | IP of the VM | Used for AzurePublic and ExternalPublic flows |
-| PublicIP_s | Public IP addresses | Used for AzurePublic and ExternalPublic flows |
-| DestPort_d | Destination Port | Port at which traffic is incoming |
-| L4Protocol_s | * T <br> * U | Transport Protocol. T = TCP <br> U = UDP |
-| L7Protocol_s | Protocol Name | Derived from destination port |
-| FlowDirection_s | * I = Inbound<br> * O = Outbound | Direction of the flow in/out of NSG as per flow log |
-| FlowStatus_s | * A = Allowed by NSG Rule <br> * D = Denied by NSG Rule | Status of flow allowed/nblocked by NSG as per flow log |
-| NSGList_s | \<SUBSCRIPTIONID>\/<RESOURCEGROUP_NAME>\/<NSG_NAME> | Network Security Group (NSG) associated with the flow |
-| NSGRules_s | \<Index value 0)>\|\<NSG_RULENAME>\|\<Flow Direction>\|\<Flow Status>\|\<FlowCount ProcessedByRule> | NSG rule that allowed or denied this flow |
-| NSGRule_s | NSG_RULENAME | NSG rule that allowed or denied this flow |
-| NSGRuleType_s | * User Defined * Default | The type of NSG Rule used by the flow |
-| MACAddress_s | MAC Address | MAC address of the NIC at which the flow was captured |
-| Subscription_s | Subscription of the Azure virtual network/ network interface/ virtual machine is populated in this field | Applicable only for FlowType = S2S, P2S, AzurePublic, ExternalPublic, MaliciousFlow, and UnknownPrivate flow types (flow types where only one side is Azure) |
-| Subscription1_s | Subscription ID | Subscription ID of virtual network/ network interface/ virtual machine to which the source IP in the flow belongs to |
-| Subscription2_s | Subscription ID | Subscription ID of virtual network/ network interface/ virtual machine to which the destination IP in the flow belongs to |
-| Region_s | Azure region of virtual network/ network interface/ virtual machine to which the IP in the flow belongs to | Applicable only for FlowType = S2S, P2S, AzurePublic, ExternalPublic, MaliciousFlow, and UnknownPrivate flow types (flow types where only one side is Azure) |
-| Region1_s | Azure Region | Azure region of virtual network/ network interface/ virtual machine to which the source IP in the flow belongs to |
-| Region2_s | Azure Region | Azure region of virtual network to which the destination IP in the flow belongs to |
-| NIC_s | \<resourcegroup_Name>\/\<NetworkInterfaceName> | NIC associated with the VM sending or receiving the traffic |
-| NIC1_s | <resourcegroup_Name>/\<NetworkInterfaceName> | NIC associated with the source IP in the flow |
-| NIC2_s | <resourcegroup_Name>/\<NetworkInterfaceName> | NIC associated with the destination IP in the flow |
-| VM_s | <resourcegroup_Name>\/\<NetworkInterfaceName> | Virtual Machine associated with the Network interface NIC_s |
-| VM1_s | <resourcegroup_Name>/\<VirtualMachineName> | Virtual Machine associated with the source IP in the flow |
-| VM2_s | <resourcegroup_Name>/\<VirtualMachineName> | Virtual Machine associated with the destination IP in the flow |
-| Subnet_s | <ResourceGroup_Name>/<VNET_Name>/\<SubnetName> | Subnet associated with the NIC_s |
-| Subnet1_s | <ResourceGroup_Name>/<VNET_Name>/\<SubnetName> | Subnet associated with the Source IP in the flow |
-| Subnet2_s | <ResourceGroup_Name>/<VNET_Name>/\<SubnetName> | Subnet associated with the Destination IP in the flow |
-| ApplicationGateway1_s | \<SubscriptionID>/\<ResourceGroupName>/\<ApplicationGatewayName> | Application gateway associated with the Source IP in the flow |
-| ApplicationGateway2_s | \<SubscriptionID>/\<ResourceGroupName>/\<ApplicationGatewayName> | Application gateway associated with the Destination IP in the flow |
-| LoadBalancer1_s | \<SubscriptionID>/\<ResourceGroupName>/\<LoadBalancerName> | Load balancer associated with the Source IP in the flow |
-| LoadBalancer2_s | \<SubscriptionID>/\<ResourceGroupName>/\<LoadBalancerName> | Load balancer associated with the Destination IP in the flow |
-| LocalNetworkGateway1_s | \<SubscriptionID>/\<ResourceGroupName>/\<LocalNetworkGatewayName> | Local network gateway associated with the Source IP in the flow |
-| LocalNetworkGateway2_s | \<SubscriptionID>/\<ResourceGroupName>/\<LocalNetworkGatewayName> | Local network gateway associated with the Destination IP in the flow |
-| ConnectionType_s | Possible values are VNetPeering, VpnGateway, and ExpressRoute | Connection Type |
-| ConnectionName_s | \<SubscriptionID>/\<ResourceGroupName>/\<ConnectionName> | Connection Name. For flow type P2S, this will be formatted as \<gateway name\>_\<VPN Client IP\> |
-| ConnectingVNets_s | Space separated list of virtual network names | In case of hub and spoke topology, hub virtual networks will be populated here |
-| Country_s | Two letter country code (ISO 3166-1 alpha-2) | Populated for flow type ExternalPublic. All IP addresses in PublicIPs_s field will share the same country code |
-| AzureRegion_s | Azure region locations | Populated for flow type AzurePublic. All IP addresses in PublicIPs_s field will share the Azure region |
-| AllowedInFlows_d | | Count of inbound flows that were allowed. This represents the number of flows that shared the same four-tuple inbound to the network interface at which the flow was captured |
-| DeniedInFlows_d | | Count of inbound flows that were denied. (Inbound to the network interface at which the flow was captured) |
-| AllowedOutFlows_d | | Count of outbound flows that were allowed (Outbound to the network interface at which the flow was captured) |
-| DeniedOutFlows_d | | Count of outbound flows that were denied (Outbound to the network interface at which the flow was captured) |
-| FlowCount_d | Deprecated. Total flows that matched the same four-tuple. In case of flow types ExternalPublic and AzurePublic, count will include the flows from various PublicIP addresses as well.
-| InboundPackets_d | Represents packets sent from the destination to the source of the flow | This is populated only for the Version 2 of NSG flow log schema |
-| OutboundPackets_d | Represents packets sent from the source to the destination of the flow | This is populated only for the Version 2 of NSG flow log schema |
-| InboundBytes_d | Represents bytes sent from the destination to the source of the flow | This is populated only for the Version 2 of NSG flow log schema |
-| OutboundBytes_d |Represents bytes sent from the source to the destination of the flow | This is populated only for the Version 2 of NSG flow log schema |
-| CompletedFlows_d | | This is populated with non-zero value only for the Version 2 of NSG flow log schema |
-| PublicIPs_s | <PUBLIC_IP>\|\<FLOW_STARTED_COUNT>\|\<FLOW_ENDED_COUNT>\|\<OUTBOUND_PACKETS>\|\<INBOUND_PACKETS>\|\<OUTBOUND_BYTES>\|\<INBOUND_BYTES> | Entries separated by bars |
-| SrcPublicIPs_s | <SOURCE_PUBLIC_IP>\|\<FLOW_STARTED_COUNT>\|\<FLOW_ENDED_COUNT>\|\<OUTBOUND_PACKETS>\|\<INBOUND_PACKETS>\|\<OUTBOUND_BYTES>\|\<INBOUND_BYTES> | Entries separated by bars |
-| DestPublicIPs_s | <DESTINATION_PUBLIC_IP>\|\<FLOW_STARTED_COUNT>\|\<FLOW_ENDED_COUNT>\|\<OUTBOUND_PACKETS>\|\<INBOUND_PACKETS>\|\<OUTBOUND_BYTES>\|\<INBOUND_BYTES> | Entries separated by bars |
-
-### Public IP Details Schema
-
-Traffic Analytics provides WHOIS data and geographic location for all public IPs in the customer's environment. For Malicious IP, it provides DNS domain, threat type and thread descriptions as identified by Microsoft security intelligence solutions. IP Details are published to your Log Analytics Workspace so you can create custom queries and put alerts on them. You can also access pre-populated queries from the traffic analytics dashboard.
-
-Below is the schema for public ip details:
+| -- | -- | -- |
+| TableName | AzureNetworkAnalytics_CL | Table for traffic analytics data. |
+| SubType_s | FlowLog | Subtype for the flow logs. Use only "FlowLog", other values of SubType_s are for internal workings of the product. |
+| FASchemaVersion_s | 2 | Schema version. Doesn't reflect NSG flow log version. |
+| TimeProcessed_t | Date and Time in UTC | Time at which the traffic analytics processed the raw flow logs from the storage account. |
+| FlowIntervalStartTime_t | Date and Time in UTC | Starting time of the flow log processing interval (time from which flow interval is measured). |
+| FlowIntervalEndTime_t | Date and Time in UTC | Ending time of the flow log processing interval. |
+| FlowStartTime_t | Date and Time in UTC | First occurrence of the flow (which will get aggregated) in the flow log processing interval between `FlowIntervalStartTime_t` and `FlowIntervalEndTime_t`. This flow gets aggregated based on aggregation logic. |
+| FlowEndTime_t | Date and Time in UTC | Last occurrence of the flow (which will get aggregated) in the flow log processing interval between `FlowIntervalStartTime_t` and `FlowIntervalEndTime_t`. In terms of flow log v2, this field contains the time when the last flow with the same four-tuple started (marked as `B` in the raw flow record). |
+| FlowType_s | * IntraVNet <br> * InterVNet <br> * S2S <br> * P2S <br> * AzurePublic <br> * ExternalPublic <br> * MaliciousFlow <br> * Unknown Private <br> * Unknown | Definition in notes below the table. |
+| SrcIP_s | Source IP address | Will be blank in case of AzurePublic and ExternalPublic flows. |
+| DestIP_s | Destination IP address | Will be blank in case of AzurePublic and ExternalPublic flows. |
+| VMIP_s | IP of the VM | Used for AzurePublic and ExternalPublic flows. |
+| PublicIP_s | Public IP addresses | Used for AzurePublic and ExternalPublic flows. |
+| DestPort_d | Destination Port | Port at which traffic is incoming. |
+| L4Protocol_s | * T <br> * U | Transport Protocol. T = TCP <br> U = UDP. |
+| L7Protocol_s | Protocol Name | Derived from destination port. |
+| FlowDirection_s | * I = Inbound<br> * O = Outbound | Direction of the flow in/out of NSG as per flow log. |
+| FlowStatus_s | * A = Allowed by NSG Rule <br> * D = Denied by NSG Rule | Status of flow allowed/blocked by NSG as per flow log. |
+| NSGList_s | \<SUBSCRIPTIONID>\/<RESOURCEGROUP_NAME>\/<NSG_NAME> | Network Security Group (NSG) associated with the flow. |
+| NSGRules_s | \<Index value 0)>\|\<NSG_RULENAME>\|\<Flow Direction>\|\<Flow Status>\|\<FlowCount ProcessedByRule> | NSG rule that allowed or denied this flow. |
+| NSGRule_s | NSG_RULENAME | NSG rule that allowed or denied this flow. |
+| NSGRuleType_s | * User Defined * Default | The type of NSG Rule used by the flow. |
+| MACAddress_s | MAC Address | MAC address of the NIC at which the flow was captured. |
+| Subscription_s | Subscription of the Azure virtual network/ network interface/ virtual machine is populated in this field | Applicable only for FlowType = S2S, P2S, AzurePublic, ExternalPublic, MaliciousFlow, and UnknownPrivate flow types (flow types where only one side is Azure). |
+| Subscription1_s | Subscription ID | Subscription ID of virtual network/ network interface/ virtual machine to which the source IP in the flow belongs to. |
+| Subscription2_s | Subscription ID | Subscription ID of virtual network/ network interface/ virtual machine to which the destination IP in the flow belongs to. |
+| Region_s | Azure region of virtual network/ network interface/ virtual machine to which the IP in the flow belongs to | Applicable only for FlowType = S2S, P2S, AzurePublic, ExternalPublic, MaliciousFlow, and UnknownPrivate flow types (flow types where only one side is Azure). |
+| Region1_s | Azure Region | Azure region of virtual network/ network interface/ virtual machine to which the source IP in the flow belongs to. |
+| Region2_s | Azure Region | Azure region of virtual network to which the destination IP in the flow belongs to. |
+| NIC_s | \<resourcegroup_Name>\/\<NetworkInterfaceName> | NIC associated with the VM sending or receiving the traffic. |
+| NIC1_s | <resourcegroup_Name>/\<NetworkInterfaceName> | NIC associated with the source IP in the flow. |
+| NIC2_s | <resourcegroup_Name>/\<NetworkInterfaceName> | NIC associated with the destination IP in the flow. |
+| VM_s | <resourcegroup_Name>\/\<NetworkInterfaceName> | Virtual Machine associated with the Network interface NIC_s. |
+| VM1_s | <resourcegroup_Name>/\<VirtualMachineName> | Virtual Machine associated with the source IP in the flow. |
+| VM2_s | <resourcegroup_Name>/\<VirtualMachineName> | Virtual Machine associated with the destination IP in the flow. |
+| Subnet_s | <ResourceGroup_Name>/<VNET_Name>/\<SubnetName> | Subnet associated with the NIC_s. |
+| Subnet1_s | <ResourceGroup_Name>/<VNET_Name>/\<SubnetName> | Subnet associated with the Source IP in the flow. |
+| Subnet2_s | <ResourceGroup_Name>/<VNET_Name>/\<SubnetName> | Subnet associated with the Destination IP in the flow. |
+| ApplicationGateway1_s | \<SubscriptionID>/\<ResourceGroupName>/\<ApplicationGatewayName> | Application gateway associated with the Source IP in the flow. |
+| ApplicationGateway2_s | \<SubscriptionID>/\<ResourceGroupName>/\<ApplicationGatewayName> | Application gateway associated with the Destination IP in the flow. |
+| LoadBalancer1_s | \<SubscriptionID>/\<ResourceGroupName>/\<LoadBalancerName> | Load balancer associated with the Source IP in the flow. |
+| LoadBalancer2_s | \<SubscriptionID>/\<ResourceGroupName>/\<LoadBalancerName> | Load balancer associated with the Destination IP in the flow. |
+| LocalNetworkGateway1_s | \<SubscriptionID>/\<ResourceGroupName>/\<LocalNetworkGatewayName> | Local network gateway associated with the Source IP in the flow. |
+| LocalNetworkGateway2_s | \<SubscriptionID>/\<ResourceGroupName>/\<LocalNetworkGatewayName> | Local network gateway associated with the Destination IP in the flow. |
+| ConnectionType_s | Possible values are VNetPeering, VpnGateway, and ExpressRoute | Connection Type. |
+| ConnectionName_s | \<SubscriptionID>/\<ResourceGroupName>/\<ConnectionName> | Connection Name. For flow type P2S, it will be formatted as \<gateway name\>_\<VPN Client IP\>. |
+| ConnectingVNets_s | Space separated list of virtual network names | In case of hub and spoke topology, hub virtual networks will be populated here. |
+| Country_s | Two letter country code (ISO 3166-1 alpha-2) | Populated for flow type ExternalPublic. All IP addresses in PublicIPs_s field will share the same country code. |
+| AzureRegion_s | Azure region locations | Populated for flow type AzurePublic. All IP addresses in PublicIPs_s field will share the Azure region. |
+| AllowedInFlows_d | | Count of inbound flows that were allowed. This represents the number of flows that shared the same four-tuple inbound to the network interface at which the flow was captured. |
+| DeniedInFlows_d | | Count of inbound flows that were denied. (Inbound to the network interface at which the flow was captured). |
+| AllowedOutFlows_d | | Count of outbound flows that were allowed (Outbound to the network interface at which the flow was captured). |
+| DeniedOutFlows_d | | Count of outbound flows that were denied (Outbound to the network interface at which the flow was captured). |
+| FlowCount_d | Deprecated | Total flows that matched the same four-tuple. In case of flow types ExternalPublic and AzurePublic, the count includes the flows from various PublicIP addresses as well. |
+| InboundPackets_d | Represents packets sent from the destination to the source of the flow | This field is only populated for Version 2 of NSG flow log schema. |
+| OutboundPackets_d | Represents packets sent from the source to the destination of the flow | This field is only populated for Version 2 of NSG flow log schema. |
+| InboundBytes_d | Represents bytes sent from the destination to the source of the flow | This field is only populated for Version 2 of NSG flow log schema. |
+| OutboundBytes_d | Represents bytes sent from the source to the destination of the flow | This field is only populated for Version 2 of NSG flow log schema. |
+| CompletedFlows_d | | This field is only populated with a nonzero value for Version 2 of NSG flow log schema. |
+| PublicIPs_s | <PUBLIC_IP>\|\<FLOW_STARTED_COUNT>\|\<FLOW_ENDED_COUNT>\|\<OUTBOUND_PACKETS>\|\<INBOUND_PACKETS>\|\<OUTBOUND_BYTES>\|\<INBOUND_BYTES> | Entries separated by bars. |
+| SrcPublicIPs_s | <SOURCE_PUBLIC_IP>\|\<FLOW_STARTED_COUNT>\|\<FLOW_ENDED_COUNT>\|\<OUTBOUND_PACKETS>\|\<INBOUND_PACKETS>\|\<OUTBOUND_BYTES>\|\<INBOUND_BYTES> | Entries separated by bars. |
+| DestPublicIPs_s | <DESTINATION_PUBLIC_IP>\|\<FLOW_STARTED_COUNT>\|\<FLOW_ENDED_COUNT>\|\<OUTBOUND_PACKETS>\|\<INBOUND_PACKETS>\|\<OUTBOUND_BYTES>\|\<INBOUND_BYTES> | Entries separated by bars. |
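
As an illustration of how these fields can be used, here's a hedged sketch (field names are taken from the preceding table; it assumes flow logging and traffic analytics are enabled) that counts denied flows per network security group rule over the last week:

```
// Count denied flows per NSG rule during the last seven days,
// using the FlowStatus_s and NSGRule_s fields documented above.
AzureNetworkAnalytics_CL
| where SubType_s == "FlowLog" and FlowStatus_s == "D"
| where FlowIntervalEndTime_t >= ago(7d)
| summarize DeniedFlows = sum(DeniedInFlows_d + DeniedOutFlows_d) by NSGList_s, NSGRule_s
| order by DeniedFlows desc
```
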
+
+## Public IP details schema
+
+Traffic analytics provides WHOIS data and geographic location for all public IPs in your environment. For a malicious IP, traffic analytics provides the DNS domain, threat type, and threat description as identified by Microsoft security intelligence solutions. IP details are published to your Log Analytics workspace so you can create custom queries and set alerts on them. You can also access prepopulated queries from the traffic analytics dashboard.
+
+The following table details the public IP schema:
| Field | Format | Comments |
-|: |: |: |
-| TableName | AzureNetworkAnalyticsIPDetails_CL | Table that contains Traffic Analytics IP Details data |
-| SubType_s | FlowLog | Subtype for the flow logs. **Use only "FlowLog"**, other values of SubType_s are for internal workings of the product |
-| FASchemaVersion_s | 2 | Schema version. It does not reflect NSG Flow Log version |
-| FlowIntervalStartTime_t | Date and Time in UTC | Start time of the flow log processing interval. This is time from which flow interval is measured |
-| FlowIntervalEndTime_t | Date and Time in UTC | End time of the flow log processing interval |
-| FlowType_s | * AzurePublic <br> * ExternalPublic <br> * MaliciousFlow | Definition in notes below the table |
-| IP | Public IP | Public IP whose information is provided in the record |
-| Location | Location of the IP | - For Azure Public IP: Azure region of virtual network/network interface/virtual machine to which the IP belongs OR Global for IP [168.63.129.16](../virtual-network/what-is-ip-address-168-63-129-16.md) <br> - For External Public IP and Malicious IP: 2-letter country code where IP is located (ISO 3166-1 alpha-2) |
-| PublicIPDetails | Information about IP | - For AzurePublic IP: Azure Service owning the IP OR "Microsoft Virtual Public IP" for IP [168.63.129.16](../virtual-network/what-is-ip-address-168-63-129-16.md) <br> - ExternalPublic/Malicious IP: WhoIS information of the IP |
-| ThreatType | Threat posed by malicious IP | **For Malicious IPs only**: One of the threats from the list of currently allowed values (described below) |
-| ThreatDescription | Description of the threat | **For Malicious IPs only**: Description of the threat posed by the malicious IP |
-| DNSDomain | DNS domain | **For Malicious IPs only**: Domain name associated with this IP |
-
-List of Threat Types:
+| -- | -- | -- |
+| TableName | AzureNetworkAnalyticsIPDetails_CL | Table that contains traffic analytics IP details data. |
+| SubType_s | FlowLog | Subtype for the flow logs. **Use only "FlowLog"**, other values of SubType_s are for internal workings of the product. |
+| FASchemaVersion_s | 2 | Schema version. It doesn't reflect NSG flow log version. |
+| FlowIntervalStartTime_t | Date and Time in UTC | Start time of the flow log processing interval (time from which flow interval is measured). |
+| FlowIntervalEndTime_t | Date and Time in UTC | End time of the flow log processing interval. |
+| FlowType_s | - AzurePublic <br> - ExternalPublic <br> - MaliciousFlow | Definition in notes below the table. |
+| IP | Public IP | Public IP whose information is provided in the record. |
+| Location | Location of the IP | - For Azure Public IP: Azure region of virtual network/network interface/virtual machine to which the IP belongs OR Global for IP [168.63.129.16](../virtual-network/what-is-ip-address-168-63-129-16.md). <br> - For External Public IP and Malicious IP: 2-letter country code where IP is located (ISO 3166-1 alpha-2). |
+| PublicIPDetails | Information about IP | - For AzurePublic IP: Azure Service owning the IP or Microsoft virtual public IP for [168.63.129.16](../virtual-network/what-is-ip-address-168-63-129-16.md). <br> - ExternalPublic/Malicious IP: WhoIS information of the IP. |
+| ThreatType | Threat posed by malicious IP | **For Malicious IPs only**: One of the threats from the list of currently allowed values (described in the next table). |
+| ThreatDescription | Description of the threat | **For Malicious IPs only**: Description of the threat posed by the malicious IP. |
+| DNSDomain | DNS domain | **For Malicious IPs only**: Domain name associated with this IP. |
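
To illustrate the public IP details schema, the following sketch (assuming traffic analytics writes to the `AzureNetworkAnalyticsIPDetails_CL` table in your workspace; column names are as documented in the preceding table) lists malicious public IPs seen in the last day along with their threat details:

```
// List malicious public IPs observed in the last day, with threat details.
AzureNetworkAnalyticsIPDetails_CL
| where SubType_s == "FlowLog" and FlowType_s == "MaliciousFlow"
| where FlowIntervalEndTime_t >= ago(1d)
| project IP, Location, ThreatType, ThreatDescription, DNSDomain
```
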
+
+List of threat types:
| Value | Description |
-|: |: |
-| Botnet | Indicator is detailing a botnet node/member. |
-| C2 | Indicator is detailing a Command & Control node of a botnet. |
+| -- | -- |
+| Botnet | Indicator detailing a botnet node/member. |
+| C2 | Indicator detailing a Command & Control node of a botnet. |
| CryptoMining | Traffic involving this network address / URL is an indication of CryptoMining / Resource abuse. |
-| DarkNet | Indicator is that of a Darknet node/network. |
+| DarkNet | Indicator of a Darknet node/network. |
| DDos | Indicators relating to an active or upcoming DDoS campaign. |
| MaliciousUrl | URL that is serving malware. |
| Malware | Indicator describing a malicious file or files. |
| Phishing | Indicators relating to a phishing campaign. |
-| Proxy | Indicator is that of a proxy service. |
+| Proxy | Indicator of a proxy service. |
| PUA | Potentially Unwanted Application. |
-| WatchList | This is the generic bucket into which indicators are placed when it cannot be determined exactly what the threat is or will require manual interpretation. This should typically not be used by partners submitting data into the system. |
--
-### Notes
-
-1. In case of AzurePublic and ExternalPublic flows, the customer owned Azure VM IP is populated in VMIP_s field, while the Public IP addresses are being populated in the PublicIPs_s field. For these two flow types, we should use VMIP_s and PublicIPs_s instead of SrcIP_s and DestIP_s fields. For AzurePublic and ExternalPublicIP addresses, we aggregate further, so that the number of records ingested to customer log analytics workspace is minimal.(This field will be deprecated soon and we should be using SrcIP_ and DestIP_s depending on whether Azure VM was the source or the destination in the flow)
-1. Details for flow types: Based on the IP addresses involved in the flow, we categorize the flows in to the following flow types:
-1. IntraVNet ΓÇô Both the IP addresses in the flow reside in the same Azure Virtual Network.
-1. InterVNet - IP addresses in the flow reside in the two different Azure Virtual Networks.
-1. S2S ΓÇô (Site To Site) One of the IP addresses belongs to Azure Virtual Network while the other IP address belongs to customer network (Site) connected to the Azure Virtual Network through VPN gateway or Express Route.
-1. P2S - (Point To Site) One of the IP addresses belongs to Azure Virtual Network while the other IP address belongs to customer network (Site) connected to the Azure Virtual Network through VPN gateway.
-1. AzurePublic - One of the IP addresses belongs to Azure Virtual Network while the other IP address belongs to Azure Internal Public IP addresses owned by Microsoft. Customer owned Public IP addresses wonΓÇÖt be part of this flow type. For instance, any customer owned VM sending traffic to an Azure Service (Storage endpoint) would be categorized under this flow type.
-1. ExternalPublic - One of the IP addresses belongs to Azure Virtual Network while the other IP address is a public IP that is not in Azure, is not reported as malicious in the ASC feeds that Traffic Analytics consumes for the processing interval between ΓÇ£FlowIntervalStartTime_tΓÇ¥ and ΓÇ£FlowIntervalEndTime_tΓÇ¥.
-1. MaliciousFlow - One of the IP addresses belong to Azure virtual network while the other IP address is a public IP that is not in Azure and is reported as malicious in the ASC feeds that Traffic Analytics consumes for the processing interval between ΓÇ£FlowIntervalStartTime_tΓÇ¥ and ΓÇ£FlowIntervalEndTime_tΓÇ¥.
-1. UnknownPrivate - One of the IP addresses belong to Azure Virtual Network while the other IP address belongs to private IP range as defined in RFC 1918 and could not be mapped by Traffic Analytics to a customer owned site or Azure Virtual Network.
-1. Unknown ΓÇô Unable to map the either of the IP addresses in the flows with the customer topology in Azure as well as on-premises (site).
-1. Some field names are appended with \_s or \_d. These do NOT signify source and destination but indicate the data types string and decimal respectively.
-
-### Next Steps
-To get answers to frequently asked questions, see [Traffic analytics FAQ](traffic-analytics-faq.yml)
-To see details about functionality, see [Traffic analytics documentation](traffic-analytics.md)
+| WatchList | A generic bucket into which indicators are placed when it can't be determined exactly what the threat is or will require manual interpretation. `WatchList` should typically not be used by partners submitting data into the system. |
+
+## Notes
+
+- In case of `AzurePublic` and `ExternalPublic` flows, the customer-owned Azure virtual machine IP is populated in the `VMIP_s` field, while the public IP addresses are populated in the `PublicIPs_s` field. For these two flow types, you should use `VMIP_s` and `PublicIPs_s` instead of the `SrcIP_s` and `DestIP_s` fields. For AzurePublic and ExternalPublic IP addresses, we aggregate further, so that the number of records ingested into the Log Analytics workspace is minimal. (This field will be deprecated soon, and you should use `SrcIP_s` and `DestIP_s` depending on whether the virtual machine was the source or the destination in the flow.)
+- Some field names are appended with `_s` or `_d`, which don't signify source and destination but indicate the data types *string* and *decimal* respectively.
+- Based on the IP addresses involved in the flow, we categorize the flows into the following flow types (an example query follows this list):
+ - `IntraVNet`: Both IP addresses in the flow reside in the same Azure virtual network.
+ - `InterVNet`: IP addresses in the flow reside in two different Azure virtual networks.
+ - `S2S` (Site-To-Site): One of the IP addresses belongs to an Azure virtual network, while the other IP address belongs to a customer network (site) connected to the virtual network through a VPN gateway or ExpressRoute.
+ - `P2S` (Point-To-Site): One of the IP addresses belongs to an Azure virtual network, while the other IP address belongs to a customer network (site) connected to the Azure virtual network through a VPN gateway.
+ - `AzurePublic`: One of the IP addresses belongs to an Azure virtual network, while the other IP address is an Azure Public IP address owned by Microsoft. Customer owned Public IP addresses aren't part of this flow type. For instance, any customer owned VM sending traffic to an Azure service (Storage endpoint) would be categorized under this flow type.
+ - `ExternalPublic`: One of the IP addresses belongs to an Azure virtual network, while the other IP address is a public IP that isn't in Azure and isn't reported as malicious in the ASC feeds that traffic analytics consumes for the processing interval between `FlowIntervalStartTime_t` and `FlowIntervalEndTime_t`.
+ - `MaliciousFlow`: One of the IP addresses belongs to an Azure virtual network, while the other IP address is a public IP that isn't in Azure and is reported as malicious in the ASC feeds that traffic analytics consumes for the processing interval between `FlowIntervalStartTime_t` and `FlowIntervalEndTime_t`.
+ - `UnknownPrivate`: One of the IP addresses belongs to an Azure virtual network, while the other IP address belongs to the private IP range defined in RFC 1918 and couldn't be mapped by traffic analytics to a customer owned site or Azure virtual network.
+ - `Unknown`: Unable to map either of the IP addresses in the flow with the customer topology in Azure and on-premises (site).
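
As a small illustration of these flow types, the following sketch (again assuming data in the `AzureNetworkAnalytics_CL` table) breaks down flow records by `FlowType_s` for the last day:

```
// Count aggregated flow records per flow type over the last day.
AzureNetworkAnalytics_CL
| where SubType_s == "FlowLog" and FlowEndTime_t >= ago(1d)
| summarize FlowRecords = count() by FlowType_s
| order by FlowRecords desc
```
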
+
+## Next steps
+
+- To learn more about traffic analytics, see [Azure Network Watcher Traffic analytics](traffic-analytics.md).
+- See [Traffic analytics FAQ](traffic-analytics-faq.yml) for answers to traffic analytics frequently asked questions.
++
network-watcher View Network Topology https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/view-network-topology.md
-+ Last updated 11/11/2022
networking Check Usage Against Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/networking/check-usage-against-limits.md
documentationcenter: na
tags: azure-resource-manager+ na
networking Nva Accelerated Connections https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/networking/nva-accelerated-connections.md
This list will be updated as more regions become available. The following region
* North Central US * West Central US
+* East US
+* West US
## Supported SKUs
networking Powershell Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/networking/powershell-samples.md
Title: Azure PowerShell Samples - Networking
description: Learn about Azure PowerShell samples for networking, including a sample for creating a virtual network for multi-tier applications. + Last updated 03/23/2023
networking Load Balancer Linux Cli Load Balance Multiple Websites Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/networking/scripts/load-balancer-linux-cli-load-balance-multiple-websites-vm.md
ms.devlang: azurecli+
This script uses the following commands to create a resource group, virtual netw
For more information on the Azure CLI, see [Azure CLI documentation](/cli/azure).
-Additional networking CLI script samples can be found in the [Azure Networking Overview documentation](../cli-samples.md?toc=%2fazure%2fnetworking%2ftoc.json).
+Additional networking CLI script samples can be found in the [Azure Networking Overview documentation](../cli-samples.md?toc=%2fazure%2fnetworking%2ftoc.json).
networking Traffic Manager Cli Websites High Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/networking/scripts/traffic-manager-cli-websites-high-availability.md
ms.devlang: azurecli+ na
This script uses the following commands to create a resource group, web app, tra
For more information on the Azure CLI, see [Azure CLI documentation](/cli/azure).
-Additional App Service CLI script samples can be found in the [Azure Networking documentation](../cli-samples.md).
+Additional App Service CLI script samples can be found in the [Azure Networking documentation](../cli-samples.md).
networking Virtual Network Filter Network Traffic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/networking/scripts/virtual-network-filter-network-traffic.md
Title: Azure CLI script sample - Filter VM network traffic
description: Use an Azure CLI script to filter inbound and outbound virtual machine (VM) network traffic with front-end and back-end subnets. + Last updated 03/23/2023 - # Use an Azure CLI script to filter inbound and outbound VM network traffic
notification-hubs Create Notification Hub Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/notification-hubs/create-notification-hub-bicep.md
Last updated 05/24/2022 -+ # Quickstart: Create a notification hub using Bicep
notification-hubs Create Notification Hub Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/notification-hubs/create-notification-hub-template.md
Last updated 09/21/2022
ms.lastreviewed: 05/15/2020 -+ # Quickstart: Create a notification hub using a Resource Manager template
notification-hubs Samples Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/notification-hubs/samples-powershell.md
editor: jwargo
na+ Last updated 01/04/2019
openshift Howto Create Service Principal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/howto-create-service-principal.md
Title: Creating and using a service principal with an Azure Red Hat OpenShift cl
description: In this how-to article, learn how to create and use a service principal with an Azure Red Hat OpenShift cluster using Azure CLI or the Azure portal. + Last updated 10/18/2022 topic: how-to keywords: azure, openshift, aro, red hat, azure CLI, azure portal
-#Customer intent: I need to create and use an Azure service principal to restrict permissions to my Azure Red Hat OpenShift cluster.
zone_pivot_groups: azure-red-hat-openshift-service-principal
+#Customer intent: I need to create and use an Azure service principal to restrict permissions to my Azure Red Hat OpenShift cluster.
# Create and use a service principal to deploy an Azure Red Hat OpenShift cluster
openshift Intro Openshift https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/intro-openshift.md
Previously updated : 11/13/2020 Last updated : 01/13/2023 + # Azure Red Hat OpenShift
openshift Quickstart Openshift Arm Bicep Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/quickstart-openshift-arm-bicep-template.md
description: In this Quickstart, learn how to create an Azure Red Hat OpenShift
-+ Last updated 02/15/2023 keywords: azure, openshift, aro, red hat, arm, bicep
operator-nexus Howto Configure Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/howto-configure-cluster.md
Last updated 03/03/2023 #Required; mm/dd/yyyy format.-+ # Create and provision a Cluster using Azure CLI
operator-nexus Howto Configure Network Fabric Controller https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/howto-configure-network-fabric-controller.md
Last updated 02/06/2023 #Required; mm/dd/yyyy format.-+ # Create and modify a Network Fabric Controller using Azure CLI
operator-nexus Howto Configure Network Fabric https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/howto-configure-network-fabric.md
Last updated 03/26/2023 #Required; mm/dd/yyyy format.-+ # Create and Provision a Network Fabric using Azure CLI
operator-nexus Howto Hybrid Aks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/howto-hybrid-aks.md
This document shows how to manage an AKS-Hybrid cluster that you use for CNF wor
You need:
-1. You should have created an [AKS-Hybrid Cluster](./quickstarts-tenant-workload-deployment.md#section-k-how-to-create-aks-hybrid-cluster-for-deploying-cnf-workloads)
+1. You should have created an [AKS-Hybrid Cluster](./quickstarts-tenant-workload-deployment.md#create-aks-hybrid-clusters-for-cnf-workloads)
2. <`YourAKS-HybridClusterName`>: the name of your previously created AKS-Hybrid cluster 3. <`YourSubscription`>: your subscription name or ID where the AKS-Hybrid cluster was created 4. <`YourResourceGroupName`>: the name of the Resource group where the AKS-Hybrid cluster was created
operator-nexus Howto Install Cli Extensions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/howto-install-cli-extensions.md
description: Learn to install the needed Azure CLI extensions for Operator Nexus
+ Last updated 03/06/2023 #
operator-nexus Howto Monitor Aks H Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/howto-monitor-aks-h-cluster.md
Last updated 01/26/2023 #Required; mm/dd/yyyy format.-+ # Monitor AKS-hybrid cluster
operator-nexus Quickstarts Tenant Workload Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/quickstarts-tenant-workload-deployment.md
Title: How to deploy tenant workloads
-description: Learn the steps for creating VMs for VNF workloads and for creating AKS-Hybrid clusters for CNF workloads
---- Previously updated : 03/10/2023 #Required; mm/dd/yyyy format.-
+ Title: Deploy tenant workloads
+description: Learn the steps for creating VMs for VNF workloads and for creating AKS hybrid clusters for CNF workloads.
++++ Last updated : 03/10/2023+
-# How-to deploy tenant workloads
+# Deploy tenant workloads
-This how-to guide explains the steps for deploying VNF and CNF workloads. Section V (for VM-based deployments) deals with creating VMs and to deploy VNF workloads. Section K (for Kubernetes; based deployments) specifies steps for creating AKS-Hybrid clusters for deploying CNF workloads.
+This guide explains how to deploy virtual network function (VNF) and cloud-native network function (CNF) workloads. The first part deals with creating VMs and deploying VNF workloads in virtual machine (VM)-based deployments. The second part provides steps for creating Azure Kubernetes Service (AKS) hybrid clusters for deploying CNF workloads in Kubernetes-based deployments.
-You shouldn't use the examples verbatim as they don't specify all required parameters.
+Don't use the examples verbatim, because they don't specify all required parameters.
## Before you begin
-You should complete the prerequisites specified [here](./quickstarts-tenant-workload-prerequisites.md).
+Complete the [prerequisites](./quickstarts-tenant-workload-prerequisites.md).
-## Section V: how to create VMs for deploying VNF workloads
+## Create VMs for deploying VNF workloads
-Step-V1: [Create Isolation Domains for VMs](#step-v1-create-isolation-domain-for-vm-workloads)
+The following sections explain the steps to create VMs for VNF workloads.
-Step-V2: [Create Networks for VM](#step-v2-create-networks-for-vm-workloads)
+### Create isolation domains for VM workloads
-Step-V3: [Create Virtual Machines](#step-v3-create-a-vm)
-
-## Deploy VMs for VNF workloads
-
-This section explains steps to create VMs for VNF workloads
-
-### Step V1: create Isolation domain for VM workloads
-
-Isolation Domains enable creation of layer 2 and layer 3 connectivity between network functions running on Operator Nexus.
-This connectivity enables inter-rack and intra-rack communication between the workloads.
-You can create as many L2 and L3 Isolation Domains as needed.
+Isolation domains enable creation of layer 2 (L2) and layer 3 (L3) connectivity between network functions running on Azure Operator Nexus. This connectivity enables inter-rack and intra-rack communication between the workloads.
+You can create as many L2 and L3 isolation domains as needed.
You should have the following information already: -- VLAN/subnet info for each of the layer 3 network(s)-- Which network(s) would need to talk to each other (remember to put VLANs/subnets that needs to
- talk to each other into the same L3 Isolation Domain)
-- BGP peering and network policies information for your L3 Isolation Domain(s)-- VLANs for all your layer 2 network(s)-- VLANs for all your trunked network(s)-- MTU values for your network.
+- VLAN and subnet info for each L3 network.
+- Which networks need to talk to each other. (Remember to put VLANs and subnets that need to talk to each other into the same L3 isolation domain.)
+- BGP peering and network policy information for your L3 isolation domains.
+- VLANs for all your L2 networks.
+- VLANs for all your trunked networks.
+- MTU values for your networks.
-#### L2 Isolation Domain
+#### L2 isolation domain
-#### L3 Isolation Domain
+#### L3 isolation domain
-### Step V2: create networks for VM workloads
+### Create networks for VM workloads
-This section describes how to create the following networks for VM Workloads:
+The following sections describe how to create these networks for VM workloads:
-- Layer 2 Network-- Layer 3 Network-- Trunked Network-- Cloud services Network
+- Layer 2 network
+- Layer 3 network
+- Trunked network
+- Cloud services network
-#### Create an L2 Network
+#### Create an L2 network
-Create an L2 Network, if necessary, for your VM. You can repeat the instructions for each L2 Network required.
+Create an L2 network, if necessary, for your VM. You can repeat the instructions for each required L2 network.
-Gather the resource ID of the L2 Isolation Domain you [created](#l2-isolation-domain) that configures the VLAN for this network.
+Gather the resource ID of the L2 isolation domain that you [created](#l2-isolation-domain) to configure the VLAN for this network.
-Example CLI command:
+Here's an example Azure CLI command:
```azurecli az networkcloud l2network create --name "<YourL2NetworkName>" \
Example CLI command:
--l2-isolation-domain-id "<YourL2IsolationDomainId>" ```
-#### Create an L3 Network
+#### Create an L3 network
-Create an L3 Network, if necessary, for your VM. Repeat the instructions for each L3 Network required.
+Create an L3 network, if necessary, for your VM. Repeat the instructions for each required L3 network.
You need: -- resource ID of the L3 Isolation Domain you [created](#l3-isolation-domain) that configures the VLAN for this network.-- The ipv4-connected-prefix must match the i-pv4-connected-prefix that is in the L3 Isolation Domain-- The ipv6-connected-prefix must match the i-pv6-connected-prefix that is in the L3 Isolation Domain-- The ip-allocation-type can be either "IPv4", "IPv6", or "DualStack" (default)-- The VLAN value must match what is in the L3 Isolation Domain-
-<! The MTU wasn't specified during l2 Isolation domain creation so what is "same"
-- The MTU of the network doesn't need to be specified here, but the network will be configured with the MTU information >
+- The `resourceID` value of the L3 isolation domain that you [created](#l3-isolation-domain) to configure the VLAN for this network.
+- The `ipv4-connected-prefix` value, which must match the `i-pv4-connected-prefix` value that's in the L3 isolation domain.
+- The `ipv6-connected-prefix` value, which must match the `i-pv6-connected-prefix` value that's in the L3 isolation domain.
+- The `ip-allocation-type` value, which can be `IPv4`, `IPv6`, or `DualStack` (default).
+- The `vlan` value, which must match what's in the L3 isolation domain.
```azurecli az networkcloud l3network create --name "<YourL3NetworkName>" \
You need:
--vlan <YourNetworkVlan> ```
-#### Create a Trunked Network
+#### Create a trunked network
-Create a Trunked Network, if necessary, for your VM. Repeat the instructions for each Trunked Network required.
+Create a trunked network, if necessary, for your VM. Repeat the instructions for each required trunked network.
-Gather the resourceId(s) of the L2 and L3 Isolation Domains you created earlier to configure the VLAN(s) for this network.
-You can include as many L2 and L3 Isolation Domains as needed.
+Gather the `resourceId` values of the L2 and L3 isolation domains that you created earlier to configure the VLANs for this network. You can include as many L2 and L3 isolation domains as needed.
```azurecli az networkcloud trunkednetwork create --name "<YourTrunkedNetworkName>" \
You can include as many L2 and L3 Isolation Domains as needed.
--vlans <YourVlanList> ```
-### Create Cloud Services Network
+#### Create a cloud services network
-Your VM requires at least one Cloud Services Network. You need the egress endpoints you want to add to the proxy for your VM to access.
+Your VM requires at least one cloud services network. You need the egress endpoints that you want to add to the proxy for your VM to access.
```azurecli az networkcloud cloudservicesnetwork create --name "<YourCloudServicesNetworkName>" \
Your VM requires at least one Cloud Services Network. You need the egress endpoi
--additional-egress-endpoints "[{\"category\":\"<YourCategory >\",\"endpoints\":[{\"<domainName1 >\":\"< endpoint1 >\",\"port\":<portnumber1 >}]}]" ```
-### Step V3: create a VM
+### Create a VM
+
+Azure Operator Nexus VMs are used for hosting VNFs within a telco network.
+The Azure Operator Nexus platform provides `az networkcloud virtualmachine create` to create a customized VM.
-Operator Nexus Virtual Machines (VMs) is used for hosting VNF(s) within a Telco network.
-The Nexus platform provides `az networkcloud virtualmachine create` to create a customized VM.
-For hosting a VNF on your VM, have it [Microsoft Azure Arc-enrolled](/azure/azure-arc/servers/overview),
-and provide a way to ssh to it via Azure CLI.
+To host a VNF on your VM, have it [Azure Arc enrolled](/azure/azure-arc/servers/overview), and provide a way to SSH to it via the Azure CLI.
#### Parameters -- The `subscription`, `resource group`, `location`, and `customlocation` of the Operator Nexus Cluster for deployment
- - **SUBSCRIPTION**=
- - **RESOURCE_GROUP**=
- - **LOCATION**=
- - **CUSTOM_LOCATION**=
-- A service principal configured with proper access
- - **SERVICE_PRINCIPAL_ID**=
- - **SERVICE_PRINCIPAL_SECRET**=
-- A tenant ID
- - **TENANT_ID**=
-- For a VM image hosted in a managed ACR, a generated token for access
- - **ACR_URL**=
- - **ACR_USERNAME**=
- - **ACR_TOKEN**=
- - **IMAGE_URL**=
-- SSH public/private keypair
- - **SSH_PURLIC_KEY**=
- - **SSH_PRIVATE_KEY**=
-- Azure CLI and extensions installed and available-- A customized `cloudinit userdata` file (provided)
- - **USERDATA**=
-- The resource ID of the earlier created [Cloud Service Network](#create-cloud-services-network) and [L3 Networks](#create-an-l3-network) to configure VM connectivity-
-#### 1. Update user data file
-
-Update the values listed in the _USERDATA_ file with the proper information
--- service principal ID-- service principal secret-- tenant ID-- location (Azure Region)-- custom location-
-Locate the following line in the _USERDATA_ (toward the end) and update appropriately:
+- The subscription, resource group, location, and custom location of the Azure Operator Nexus Cluster for deployment:
+ - *SUBSCRIPTION*=
+ - *RESOURCE_GROUP*=
+ - *LOCATION*=
+ - *CUSTOM_LOCATION*=
+- A service principal configured with proper access:
+ - *SERVICE_PRINCIPAL_ID*=
+ - *SERVICE_PRINCIPAL_SECRET*=
+- A tenant ID:
+ - *TENANT_ID*=
+- For a VM image hosted in a managed Azure Container Registry instance, a generated token for access:
+ - *ACR_URL*=
+ - *ACR_USERNAME*=
+ - *ACR_TOKEN*=
+ - *IMAGE_URL*=
+- An SSH public/private key pair:
+  - *SSH_PUBLIC_KEY*=
+ - *SSH_PRIVATE_KEY*=
+- The Azure CLI and extensions installed and available
+- A customized `cloudinit userdata` file (provided):
+ - *USERDATA*=
+- The resource ID of the previously created [cloud services network](#create-a-cloud-services-network) and [L3 networks](#create-an-l3-network) to configure VM connectivity
+
+#### Update the user data file
+
+Update the values listed in the _USERDATA_ file with the proper information:
+
+- Service principal ID
+- Service principal secret
+- Tenant ID
+- Location (Azure region)
+- Custom location
+
+Locate the following line in the _USERDATA_ file (toward the end) and update it appropriately:
```azurecli azcmagent connect --service-principal-id _SERVICE_PRINCIPAL_ID_ --service-principal-secret _SERVICE_PRINCIPAL_SECRET_ --tenant-id _TENANT_ID_ --subscription-id _SUBSCRIPTION_ --resource-group _RESOURCE_GROUP_ --location _LOCATION_ ```
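If you prefer to script the substitution instead of editing the file by hand, here's a minimal sketch. It assumes your values are already exported as shell variables with the same names as the placeholders and contain no `/` characters; adjust the `sed` delimiter otherwise.

```bash
# Illustrative only: replace the placeholder tokens in the USERDATA file.
# Assumes SERVICE_PRINCIPAL_ID, SERVICE_PRINCIPAL_SECRET, TENANT_ID,
# SUBSCRIPTION, RESOURCE_GROUP, and LOCATION are set in your shell.
sed -i \
  -e "s/_SERVICE_PRINCIPAL_ID_/${SERVICE_PRINCIPAL_ID}/" \
  -e "s/_SERVICE_PRINCIPAL_SECRET_/${SERVICE_PRINCIPAL_SECRET}/" \
  -e "s/_TENANT_ID_/${TENANT_ID}/" \
  -e "s/_SUBSCRIPTION_/${SUBSCRIPTION}/" \
  -e "s/_RESOURCE_GROUP_/${RESOURCE_GROUP}/" \
  -e "s/_LOCATION_/${LOCATION}/" \
  USERDATA
```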
-Encode the user data
+Encode the user data:
```bash ENCODED_USERDATA=(`base64 -w0 USERDATA`) ```
-#### 2. Create the VM with the encoded data
+#### Create the VM with the encoded data
-Update the VM template with proper information:
+Update the VM template with the proper information:
-- name (_VMNAME_)-- location (_LOCATION_)-- custom location (_CUSTOM_LOCATION_)-- adminUsername (_ADMINUSER_)-- cloudServicesNetworkAttachment-- cpuCores-- memorySizeGB-- networkAttachments (set your L3 Network as default gateway)-- sshPublicKeys (_SSH_PUBLIC_KEY_)-- diskSizeGB-- userData (_ENCODED_USERDATA_)-- vmImageRepositoryCredentials (_ACR_URL_, _ACR_USERNAME_, _ACR_TOKEN_)-- vmImage (_IMAGE_URL_)
+- `name` (_VMNAME_)
+- `location` (_LOCATION_)
+- `custom location` (_CUSTOM_LOCATION_)
+- `adminUsername` (_ADMINUSER_)
+- `cloudServicesNetworkAttachment`
+- `cpuCores`
+- `memorySizeGB`
+- `networkAttachments` (set your L3 network as the default gateway)
+- `sshPublicKeys` (_SSH_PUBLIC_KEY_)
+- `diskSizeGB`
+- `userData` (_ENCODED_USERDATA_)
+- `vmImageRepositoryCredentials` (_ACR_URL_, _ACR_USERNAME_, _ACR_TOKEN_)
+- `vmImage` (_IMAGE_URL_)
-Run this command, update with your resource group and subscription info
--- subscription-- resource group-- deployment name-- layer 3 network template
+Run the following command. Update it with your info for the resource group, subscription, deployment name, and L3 network template.
```azurecli az deployment group create --resource-group _RESOURCE_GROUP_ --subscription=_SUBSCRIPTION_ --name _DEPLOYMENT_NAME_ --template-file _VM_TEMPLATE_ ```
-#### 3. SSH to the VM
+#### SSH to the VM
-It takes a few minutes for the VM to be created and then Arc connected. Should your attempt fail at first, try again after a short wait.
+It takes a few minutes for the VM to be created and then Azure Arc connected. If your attempt fails at first, try again after a short wait.
```azurecli az ssh vm -n _VMNAME_ -g _RESOURCE_GROUP_ --subscription _SUBSCRIPTION_ --private-key _SSH_PRIVATE_KEY_ --local-user _ADMINUSER_ ```
-**Capacity Note:**
-If each server has two CPU chipsets and each CPU chip has 28 cores. Then with hyper-threading enabled (default), the CPU chip supports 56 vCPUs. With 8 vCPUs in each chip reserved for infrastructure (OS, agents), the remaining 48 are available for tenant workloads.
+> [!NOTE]
+> If each server has two CPU chipsets and each CPU chip has 28 cores, then with hyperthreading enabled (default), the CPU chip supports 56 vCPUs. With 8 vCPUs in each chip reserved for infrastructure (OS and agents), the remaining 48 are available for tenant workloads.
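As a quick check of that arithmetic, using the figures from the note:

```bash
# 28 physical cores x 2 hyperthreads = 56 vCPUs per chip; 8 vCPUs reserved for infrastructure.
echo $(( 28 * 2 - 8 ))   # prints 48, the vCPUs available per chip for tenant workloads
```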
Gather this information: -- The `resourceId` of the `cloudservicesnetwork`-- The `resourceId(s)` for each of the L2/L3/Trunked Networks-- Determine which network serves as your default gateway (can only choose 1)-- If you want to specify `networkAttachmentName` (interface name) for any of your networks-- Determine the `ipAllocationMethod` for each of your L3 Network (static/dynamic)-- The dimension of your VM
- - number of cpuCores
- - RAM (memorySizeGB)
- - DiskSize
- - emulatorThread support (if needed)
-- Boot method (UEFI/BIOS)-- vmImage reference and credentials needed to download this image-- sshKey(s)-- placement information-
-The sample command contains the information about the VM requirements covering
-compute/network/storage.
-
-Sample Command:
+- The `resourceId` value of `cloudservicesnetwork`.
+- The `resourceId` value for each of the L2, L3, and trunked networks.
+- The network that serves as your default gateway. (Choose only one.)
+- The `networkAttachmentName` value (interface name) for any of your networks.
+- The `ipAllocationMethod` value for each of your L3 networks (static or dynamic).
+- The dimensions of your VM:
+ - Number of CPU cores (`cpuCores`)
+ - RAM (`memorySizeGB`)
+ - Disk size (`DiskSize`)
+ - Emulator thread (`emulatorThread`) support, if needed
+- The boot method (UEFI or BIOS).
+- The `vmImage` reference and credentials needed to download this image.
+- SSH keys.
+- Placement information.
+
+This sample command contains the information about the VM requirements that cover compute, network, and storage:
```azurecli az networkcloud virtualmachine create --name "<YourVirtualMachineName>" \
az networkcloud virtualmachine create --name "<YourVirtualMachineName>" \
--vm-image-repository-credentials registry-url="<YourAcrUrl>" username="<YourAcrUsername>" password="<YourAcrPassword>" \ ```
-You've created the VMs with your custom image. You're now ready to use for VNFs.
-
-## Section K: how to create AKS-Hybrid cluster for deploying CNF workloads
+You've created the VMs with your custom image. You're now ready to use them for VNFs.
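As a quick check before moving on, you can confirm the VM's provisioning state. This sketch assumes the standard `show` subcommand of the `networkcloud` CLI extension used throughout this guide:

```azurecli
az networkcloud virtualmachine show --name "<YourVirtualMachineName>" \
  --resource-group "<YourResourceGroupName>" \
  --subscription "<YourSubscription>" \
  --query provisioningState -o tsv
```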
-Step-K1: [Create Isolation Domains for AKS-Hybrid cluster](#step-k1-create-isolation-domain-for-aks-hybrid-cluster)
+## Create AKS hybrid clusters for CNF workloads
-Step-K2: [Create Networks for AKS-Hybrid cluster](#step-k2-create-aks-hybrid-networks)
+The following sections explain the steps to create AKS hybrid clusters for CNF workloads.
-Step-K3: [Create AKS-Hybrid cluster](#step-k3-create-an-aks-hybrid-cluster)
+> [!NOTE]
+> The following commands are examples. Don't copy or use them verbatim.
-Step-K4: [Provision Tenant workloads (CNFs)](#step-k4-provision-tenant-workloads-cnfs)
-
-**Commands shown below are examples and should not be copied or used verbatim.**
-
-## Create AKS-Hybrid clusters for CNF workloads
-
-This section explains steps to create AKS-Hybrid clusters for CNF workloads
-
-### Step K1: create Isolation Domain for AKS-Hybrid cluster
+### Create an isolation domain for the AKS hybrid cluster
You should have the following information already: -- VLAN/subnet info for each of the L3 Network(s). List of networks
- that need to talk to each other (remember to put VLAN/subnets that needs to
- talk to each other into the same L3 Isolation Domain)
-- VLAN/subnet info for your `defaultcninetwork` for AKS-Hybrid cluster-- BGP peering and network policies information for your L3 Isolation Domain(s)-- VLANs for all your L2 Network(s)-- VLANs for all your Trunked Network(s)
-<! The MTU isn't being specified and "11/15"?
-- MTU needs to be passed during creation of Isolation Domain, due to a known issue. The issue will be fixed with the 11/15 release. >
+- VLAN and subnet info for each of the L3 networks.
+- List of networks that need to talk to each other. (Remember to put VLANs and subnets that need to talk to each other into the same L3 isolation domain.)
+- VLAN and subnet info for your default CNI network (`defaultcninetwork`) for the AKS hybrid cluster.
+- BGP peering and network policy information for your L3 isolation domains.
+- VLANs for all your L2 networks.
+- VLANs for all your trunked networks.
-#### L2 Isolation Domain
+#### L2 isolation domain
-#### L3 Isolation Domain
+#### L3 isolation domain
-### Step K2: create AKS-Hybrid networks
+### Create AKS hybrid networks
-This section describes how to create networks and vNET(s) for your AKD-Hybrid Cluster.
+The following sections describe how to create networks and virtual networks for your AKS hybrid cluster.
-#### Step K2a create tenant networks for AKS-Hybrid cluster
+#### Create tenant networks for an AKS hybrid cluster
-This section describes how to create the following networks:
+The following sections describe how to create these networks:
-- Layer 2 Network-- Layer 3 Network-- Trunked Network-- Default CNI Network-- Cloud Services Network
+- Layer 2 network
+- Layer 3 network
+- Trunked network
+- Default CNI network
+- Cloud services network
-At a minimum, you need to create a "Default CNI Network" and a "Cloud Services Network".
+At a minimum, you need to create a default CNI network and a cloud services network.
-##### Create an L2 Network for AKS-Hybrid cluster
+##### Create an L2 network for an AKS hybrid cluster
-You need the resourceId of the [L2 Isolation Domain](#l2-isolation-domain-1) you created earlier that configures the VLAN for this network.
+You need the `resourceId` value of the [L2 isolation domain](#l2-isolation-domain-1) that you created earlier to configure the VLAN for this network.
-For your network, the valid values for
-`hybrid-aks-plugin-type` are `OSDevice`, `SR-IOV`, `DPDK`; the default value is `SR-IOV`.
+For your network, the valid values for `hybrid-aks-plugin-type` are `OSDevice`, `SR-IOV`, and `DPDK`. The default value is `SR-IOV`.
```azurecli az networkcloud l2network create --name "<YourL2NetworkName>" \
For your network, the valid values for
--hybrid-aks-plugin-type "<YourHaksPluginType>" ```
-##### Create an L3 Network for AKS-Hybrid cluster
+##### Create an L3 network for an AKS hybrid cluster
You need the following information: -- The `resourceId` of the [L3 Isolation Domain](#l3-isolation-domain) domain you created earlier that configures the VLAN for this network.-- The `ipv4-connected-prefix` must match the i-pv4-connected-prefix that is in the L3 Isolation Domain-- The `ipv6-connected-prefix` must match the i-pv6-connected-prefix that is in the L3 Isolation Domain-- The `ip-allocation-type` can be either "IPv4", "IPv6", or "DualStack" (default)-- The VLAN value must match what is in the L3 Isolation Domain
-<! The MTU wasn't specified during l2 Isolation domain creation so what is "same"
-- The MTU of the network doesn't need to be specified here as the network will be configured with the MTU specified during Isolation Domain creation >
+- The `resourceId` value of the [L3 isolation domain](#l3-isolation-domain) that you created earlier to configure the VLAN for this network
+- The `ipv4-connected-prefix` value, which must match the `i-pv4-connected-prefix` value that's in the L3 isolation domain
+- The `ipv6-connected-prefix` value, which must match the `i-pv6-connected-prefix` value that's in the L3 isolation domain
+- The `ip-allocation-type` value, which can be `IPv4`, `IPv6`, or `DualStack` (default)
+- The VLAN value, which must match what's in the L3 isolation domain
-You also need to configure the following information for your AKS-Hybrid cluster
+You also need to configure the following information for your AKS hybrid cluster:
-- hybrid-aks-ipam-enabled: If you want IPAM enabled for this network within your AKS-Hybrid cluster. Default: True-- hybrid-aks-plugin-type: valid values are `OSDevice`, `SR-IOV`, `DPDK`. Default: `SR-IOV`
+- `hybrid-aks-ipam-enabled`, if you want IPAM enabled for this network within your AKS hybrid cluster. The default value is `True`.
+- `hybrid-aks-plugin-type`. Valid values are `OSDevice`, `SR-IOV`, and `DPDK`. The default value is `SR-IOV`.
```azurecli az networkcloud l3network create --name "<YourL3NetworkName>" \
You also need to configure the following information for your AKS-Hybrid cluster
--hybrid-aks-plugin-type "<YourHaksPluginType>" ```
-##### Create a Trunked Network for AKS-Hybrid cluster
+##### Create a trunked network for an AKS hybrid cluster
-Gather the resourceId(s) of the L2 and L3 Isolation Domains you created earlier that configured the VLAN(s) for this network. You can include as many L2 and L3 Isolation Domains as needed.
+Gather the `resourceId` values of the L2 and L3 isolation domains that you created earlier to configure the VLANs for this network. You can include as many L2 and L3 isolation domains as needed.
-You also need to configure the following information for your network
--- hybrid-aks-plugin-type: valid values are `OSDevice`, `SR-IOV`, `DPDK`. Default: `SR-IOV`
+You also need to configure the following information for your network. Valid values for `hybrid-aks-plugin-type` are `OSDevice`, `SR-IOV`, and `DPDK`. The default value is `SR-IOV`.
```azurecli az networkcloud trunkednetwork create --name "<YourTrunkedNetworkName>" \
You also need to configure the following information for your network
--hybrid-aks-plugin-type "<YourHaksPluginType>" ```
-##### Create default CNI Network for AKS-Hybrid cluster
+##### Create a default CNI network for an AKS hybrid cluster
You need the following information: -- `resourceId` of the L3 Isolation Domain you created earlier that configures the VLAN for this network.-- The ipv4-connected-prefix must match the i-pv4-connected-prefix that is in the L3 Isolation Domain-- The ipv6-connected-prefix must match the i-pv6-connected-prefix that is in the L3 Isolation Domain-- The ip-allocation-type can be either "IPv4", "IPv6", or "DualStack" (default)-- The VLAN value must match what is in the L3 Isolation Domain-- You don't need to specify the network MTU here, as the network will be configured with the same MTU information as used previously
+- The `resourceId` value of the L3 isolation domain that you created earlier to configure the VLAN for this network
+- The `ipv4-connected-prefix` value, which must match the `i-pv4-connected-prefix` value that's in the L3 isolation domain
+- The `ipv6-connected-prefix` value, which must match the `i-pv6-connected-prefix` value that's in the L3 isolation domain
+- The `ip-allocation-type` value, which can be `IPv4`, `IPv6`, or `DualStack` (default)
+- The `vlan` value, which must match what's in the L3 isolation domain
+
+You don't need to specify the network MTU here, because the network will be configured with the same MTU information that you used previously.
```azurecli az networkcloud defaultcninetwork create --name "<YourDefaultCniNetworkName>" \
You need the following information:
--service-load-balancer-prefixes '["YourLBPrefixes-1", "YourLBPrefixes-N"]' ```
-##### Create Cloud Services Network for AKS-Hybrid cluster
+##### Create a cloud services network for an AKS hybrid cluster
-You need the egress endpoints you want to add to the proxy for your VM to access.
+You need the egress endpoints that you want to add to the proxy for your VM to access.
```azurecli az networkcloud cloudservicesnetwork create --name "<YourCloudServicesNetworkName>" \
You need the egress endpoints you want to add to the proxy for your VM to access
--additional-egress-endpoints "[{\"category\":\"< YourCategory >\",\"endpoints\":[{\"< domainName1 >\":\"< endpoint1 >\",\"port\":< portnumber1 >}]}]" ```
-#### Step K2b. Create vNET for the tenant networks of AKS-Hybrid cluster
-
-For each previously created tenant network, a corresponding AKS-Hybrid vNET network needs to be created
+#### Create a virtual network for the tenant networks of an AKS hybrid cluster
-You need the Azure Resource Manager resource ID for each of the networks you created earlier. You can retrieve the Azure Resource Manager resource IDs as follows:
+For each previously created tenant network, you need to create a corresponding AKS hybrid virtual network. You can retrieve the Azure Resource Manager resource IDs as follows:
```azurecli az networkcloud cloudservicesnetwork show -g "<YourResourceGroupName>" -n "<YourCloudServicesNetworkName>" --subscription "<YourSubscription>" -o tsv --query id
az networkcloud l3network show -g "<YourResourceGroupName>" -n "<YourL3NetworkNa
az networkcloud trunkednetwork show -g "<YourResourceGroupName>" -n "<YourTrunkedNetworkName>" --subscription "<YourSubscription>" -o tsv --query id ```
-##### To create vNET for each tenant network
+To create a virtual network for each tenant network, use the following command:
```azurecli az hybridaks vnet create \
az hybridaks vnet create \
--aods-vnet-id "<ARM resource ID>" ```
-### Step K3: Create an AKS-Hybrid cluster
+### Create an AKS hybrid cluster
-This section describes how to create an AKS-Hybrid cluster
+To create an AKS hybrid cluster, use the following command:
```azurecli az hybridaks create \
This section describes how to create an AKS-Hybrid cluster
--control-plane-count <count> \ --location <dc-location> \ --node-count <worker node count> \
- --node-vm-size <Operator Nexus SKU> \
+ --node-vm-size <Azure Operator Nexus SKU> \
--zones <comma separated list of availability zones> ```
-After a few minutes, the command completes and returns JSON-formatted information about the cluster.
+After a few minutes, the command finishes and returns JSON-formatted information about the cluster.
-#### Connect to the AKS-Hybrid cluster
+#### Connect to the AKS hybrid cluster
-Now that you've created the cluster, connect to your AKS-Hybrid cluster by running the
-`az hybridaks proxy` command from your local machine. Make sure to sign-in to Azure before
-running this command. If you have multiple Azure subscriptions, select the appropriate
-subscription ID using the `az account set` command.
+Now that you've created the AKS hybrid cluster, connect to it by running the `az hybridaks proxy` command from your local machine. Be sure to sign in to Azure before you run this command. If you have multiple Azure subscriptions, select the appropriate subscription ID by using the `az account set` command.
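For example (the subscription name is a placeholder):

```azurecli
az account set --subscription "<YourSubscription>"
```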
-This command downloads the `kubeconfig` of your AKS-Hybrid cluster to your local machine
-and opens a proxy connection channel to your on-premises AKS-Hybrid cluster.
-The channel is open for as long as this command is running. Let this command run for
-as long as you want to access your cluster. If this command times out, close the CLI
-window, open a fresh one and run the command again.
+The `az hybridaks proxy` command downloads the `kubeconfig` value of your AKS hybrid cluster to your local machine and opens a proxy connection channel to your on-premises AKS hybrid cluster. The channel is open for as long as this command is running. Let this command run for as long as you want to access your cluster. If this command times out, close the CLI window, open a fresh one, and run the command again.
```azurecli az hybridaks proxy --name <aks-hybrid cluster name> --resource-group <Azure resource group> --file .\aks-hybrid-kube-config ```
-Expected output:
+Here's the expected output:
```output Proxy is listening on port 47011
Start sending kubectl requests on 'aks-workload' context using kubeconfig at .\a
Press CTRL+C to close proxy. ```
-Keep this session running and connect to your AKS-Hybrid cluster from a
-different terminal/command prompt. Verify that you can connect to your
-AKS-Hybrid cluster by running the kubectl get command. This command
+Keep this session running and connect to your AKS hybrid cluster from a
+different terminal or command prompt. Verify that you can connect to your
+AKS hybrid cluster by running the `kubectl get` command. This command
returns a list of the cluster nodes. ```azurecli kubectl get nodes -A --kubeconfig .\aks-hybrid-kube-config ```
-### Step K4: provision tenant workloads (CNFs)
+### Provision tenant workloads (CNFs)
-You can now deploy the CNFs either directly via Operator Nexus APIs or via Azure Network Function Manager.
+You can now deploy the CNFs either directly via Azure Operator Nexus APIs or via Azure Network Function Manager.
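For example, a CNF packaged as plain Kubernetes manifests could be applied over the proxied connection from the previous step. This is only a sketch; the manifest name is a placeholder, and Helm-packaged CNFs would use `helm install` with the same kubeconfig instead.

```bash
kubectl apply -f <YourCnfManifest>.yaml --kubeconfig .\aks-hybrid-kube-config
```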
operator-nexus Quickstarts Tenant Workload Prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/quickstarts-tenant-workload-prerequisites.md
Title: How to deploy tenant workloads prerequisites
-description: Learn the prerequisites for creating VMs for VNF workloads and for creating AKS-Hybrid clusters for CNF workloads
---- Previously updated : 01/25/2023 #Required; mm/dd/yyyy format.-
+ Title: Prerequisites for deploying tenant workloads
+description: Learn the prerequisites for creating VMs for VNF workloads and for creating AKS hybrid clusters for CNF workloads.
++++ Last updated : 01/25/2023+
-# Tenant workloads deployment prerequisites
+# Prerequisites for deploying tenant workloads
-<! IMG ![Tenant Workload Deployment Flow](Docs//media/tenant-workload-deployment-flow.png) IMG >
+This guide explains prerequisites for creating:
-Figure: Tenant Workload Deployment Flow
+- Virtual machines (VMs) for virtual network function (VNF) workloads.
+- Azure Kubernetes Service (AKS) hybrid deployments for cloud-native network function (CNF) workloads.
-This guide explains prerequisites for creating VMs for VNF workloads and AKS-Hybrid for CNF workloads.
## Preparation
-You need to create various networks based on your workload needs. The following are some
-recommended questions to consider, but this list is by no means exhaustive. Consult with
-the appropriate support team(s) for help:
--- What type of network(s) would you need to support your workload?
- - A layer 3 network requires a VLAN and subnet assignment
- - Subnet must be large enough to support IP assignment to each of the VM
- - Note the platform reserves the first three usable IP addresses for internal use.
- For instance, to support 6 VMs, then the minimum CIDR for
- your subnet is /28 (14 usable address ΓÇô 3 reserved == 11 addresses available)
- - A layer 2 network requires only a single VLAN assignment
- - A trunked network requires the assignment of multiple VLANs
- - Determine how many networks of each type you need
- - Determine the MTU size of each of your networks (maximum is 9000)
- - Determine the BGP peering info for each network, and whether they need to talk to
- each other. You should group networks that need to talk to each other into the same L3
- isolation-domain, as each L3 isolation-domain can support multiple layer 3 networks.
- - Platform provides a proxy to allow your VM to reach other external endpoints.
- Creating a `cloudservicesnetwork` requires the endpoints to be proxied. So gather the list of endpoints.
- You can modify the list of endpoints after the network creation.
- - For AKS-Hybrid cluster, you need to create a `defaultcninetwork` to support your
- cluster CNI networking needs. You need another VLAN/subnet
- assignment for the `defaultcninetwork` similar to a layer 3 network.
-
-You need:
--- your Azure account and the subscription ID of Operator Nexus cluster deployment-- the `custom location` resource ID of your Operator Nexus cluster-
-## AKS-Hybrid availability zone
-`--zones` option in `az hybridaks create` or `az hybridaks nodepool add` can be used to distribute the AKS-Hybrid clusters across different zones for better fault tolerance and performance. When creating an AKS-Hybrid cluster, you can use the `--zones` option to schedule the cluster onto specific racks or distribute it evenly across multiple racks, improving resource utilization and fault tolerance.
-
-If you do not specify a zone when creating an AKS-Hybrid cluster through the `--zones` option, the Operator Nexus platform automatically implements a default anti-affinity rule. This anti-affinity rule aims to prevent scheduling the cluster VM on a node that already has a VM from the same cluster, but it's a best-effort approach and can't guarantee it.
-
-To obtain the list of available zones in the given Operator Nexus instance, you can use the following command.
+You need to create various networks based on your workload needs. The following list of considerations isn't exhaustive. Consult with the appropriate support teams for help.
+
+- Determine the types of networks that you need to support your workloads:
+ - A layer 3 (L3) network requires a VLAN and subnet assignment. The subnet must be large enough to support IP assignment to each of the VMs.
+
+    The platform reserves the first three usable IP addresses for internal use. For instance, to support six VMs, the minimum CIDR for your subnet is /28 (14 usable addresses - 3 reserved = 11 addresses available). See the short arithmetic check after this list.
+ - A layer 2 (L2) network requires only a single VLAN assignment.
+ - A trunked network requires the assignment of multiple VLANs.
+- Determine how many networks of each type you need.
+- Determine the MTU size of each of your networks (maximum is 9,000).
+- Determine the BGP peering info for each network, and whether the networks need to talk to each other. You should group networks that need to talk to each other into the same L3 isolation domain, because each L3 isolation domain can support multiple L3 networks.
+- The platform provides a proxy to allow your VM to reach other external endpoints. Creating a `cloudservicesnetwork` instance requires the endpoints to be proxied, so gather the list of endpoints.
+
+ You can modify the list of endpoints after the network creation.
+- For an AKS hybrid cluster, you need to create a `defaultcninetwork` instance to support your cluster CNI networking needs. You need another VLAN and subnet assignment for `defaultcninetwork`, similar to an L3 network.
+
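Here's the short arithmetic check referenced in the list above, for the /28 example:

```bash
# Usable hosts in a /28: 2^(32-28) addresses, minus network and broadcast, minus 3 platform-reserved.
echo $(( 2**(32-28) - 2 - 3 ))   # prints 11
```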
+You also need:
+
+- Your Azure account and the subscription ID of the Azure Operator Nexus cluster deployment.
+- The `custom location` resource ID of your Azure Operator Nexus cluster.
+
+## Specify the AKS hybrid availability zone
+
+When you're creating an AKS hybrid cluster, you can use the `--zones` option in `az hybridaks create` or `az hybridaks nodepool add` to schedule the cluster onto specific racks or distribute it evenly across multiple racks. This technique can improve resource utilization and fault tolerance.
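Following this guide's convention of partial examples (not all required parameters are shown), scheduling a cluster onto specific zones might look like this:

```azurecli
az hybridaks create \
  --name "<YourAksHybridClusterName>" \
  --resource-group "<YourResourceGroupName>" \
  --zones <comma separated list of availability zones>
```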
+
+If you don't specify a zone when you're creating an AKS hybrid cluster through the `--zones` option, the Azure Operator Nexus platform automatically implements a default anti-affinity rule. This rule aims to prevent scheduling the cluster VM on a node that already has a VM from the same cluster, but it's a best-effort approach and can't make guarantees.
+
+To get the list of available zones in the Azure Operator Nexus instance, you can use the following command:
```azurecli az networkcloud cluster show \
- --resource-group <Operator Nexus on-prem cluster Resource Group> \
- --name <Operator Nexus on-prem cluster name> \
+ --resource-group <Azure Operator Nexus on-premises cluster resource group> \
+ --name <Azure Operator Nexus on-premises cluster name> \
--query computeRackDefinitions[*].availabilityZone ```
-### Review Azure container registry
+### Review Azure Container Registry
[Azure Container Registry](../container-registry/container-registry-intro.md) is a managed registry service to store and manage your container images and related artifacts.
-The document provides details on how to create and maintain the Azure Container Registry operations such as [Push/Pull an image](../container-registry/container-registry-get-started-docker-cli.md?tabs=azure-cli), [Push/Pull a Helm chart](../container-registry/container-registry-helm-repos.md), etc., security and monitoring.
-For more details, also see [Azure Container Registry](../container-registry/index.yml).
-## Install CLI extensions
+The linked articles provide details on Azure Container Registry operations such as [push/pull an image](../container-registry/container-registry-get-started-docker-cli.md?tabs=azure-cli) and [push/pull a Helm chart](../container-registry/container-registry-helm-repos.md), along with security and monitoring. For more information, see the [Azure Container Registry documentation](../container-registry/index.yml).
-Install latest version of the
-[necessary CLI extensions](./howto-install-cli-extensions.md).
+## Install Azure CLI extensions
-## Operator Nexus workload images
+Install the latest version of the
+[necessary Azure CLI extensions](./howto-install-cli-extensions.md).
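For example, the commands in this article come from CLI extensions such as the following. The extension names here are assumptions based on the command groups used in this guide; the linked article has the authoritative list and versions.

```azurecli
az extension add --name networkcloud
az extension add --name hybridaks
```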
-Make sure that each image, used for creating your workload VMs, is a
-containerized image in either `qcow2` or `raw` disk format. Upload these images to an Azure Container
-Registry. If your Azure Container Registry is password protected, you can supply this info when creating your VM.
-Refer to [Operator Nexus VM disk image build procedure](#operator-nexus-vm-disk-image-build-procedure) for an example for pulling from an anonymous Azure Container Registry.
+## Upload Azure Operator Nexus workload images
-### Operator Nexus VM disk image build procedure
+Make sure that each image that you use to create your workload VMs is a
+containerized image in either `qcow2` or `raw` disk format. Upload these images to Azure Container Registry. If your Azure Container Registry instance is password protected, you can supply this info when creating your VM.
-This build procedure is a paper-exercise example of an anonymous pull of an image from Azure Container Registry.
-It assumes that you already have an existing VM instance image in `qcow2` format and that the image can boot with cloud-init. The procedure requires a working docker build and runtime environment.
+The following build procedure is an example of how to pull an image from an anonymous Azure Container Registry instance. It assumes that you already have an existing VM instance image in `qcow2` format and that the image can boot with cloud-init. The procedure requires a working Docker build and runtime environment.
-Create a dockerfile that copies the `qcow2` image file into the container's /disk directory. Place in an expected directory with correct permissions.
-For example, a Dockerfile named `aods-vm-img-dockerfile`:
+Create a Dockerfile that copies the `qcow2` image file into the container's `/disk` directory. Place it in an expected directory with correct permissions. For example, for a Dockerfile named `aods-vm-img-dockerfile`:
```bash FROM scratch ADD --chown=107:107 your-favorite-image.qcow2 /disk/ ```
-Using the docker command, build the image and tag to a Docker registry (such as Azure Container Registry) that you can push to. Note the build can take a while depending on how large the `qcow2` file is.
-The docker command assumes the `qcow2` file is in the same directory as your Dockerfile.
+By using the `docker` command, build the image and tag it for a Docker registry (such as Azure Container Registry) that you can push to. The build can take a while, depending on how large the `qcow2` file is. The `docker` command assumes that the `qcow2` file is in the same directory as your Dockerfile.
```bash docker build -f aods-vm-img-dockerfile -t devtestacr.azurecr.io/your-favorite-image:v1 .
The docker command assumes the `qcow2` file is in the same directory as your Doc
ADD --chown=107:107 your-favorite-image.qcow2 /disk/ ```
-Sign in to the Azure Container Registry if needed and push. Given the size of the docker image this push too can take a while.
+Sign in to Azure Container Registry if needed and push. Depending on the size of the Docker image, this push can also take a while.
```azurecli az acr login -n devtestacr ```
-The push refers to repository [devtestacr.azurecr.io/your-favorite-image]
+The push refers to repository `devtestacr.azurecr.io/your-favorite-image`:
```bash docker push devtestacr.azurecr.io/your-favorite-image:v1 ```
-### Create VM using image
+## Create a VM by using an image
-You can now use this image when creating Operator Nexus virtual machines.
+You can now use your image when you're creating Azure Operator Nexus virtual machines:
```azurecli az networkcloud virtualmachine create --name "<YourVirtualMachineName>" \
az networkcloud virtualmachine create --name "<YourVirtualMachineName>" \
--vm-image-repository-credentials registry-url="<YourAcrUrl>" username="<YourAcrUsername>" password="<YourAcrPassword>" \ ```
-This VM image build procedure is derived from [kubevirt](https://kubevirt.io/user-guide/virtual_machines/disks_and_volumes/#containerdisk-workflow-example).
+This VM image build procedure is derived from [KubeVirt](https://kubevirt.io/user-guide/virtual_machines/disks_and_volumes/#containerdisk-workflow-example).
## Miscellaneous prerequisites To deploy your workloads, you need: -- to create resource group or find a resource group to use for your workloads-- the network fabric resource ID to create isolation-domains.
+- To create a resource group or find a resource group to use for your workloads.
+- The network fabric resource ID to create isolation domains.
payment-hsm Create Different Ip Addresses https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/payment-hsm/create-different-ip-addresses.md
ms.devlang: azurecli+ Last updated 09/12/2022
More resources:
- Find out how to [get started with Azure Payment HSM](getting-started.md) - See some common [deployment scenarios](deployment-scenarios.md) - Learn about [Certification and compliance](certification-compliance.md)-- Read the [frequently asked questions](faq.yml)
+- Read the [frequently asked questions](faq.yml)
payment-hsm Create Different Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/payment-hsm/create-different-vnet.md
ms.devlang: azurecli+ Last updated 09/12/2022
More resources:
- Find out how to [get started with Azure Payment HSM](getting-started.md) - See some common [deployment scenarios](deployment-scenarios.md) - Learn about [Certification and compliance](certification-compliance.md)-- Read the [frequently asked questions](faq.yml)
+- Read the [frequently asked questions](faq.yml)
payment-hsm Quickstart Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/payment-hsm/quickstart-template.md
Last updated 09/22/2022
tags: azure-resource-manager-+ #Customer intent: As a security admin who is new to Azure, I want to create a payment HSM using an Azure Resource Manager template.
postgresql Concepts Query Performance Insight https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-query-performance-insight.md
Query Performance Insight provides intelligent query analysis for Azure Postgres
>[!div class="checklist"] > * Identify your long-running queries and how they change over time. > * Determine the wait types affecting those queries.
-> * Details on top database queries by Calls (execution count ), by data-usage, by IOPS and by Temporary file usage (potential tuning candidates for performance improvements).
+> * Details on top database queries by Calls (execution count), by data-usage, by IOPS and by Temporary file usage (potential tuning candidates for performance improvements).
> * The ability to drill down into details of a query, to view the Query ID and history of resource utilization. > * Deeper insight into overall databases resource consumption.
postgresql Connect Azure Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/connect-azure-cli.md
-+ ms.tool: azure-cli Last updated 11/30/2021
postgresql How To Autovacuum Tuning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-autovacuum-tuning.md
This article provides an overview of the autovacuum feature for [Azure Database
Internal data consistency in PostgreSQL is based on the Multi-Version Concurrency Control (MVCC) mechanism, which allows the database engine to maintain multiple versions of a row and provides greater concurrency with minimal blocking between the different processes.
-PostgreSQL databases need appropriate maintenance. For example, when a row is deleted, it is not removed physically. Instead, the row is marked as ΓÇ£deadΓÇ¥. Similarly for updates, the row is marked as "dead" and a new version of the row is inserted. These operations leave behind dead records, called dead tuples, even after all the transactions that might see those versions finish. Unless cleaned up, dead tuples remain, consuming disk space and bloating tables and indexes which result in slow query performance.
+PostgreSQL databases need appropriate maintenance. For example, when a row is deleted, it isn't removed physically. Instead, the row is marked as "dead". Similarly for updates, the row is marked as "dead" and a new version of the row is inserted. These operations leave behind dead records, called dead tuples, even after all the transactions that might see those versions finish. Unless cleaned up, dead tuples remain, consuming disk space and bloating tables and indexes, which results in slow query performance.
PostgreSQL uses a process called autovacuum to automatically clean up dead tuples.
PostgreSQL uses a process called autovacuum to automatically clean up dead tuple
Autovacuum reads pages looking for dead tuples, and if none are found, autovacuum discards the page. When autovacuum finds dead tuples, it removes them. The cost is based on: -- `vacuum_cost_page_hit`: Cost of reading a page that is already in shared buffers and does not need a disk read. The default value is set to 1.-- `vacuum_cost_page_miss`: Cost of fetching a page that is not in shared buffers. The default value is set to 10.
+- `vacuum_cost_page_hit`: Cost of reading a page that is already in shared buffers and doesn't need a disk read. The default value is set to 1.
+- `vacuum_cost_page_miss`: Cost of fetching a page that isn't in shared buffers. The default value is set to 10.
- `vacuum_cost_page_dirty`: Cost of writing to a page when dead tuples are found in it. The default value is set to 20. The amount of work autovacuum does depends on two parameters: -- `autovacuum_vacuum_cost_limit` is the amount of work autovacuum does in one go and once the cleanup process is done, the amount of time autovacuum is asleep. -- `autovacuum_vacuum_cost_delay` number of milliseconds.
+- `autovacuum_vacuum_cost_limit` is the amount of work autovacuum does in one go.
+- `autovacuum_vacuum_cost_delay` is the number of milliseconds that autovacuum sleeps after it has reached the cost limit specified by the `autovacuum_vacuum_cost_limit` parameter.
In Postgres versions 9.6, 10 and 11 the default for `autovacuum_vacuum_cost_limit` is 200 and `autovacuum_vacuum_cost_delay` is 20 milliseconds.
Use the following query to list the tables in a database and identify the tables
``` > [!NOTE]
-> The query does not take into consideration that autovacuum can be configured on a per-table basis using the "alter table" DDL command. 
+> The query doesn't take into consideration that autovacuum can be configured on a per-table basis using the "alter table" DDL command. 
## Common autovacuum problems
By default, `autovacuum_vacuum_cost_limit` is set to –1, meaning autovacuum
If `autovacuum_vacuum_cost_limit` is set to `-1`, then autovacuum uses the `vacuum_cost_limit` parameter, but if `autovacuum_vacuum_cost_limit` itself is set to a value greater than `-1`, then the `autovacuum_vacuum_cost_limit` parameter is considered.
-In case the autovacuum is not keeping up, the following parameters may be changed:
+If autovacuum isn't keeping up, the following parameters may be changed:
|Parameter |Description | |||
In case the autovacuum is not keeping up, the following parameters may be change
|`autovacuum_vacuum_cost_delay` | **Postgres Versions 9.6,10,11** - Default: `20 ms`. The parameter may be decreased to `2-10 ms`. </br> **Postgres Versions 12 and above** - Default: `2 ms`. | > [!NOTE]
-> The `autovacuum_vacuum_cost_limit` value is distributed proportionally among the running autovacuum workers, so that if there is more than one, the sum of the limits for each worker does not exceed the value of the `autovacuum_vacuum_cost_limit` parameter
+> The `autovacuum_vacuum_cost_limit` value is distributed proportionally among the running autovacuum workers, so that if there is more than one, the sum of the limits for each worker doesn't exceed the value of the `autovacuum_vacuum_cost_limit` parameter.
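As an illustrative example (the server and resource group names are placeholders, and the values are only a starting point), you can change these server parameters on a flexible server with the Azure CLI:

```azurecli
az postgres flexible-server parameter set --resource-group "<YourResourceGroup>" \
  --server-name "<YourServerName>" --name autovacuum_vacuum_cost_limit --value 400
az postgres flexible-server parameter set --resource-group "<YourResourceGroup>" \
  --server-name "<YourServerName>" --name autovacuum_vacuum_cost_delay --value 2
```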
### Autovacuum constantly running
Autovacuum tries to start a worker on each database everyΓÇ»`autovacuum_naptime`
For example, if a server has 60 databases and `autovacuum_naptime` is set to 60 seconds, then the autovacuum worker starts every second [autovacuum_naptime/Number of DBs].
-It is a good idea to increase `autovacuum_naptime` if there are more databases in a cluster. At the same time, the autovacuum process can be made more aggressive by increasing the `autovacuum_cost_limit` and decreasing the `autovacuum_cost_delay` parameters and increasing the `autovacuum_max_workers` from the default of 3 to 4 or 5.
+It's a good idea to increase `autovacuum_naptime` if there are more databases in a cluster. At the same time, the autovacuum process can be made more aggressive by increasing the `autovacuum_cost_limit` and decreasing the `autovacuum_cost_delay` parameters and increasing the `autovacuum_max_workers` from the default of 3 to 4 or 5.
### Out of memory errors
-Overly aggressive `maintenance_work_mem` values could periodically cause out-of-memory errors in the system. It is important to understand available RAM on the server before any change to the `maintenance_work_mem` parameter is made.
+Overly aggressive `maintenance_work_mem` values could periodically cause out-of-memory errors in the system. It's important to understand available RAM on the server before any change to the `maintenance_work_mem` parameter is made.
### Autovacuum is too disruptive
Evaluate the parameters `autovacuum_vacuum_cost_delay`, `autovacuum_vacuum_cost_
If autovacuum is too disruptive, consider the following: - Increase `autovacuum_vacuum_cost_delay` and reduce `autovacuum_vacuum_cost_limit` if set higher than the default of 200. -- Reduce the number of `autovacuum_max_workers` if it is set higher than the default of 3.
+- Reduce the number of `autovacuum_max_workers` if it's set higher than the default of 3.
#### Too many autovacuum workers
-Increasing the number of autovacuum workers will not necessarily increase the speed of vacuum. Having a high number of autovacuum workers is not recommended.
+Increasing the number of autovacuum workers will not necessarily increase the speed of vacuum. Having a high number of autovacuum workers isn't recommended.
Increasing the number of autovacuum workers will result in more memory consumption, and depending on the value of `maintenance_work_mem` , could cause performance degradation.
However, if we have changed table level `autovacuum_vacuum_cost_delay` or 
When a database runs into transaction ID wraparound protection, an error message like the following can be observed: ```
-Database is not accepting commands to avoid wraparound data loss in database ΓÇÿxxΓÇÖ
+Database is not accepting commands to avoid wraparound data loss in database 'xx'
Stop the postmaster and vacuum that database in single-user mode. ```
The wraparound problem occurs when the database is either not vacuumed or there
#### Heavy workload
-The workload could cause too many dead tuples in a brief period that makes it difficult for autovacuum to catch up. The dead tuples in the system add up over a period leading to degradation of query performance and leading to wraparound situation. One reason for this situation to arise might be because autovacuum parameters aren't adequately set and it is not keeping up with a busy server.
+The workload could cause too many dead tuples in a brief period, making it difficult for autovacuum to catch up. The dead tuples in the system add up over time, leading to degraded query performance and eventually to a wraparound situation. One reason for this situation might be that autovacuum parameters aren't set adequately, so autovacuum can't keep up with a busy server.
#### Long-running transactions
When the database runs into transaction ID wraparound protection, check for any
### Table-specific requirements
-Autovacuum parameters may be set for individual tables. It is especially important for small and big tables. For example, for a small table that contains only 100 rows, autovacuum triggers VACUUM operation when 70 rows change (as calculated previously). If this table is frequently updated, you might see hundreds of autovacuum operations a day. This will prevent autovacuum from maintaining other tables on which the percentage of changes aren't as big. Alternatively, a table containing a billion rows needs to change 200 million rows to trigger autovacuum operations. Setting autovacuum parameters appropriately prevents such scenarios.
+Autovacuum parameters may be set for individual tables. This is especially important for very small and very large tables. For example, for a small table that contains only 100 rows, autovacuum triggers the VACUUM operation when 70 rows change (as calculated previously). If this table is frequently updated, you might see hundreds of autovacuum operations a day, which prevents autovacuum from maintaining other tables where the percentage of changed rows isn't as big. Alternatively, a table containing a billion rows needs to change 200 million rows to trigger autovacuum operations. Setting autovacuum parameters appropriately prevents such scenarios.
To set autovacuum setting per table, change the server parameters as the following examples:
To set autovacuum setting per table, change the server parameters as the follo
In versions of PostgreSQL prior to 13, autovacuum will not run on tables with an insert-only workload, because if there are no updates or deletes, there are no dead tuples and no free space that needs to be reclaimed. However, autoanalyze will run for insert-only workloads since there is new data. The disadvantages of this are: -- The visibility map of the tables is not updated, and thus query performance, especially where there are Index Only Scans, starts to suffer over time.
+- The visibility map of the tables isn't updated, and thus query performance, especially where there are Index Only Scans, starts to suffer over time.
- The database can run into transaction ID wraparound protection. - Hint bits will not be set.
Autovacuum will run on tables with an insert-only workload. Two new server p
- Troubleshoot high CPU utilization [High CPU Utilization](./how-to-high-cpu-utilization.md). - Troubleshoot high memory utilization [High Memory Utilization](./how-to-high-memory-utilization.md).-- Configure server parameters [Server Parameters](./howto-configure-server-parameters-using-portal.md).
+- Configure server parameters [Server Parameters](./howto-configure-server-parameters-using-portal.md).
postgresql How To Manage Azure Ad Users https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-manage-azure-ad-users.md
Last updated 11/04/2022 +
postgresql How To Manage Server Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-manage-server-cli.md
Title: Manage server - Azure CLI - Azure Database for PostgreSQL - Flexible Serv
description: Learn how to manage an Azure Database for PostgreSQL - Flexible Server from the Azure CLI. +
az postgres flexible-server delete --resource-group myresourcegroup --name mydem
## Next steps - [Understand backup and restore concepts](concepts-backup-restore.md)-- [Tune and monitor the server](concepts-monitoring.md)
+- [Tune and monitor the server](concepts-monitoring.md)
postgresql How To Manage Virtual Network Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-manage-virtual-network-cli.md
+ Last updated 11/30/2021
Refer to the Azure CLI [reference documentation](/cli/azure/postgres/flexible-se
## Next steps - Learn more about [networking in Azure Database for PostgreSQL - Flexible Server](./concepts-networking.md). - [Create and manage Azure Database for PostgreSQL - Flexible Server virtual network using Azure portal](./how-to-manage-virtual-network-portal.md).-- Understand more about [Azure Database for PostgreSQL - Flexible Server virtual network](./concepts-networking.md#private-access-vnet-integration).
+- Understand more about [Azure Database for PostgreSQL - Flexible Server virtual network](./concepts-networking.md#private-access-vnet-integration).
postgresql How To Perform Major Version Upgrade Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-perform-major-version-upgrade-cli.md
Title: Major Version Upgrade of a flexible server - Azure CLI
description: This article describes how to perform major version upgrade in Azure Database for PostgreSQL through Azure CLI. +
az postgres server upgrade -g myresource-group -n myservername -v mypgversion
## Next steps * Learn about [Major Version Upgrade](concepts-major-version-upgrade.md) * Learn about [backup & recovery](concepts-backup-restore.md) -
postgresql How To Restart Server Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-restart-server-cli.md
Title: Restart - Azure portal - Azure Database for PostgreSQL Flexible Server
description: This article describes how to restart operations in Azure Database for PostgreSQL through the Azure CLI. +
az postgres flexible-server restart
## Next steps - Learn more about [stopping and starting Azure Database for PostgreSQL Flexible Server](./how-to-stop-start-server-cli.md)--
postgresql How To Restore Server Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-restore-server-cli.md
Title: Restore Azure Database for PostgreSQL - Flexible Server with Azure CLI
description: This article describes how to perform restore operations in Azure Database for PostgreSQL through the Azure CLI. +
After the restore is completed, you should perform the following tasks to get yo
## Next steps * Learn about [business continuity](concepts-business-continuity.md) * Learn about [backup & recovery](concepts-backup-restore.md) -
postgresql How To Stop Start Server Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-stop-start-server-cli.md
Title: Stop/start - Azure CLI - Azure Database for PostgreSQL Flexible Server
description: This article describes how to stop/start operations in Azure Database for PostgreSQL through the Azure CLI. +
az postgres flexible-server start
## Next steps - Learn more about [restarting Azure Database for PostgreSQL Flexible Server](./how-to-restart-server-cli.md)--
postgresql How To Troubleshoot Cli Errors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-troubleshoot-cli-errors.md
Title: Troubleshoot Azure Database for PostgreSQL Flexible Server CLI errors
description: This topic gives guidance on troubleshooting common issues with Azure CLI when using PostgreSQL Flexible Server. +
postgresql Quickstart Create Server Arm Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/quickstart-create-server-arm-template.md
-+ Last updated 05/12/2022
postgresql Quickstart Create Server Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/quickstart-create-server-bicep.md
description: In this Quickstart, learn how to create an Azure Database for Postg
+ Last updated 09/21/2022
postgresql How To Migrate From Oracle https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/migrate/how-to-migrate-from-oracle.md
For more information about this migration scenario, see the following resources.
| -- | | | [Oracle to Azure PostgreSQL migration cookbook](https://www.microsoft.com/en-us/download/details.aspx?id=103473) | This document helps architects, consultants, database administrators, and related roles quickly migrate workloads from Oracle to Azure Database for PostgreSQL by using ora2pg. | | [Oracle to Azure PostgreSQL migration workarounds](https://github.com/Microsoft/DataMigrationTeam/blob/master/Whitepapers/Oracle%20to%20Azure%20Database%20for%20PostgreSQL%20Migration%20Workarounds.pdf) | This document helps architects, consultants, database administrators, and related roles quickly fix or work around issues while migrating workloads from Oracle to Azure Database for PostgreSQL. |
-| [Steps to install ora2pg on Windows or Linux](https://github.com/microsoft/DataMigrationTeam/blob/master/Whitepapers/Steps%20to%20Install%20ora2pg%20on%20Windows%20and%20Linux.pdf) | This document provides a quick installation guide for migrating schema and data from Oracle to Azure Database for PostgreSQL by using ora2pg on Windows or Linux. For more information, see the [ora2pg documentation](http://ora2pg.darold.net/documentation.html). |
+| [Steps to install ora2pg on Windows or Linux](https://www.microsoft.com/download/confirmation.aspx?id=105121) | This document provides a quick installation guide for migrating schema and data from Oracle to Azure Database for PostgreSQL by using ora2pg on Windows or Linux. For more information, see the [ora2pg documentation](http://ora2pg.darold.net/documentation.html). |
The Data SQL Engineering team developed these resources. This team's core charter is to unblock and accelerate complex modernization for data platform migration projects to the Microsoft Azure data platform.
postgresql How To Data Encryption Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/how-to-data-encryption-cli.md
Last updated 06/24/2022-+ # Data encryption for Azure Database for PostgreSQL Single server by using the Azure CLI
postgresql How To Data Encryption Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/how-to-data-encryption-portal.md
Title: Data encryption - Azure portal - for Azure Database for PostgreSQL - Sing
description: Learn how to set up and manage data encryption for your Azure Database for PostgreSQL Single server by using the Azure portal. + Last updated 06/24/2022
-
# Data encryption for Azure Database for PostgreSQL Single server by using the Azure portal
postgresql How To Manage Server Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/how-to-manage-server-cli.md
Title: Manage server - Azure CLI - Azure Database for PostgreSQL
description: Learn how to manage an Azure Database for PostgreSQL server from the Azure CLI. +
postgresql Quickstart Create Postgresql Server Database Using Arm Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/quickstart-create-postgresql-server-database-using-arm-template.md
-+ Last updated 06/24/2022
postgresql Quickstart Create Postgresql Server Database Using Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/quickstart-create-postgresql-server-database-using-bicep.md
-+ Last updated 06/24/2022
postgresql Whats Happening To Postgresql Single Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/whats-happening-to-postgresql-single-server.md
Learn how to migrate from Azure Database for PostgreSQL - Single Server to Azure
**Q. Can I continue running my Azure Database for PostgreSQL - Single Server beyond the sunset date of March 28, 2025?**
-**A.** We plan to support Single Server at the sunset date of March 28, 2025, and we strongly advise that you start planning your migration as soon as possible. We plan to end support for single server deployments at the sunset data of March 28, 2025.
+**A.** We plan to support Single Server until the sunset date of March 28, 2025, and we strongly advise that you start planning your migration as soon as possible. We plan to end support for Single Server deployments at the sunset date of March 28, 2025.
**Q. After the Single Server retirement announcement, what if I still need to create a new single server to meet my business needs?**
You can contact your account teams if downtime requirements aren't met by the Of
> [!NOTE] > Support for online migration is coming soon.
-
**Q. Will there be future updates to Single Server to support the latest PostgreSQL versions?** **A.** We recommend you migrate to Flexible Server if you must run on the latest PostgreSQL engine versions. We continue to deploy minor versions released by the community for Postgres version 11 until it's retired by the community in Nov'2023. > [!NOTE] > We're extending support for Postgres version 11 past the community retirement date and will support PostgreSQL version 11 on both [Single Server](https://azure.microsoft.com/updates/singlepg11-retirement/) and [Flexible Server](https://azure.microsoft.com/updates/flexpg11-retirement/) to ease this transition. Consider migrating to Flexible Server to use the benefits of the latest Postgres engine versions.
-
**Q. How does the Flexible Server 99.99% availability SLA differ from Single Server?** **A.** Flexible Server zone-redundant deployment provides 99.99% availability with zonal-level resiliency, and Single Server delivers 99.99% availability but without zonal resiliency. Flexible Server High Availability (HA) architecture deploys a hot standby server with redundant compute and storage (with each site's data stored in 3x copies). A Single Server HA architecture doesn't have a passive hot standby to help recover from zonal failures. Flexible Server HA architecture reduces downtime during unplanned outages and planned maintenance.
You can contact your account teams if downtime requirements aren't met by the Of
- West India - Sweden North
-We recommend migrating to CN3/CE3, Central India, and Sweden South regions.
-
+We recommend migrating to CN3/CE3, Central India, Sweden Central, and Sweden South regions.
**Q. I have a private link configured for my single server, and this feature is not currently supported in Flexible Server. How do I migrate?** **A.** Flexible Server support for Private Link is our highest priority and is on the roadmap. This feature is planned to launch in Q4 2023. Another option is to consider migrating to a VNET-injected Flexible Server.
private-5g-core Collect Required Information For A Site https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/collect-required-information-for-a-site.md
Title: Collect information for a site description: Learn about the information you'll need to create a site in an existing private mobile network.--++ Last updated 02/07/2022
You can use this information to create a site in an existing private mobile netw
- If you want to give Azure role-based access control (Azure RBAC) to storage accounts, you must have the relevant permissions on your account. - Make a note of the resource group that contains your private mobile network that was collected in [Collect the required information to deploy a private mobile network](collect-required-information-for-private-mobile-network.md). We recommend that the mobile network site resource you create in this procedure belongs to the same resource group.
+## Choose a service plan
+
+Choose the service plan that will best fit your requirements and verify pricing and charges. See [Azure Private 5G Core pricing](https://azure.microsoft.com/pricing/details/private-5g-core/).
+ ## Collect mobile network site resource values Collect all the values in the following table for the mobile network site resource that will represent your site.
Collect all the values in the following table for the mobile network site resour
|The packet core in which to create the mobile network site resource. |**Instance details: Packet core name**| |The [region code name](region-code-names.md) of the region in which you deployed the private mobile network. For the East US region, this is *eastus*; for West Europe, this is *westeurope*. </br></br>You only need to collect this value if you're going to create your site using an ARM template. |Not applicable.| |The mobile network resource representing the private mobile network to which youΓÇÖre adding the site. </br></br>You only need to collect this value if you're going to create your site using an ARM template. |Not applicable.|
- |The billing plan for the site that you are creating. The available plans have the following throughput, activated SIMs and radio access network (RAN) allowances:</br></br>G0 - 100 Mbps per site, 20 activated SIMs per network and 2 RAN connections. </br> G1 - 1 Gbps per site, 100 activated SIMs per network and 5 RAN connections. </br> G2 - 2 Gbps per site, 200 activated SIMs per network and 10 RAN connections. </br> G5 - 5 Gbps per site, 500 activated SIMs per network and unlimited RAN connections. </br> G10 - 10 Gbps per site, 1000 activated SIMs per network and unlimited RAN connections.|**Instance details: Service plan**|
+ |The service plan for the site that you are creating. See [Azure Private 5G Core pricing](https://azure.microsoft.com/pricing/details/private-5g-core/). |**Instance details: Service plan**|
## Collect packet core configuration values
private-5g-core Configure Service Sim Policy Arm Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/configure-service-sim-policy-arm-template.md
Last updated 03/21/2022-+ # Configure a service and SIM policy using an ARM template
Two Azure resources are defined in the template.
You can now assign the SIM policy to your SIMs to bring them into service. -- [Assign a SIM policy to a SIM](manage-existing-sims.md#assign-sim-policies)
+- [Assign a SIM policy to a SIM](manage-existing-sims.md#assign-sim-policies)
private-5g-core Create Overview Dashboard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/create-overview-dashboard.md
Last updated 03/20/2022-+ # Create an overview Log Analytics dashboard using an ARM template
private-5g-core Create Site Arm Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/create-site-arm-template.md
Last updated 03/16/2022-+ # Create a site using an ARM template
Four Azure resources are defined in the template.
If you decided to set up Azure AD for local monitoring access, follow the steps in [Modify the local access configuration in a site](modify-local-access-configuration.md) and [Enable Azure Active Directory (Azure AD) for local monitoring tools](enable-azure-active-directory.md).
-If you haven't already done so, you should now design the policy control configuration for your private mobile network. This allows you to customize how your packet core instances apply quality of service (QoS) characteristics to traffic. You can also block or limit certain flows. See [Policy control](policy-control.md) to learn more about designing the policy control configuration for your private mobile network.
+If you haven't already done so, you should now design the policy control configuration for your private mobile network. This allows you to customize how your packet core instances apply quality of service (QoS) characteristics to traffic. You can also block or limit certain flows. See [Policy control](policy-control.md) to learn more about designing the policy control configuration for your private mobile network.
private-5g-core Create Slice Arm Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/create-slice-arm-template.md
Last updated 09/30/2022-+ # Create a slice using an ARM template
private-5g-core Deploy Private Mobile Network With Site Arm Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/deploy-private-mobile-network-with-site-arm-template.md
-+ Last updated 03/23/2022
If you do not want to keep your deployment, [delete the resource group](../azure
If you have kept your deployment, you can either begin designing policy control to determine how your private mobile network will handle traffic, or you can add more sites to your private mobile network. - [Learn more about designing the policy control configuration for your private mobile network](policy-control.md)-- [Collect the required information for a site](collect-required-information-for-a-site.md)
+- [Collect the required information for a site](collect-required-information-for-a-site.md)
private-5g-core Modify Service Plan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/modify-service-plan.md
Title: Modify a service plan description: In this how-to guide, you'll learn how to modify a service plan using the Azure portal. --++ Last updated 10/13/2022
The *service plan* determines an allowance for the throughput and the number of
## Prerequisites - Ensure you can sign in to the Azure portal using an account with access to the active subscription you used to create your private mobile network. This account must have the built-in Contributor or Owner role at the subscription scope.-- Verify pricing and charges associated with the service plan to which you want to move. See the [Azure Private 5G Core Pricing page](https://azure.microsoft.com/pricing/details/private-5g-core/) for pricing information.
-## Choose the new service plan
+## Choose a new service plan
-Use the following table to choose the new service plan that will best fit your requirements.
-
-| Service Plan | Licensed Throughput | Licensed Activated SIMs | Licensed RANs |
-|||||
-| G0 | 100 Mbps | 20 | 2 |
-| G1 | 1 Gbps | 100 | 5 |
-| G2 | 2 Gbps | 200 | 10 |
-| G5 | 5 Gbps | 500 | Unlimited |
-| G10 | 10 Gbps | 1000 | Unlimited |
+Choose the service plan that will best fit your requirements and verify pricing and charges. See [Azure Private 5G Core pricing](https://azure.microsoft.com/pricing/details/private-5g-core/).
## View the current service plan
To modify your service plan:
:::image type="content" source="media/modify-service-plan/service-plan.png" alt-text="Screenshot of the Azure portal showing a packet core control plane resource. The Service Plan field is highlighted.":::
-3. In **Service Plan** on the right, select the new service plan you collected in [Choose the new service plan](#choose-the-new-service-plan). Save your change with **Select**.
+3. In **Service Plan** on the right, select the new service plan you identified in [Choose a new service plan](#choose-a-new-service-plan). Save your change with **Select**.
:::image type="content" source="media/modify-service-plan/service-plan-selection-tab.png" alt-text="Screenshot of the Azure portal showing the Service Plan screen.":::
private-5g-core Provision Sims Arm Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/provision-sims-arm-template.md
Last updated 03/21/2022-+ # Provision new SIMs for Azure Private 5G Core - ARM template
private-5g-core Upgrade Packet Core Arm Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/upgrade-packet-core-arm-template.md
Last updated 05/16/2022-+ # Upgrade the packet core instance in a site - ARM template
If any of the configuration you set while your packet core instance was running
You've finished upgrading your packet core instance. - If your deployment contains multiple sites, upgrade the packet core instance in another site.-- Use [Log Analytics](monitor-private-5g-core-with-log-analytics.md) or the [packet core dashboards](packet-core-dashboards.md) to monitor your deployment.
+- Use [Log Analytics](monitor-private-5g-core-with-log-analytics.md) or the [packet core dashboards](packet-core-dashboards.md) to monitor your deployment.
private-link Create Private Endpoint Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/create-private-endpoint-bicep.md
Last updated 05/02/2022 -+ #Customer intent: As someone who has a basic network background but is new to Azure, I want to create a private endpoint using Bicep.
private-link Create Private Endpoint Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/create-private-endpoint-template.md
Last updated 07/18/2022 -+ #Customer intent: As someone who has a basic network background but is new to Azure, I want to create a private endpoint by using an ARM template.
private-link Create Private Link Service Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/create-private-link-service-bicep.md
Last updated 04/29/2022 -+ # Quickstart: Create a private link service using Bicep
private-link Create Private Link Service Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/create-private-link-service-template.md
Last updated 03/30/2023 -+ # Quickstart: Create a private link service using an ARM template
public-multi-access-edge-compute-mec Key Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/public-multi-access-edge-compute-mec/key-concepts.md
Last updated 11/22/2022-+ # Key concepts for Azure public MEC
public-multi-access-edge-compute-mec Quickstart Create Vm Azure Resource Manager Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/public-multi-access-edge-compute-mec/quickstart-create-vm-azure-resource-manager-template.md
Last updated 11/22/2022-+ # Quickstart: Deploy a virtual machine in Azure public MEC using an ARM template
purview Microsoft Purview Connector Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/microsoft-purview-connector-overview.md
This article discusses currently supported data sources, file types, and scannin
The table below shows the supported capabilities for each data source. Select the data source, or the feature, to learn more.
-|**Category**| **Data Store** |**Technical metadata** |**Classification** |**Lineage** | **Access Policy** | **Data Sharing** |
-||||||||
-| Azure |[Multiple sources](register-scan-azure-multiple-sources.md)| [Yes](register-scan-azure-multiple-sources.md#register) | [Yes](register-scan-azure-multiple-sources.md#scan) | No |[Yes](register-scan-azure-multiple-sources.md#access-policy) | No |
-||[Azure Blob Storage](register-scan-azure-blob-storage-source.md)| [Yes](register-scan-azure-blob-storage-source.md#register) | [Yes](register-scan-azure-blob-storage-source.md#scan)| Limited* | [Yes](register-scan-azure-blob-storage-source.md#access-policy) (Preview) | [Yes](register-scan-azure-blob-storage-source.md#data-sharing)|
-|| [Azure Cosmos DB (API for NoSQL)](register-scan-azure-cosmos-database.md)| [Yes](register-scan-azure-cosmos-database.md#register) | [Yes](register-scan-azure-cosmos-database.md#scan)|No*|No| No|
-|| [Azure Data Explorer](register-scan-azure-data-explorer.md)| [Yes](register-scan-azure-data-explorer.md#register) | [Yes](register-scan-azure-data-explorer.md#scan)| No* | No | No|
-|| [Azure Data Factory](how-to-link-azure-data-factory.md) | [Yes](how-to-link-azure-data-factory.md) | No | [Yes](how-to-link-azure-data-factory.md) | No | No|
-|| [Azure Data Lake Storage Gen1](register-scan-adls-gen1.md)| [Yes](register-scan-adls-gen1.md#register) | [Yes](register-scan-adls-gen1.md#scan)| Limited* | No | No|
-|| [Azure Data Lake Storage Gen2](register-scan-adls-gen2.md)| [Yes](register-scan-adls-gen2.md#register) | [Yes](register-scan-adls-gen2.md#scan)| Limited* | [Yes](register-scan-adls-gen2.md#access-policy) (Preview) | [Yes](register-scan-adls-gen2.md#data-sharing) |
-|| [Azure Data Share](how-to-link-azure-data-share.md) | [Yes](how-to-link-azure-data-share.md) | No | [Yes](how-to-link-azure-data-share.md) | No | No|
-|| [Azure Database for MySQL](register-scan-azure-mysql-database.md) | [Yes](register-scan-azure-mysql-database.md#register) | [Yes](register-scan-azure-mysql-database.md#scan) | No* | No | No |
-|| [Azure Database for PostgreSQL](register-scan-azure-postgresql.md) | [Yes](register-scan-azure-postgresql.md#register) | [Yes](register-scan-azure-postgresql.md#scan) | No* | No | No |
-|| [Azure Databricks](register-scan-azure-databricks.md) | [Yes](register-scan-azure-databricks.md#register) | No | [Yes](register-scan-azure-databricks.md#lineage) | No | No |
-|| [Azure Dedicated SQL pool (formerly SQL DW)](register-scan-azure-synapse-analytics.md)| [Yes](register-scan-azure-synapse-analytics.md#register) | [Yes](register-scan-azure-synapse-analytics.md#scan)| No* | No | No |
-|| [Azure Files](register-scan-azure-files-storage-source.md)|[Yes](register-scan-azure-files-storage-source.md#register) | [Yes](register-scan-azure-files-storage-source.md#scan) | Limited* | No | No |
-|| [Azure SQL Database](register-scan-azure-sql-database.md)| [Yes](register-scan-azure-sql-database.md#register-the-data-source) |[Yes](register-scan-azure-sql-database.md#scope-and-run-the-scan)| [Yes (Preview)](register-scan-azure-sql-database.md#extract-lineage-preview) | [Yes](register-scan-azure-sql-database.md#set-up-access-policies) | No |
-|| [Azure SQL Managed Instance](register-scan-azure-sql-managed-instance.md)| [Yes](register-scan-azure-sql-managed-instance.md#scan) | [Yes](register-scan-azure-sql-managed-instance.md#scan) | No* | No | No |
-|| [Azure Synapse Analytics (Workspace)](register-scan-synapse-workspace.md)| [Yes](register-scan-synapse-workspace.md#register) | [Yes](register-scan-synapse-workspace.md#scan)| [Yes - Synapse pipelines](how-to-lineage-azure-synapse-analytics.md)| No| No |
-|Database| [Amazon RDS](register-scan-amazon-rds.md) | [Yes](register-scan-amazon-rds.md#register-an-amazon-rds-data-source) | [Yes](register-scan-amazon-rds.md#scan-an-amazon-rds-database) | No | No | No |
-|| [Cassandra](register-scan-cassandra-source.md)|[Yes](register-scan-cassandra-source.md#register) | No | [Yes](register-scan-cassandra-source.md#lineage)| No| No |
-|| [Db2](register-scan-db2.md) | [Yes](register-scan-db2.md#register) | No | [Yes](register-scan-db2.md#lineage) | No | No |
-|| [Google BigQuery](register-scan-google-bigquery-source.md)| [Yes](register-scan-google-bigquery-source.md#register)| No | [Yes](register-scan-google-bigquery-source.md#lineage)| No| No |
-|| [Hive Metastore Database](register-scan-hive-metastore-source.md) | [Yes](register-scan-hive-metastore-source.md#register) | No | [Yes*](register-scan-hive-metastore-source.md#lineage) | No| No |
-|| [MongoDB](register-scan-mongodb.md) | [Yes](register-scan-mongodb.md#register) | No | No | No | No |
-|| [MySQL](register-scan-mysql.md) | [Yes](register-scan-mysql.md#register) | No | [Yes](register-scan-mysql.md#lineage) | No | No |
-|| [Oracle](register-scan-oracle-source.md) | [Yes](register-scan-oracle-source.md#register)| [Yes](register-scan-oracle-source.md#scan) | [Yes*](register-scan-oracle-source.md#lineage) | No| No |
-|| [PostgreSQL](register-scan-postgresql.md) | [Yes](register-scan-postgresql.md#register) | No | [Yes](register-scan-postgresql.md#lineage) | No | No |
-|| [SAP Business Warehouse](register-scan-sap-bw.md) | [Yes](register-scan-sap-bw.md#register) | No | No | No | No |
-|| [SAP HANA](register-scan-sap-hana.md) | [Yes](register-scan-sap-hana.md#register) | No | No | No | No |
-|| [Snowflake](register-scan-snowflake.md) | [Yes](register-scan-snowflake.md#register) | [Yes](register-scan-snowflake.md#scan) | [Yes](register-scan-snowflake.md#lineage) | No | No |
-|| [SQL Server](register-scan-on-premises-sql-server.md)| [Yes](register-scan-on-premises-sql-server.md#register) |[Yes](register-scan-on-premises-sql-server.md#scan) | No* | No| No |
-|| [SQL Server on Azure-Arc](register-scan-azure-arc-enabled-sql-server.md)| [Yes](register-scan-azure-arc-enabled-sql-server.md#register) | [Yes](register-scan-azure-arc-enabled-sql-server.md#scan) | No* |[Yes](register-scan-azure-arc-enabled-sql-server.md#access-policy) | No |
-|| [Teradata](register-scan-teradata-source.md)| [Yes](register-scan-teradata-source.md#register)| [Yes](register-scan-teradata-source.md#scan)| [Yes*](register-scan-teradata-source.md#lineage) | No| No |
-|File|[Amazon S3](register-scan-amazon-s3.md)|[Yes](register-scan-amazon-s3.md)| [Yes](register-scan-amazon-s3.md)| Limited* | No| No |
-||[HDFS](register-scan-hdfs.md)|[Yes](register-scan-hdfs.md)| [Yes](register-scan-hdfs.md)| No | No| No |
-|Services and apps| [Erwin](register-scan-erwin-source.md)| [Yes](register-scan-erwin-source.md#register)| No | [Yes](register-scan-erwin-source.md#lineage)| No| No |
-|| [Looker](register-scan-looker-source.md)| [Yes](register-scan-looker-source.md#register)| No | [Yes](register-scan-looker-source.md#lineage)| No| No |
-|| [Power BI](register-scan-power-bi-tenant.md)| [Yes](register-scan-power-bi-tenant.md)| No | [Yes](how-to-lineage-powerbi.md)| No| No |
-|| [Salesforce](register-scan-salesforce.md) | [Yes](register-scan-salesforce.md#register) | No | No | No | No |
-|| [SAP ECC](register-scan-sapecc-source.md)| [Yes](register-scan-sapecc-source.md#register) | No | [Yes*](register-scan-sapecc-source.md#lineage) | No| No |
-|| [SAP S/4HANA](register-scan-saps4hana-source.md) | [Yes](register-scan-saps4hana-source.md#register)| No | [Yes*](register-scan-saps4hana-source.md#lineage) | No| No |
+|**Category**| **Data Store** |**Technical metadata** |**Classification** |**Lineage** | **Labeling** |**Access Policy** | **Data Sharing** |
+|||||||||
+| Azure |[Multiple sources](register-scan-azure-multiple-sources.md)| [Yes](register-scan-azure-multiple-sources.md#register) | [Yes](register-scan-azure-multiple-sources.md#scan) | No |[Source Dependent](create-sensitivity-label.md)|[Yes](register-scan-azure-multiple-sources.md#access-policy) | No |
+||[Azure Blob Storage](register-scan-azure-blob-storage-source.md)| [Yes](register-scan-azure-blob-storage-source.md#register) | [Yes](register-scan-azure-blob-storage-source.md#scan)| Limited* | [Yes](create-sensitivity-label.md)|[Yes](register-scan-azure-blob-storage-source.md#access-policy) (Preview) | [Yes](register-scan-azure-blob-storage-source.md#data-sharing)|
+|| [Azure Cosmos DB (API for NoSQL)](register-scan-azure-cosmos-database.md)| [Yes](register-scan-azure-cosmos-database.md#register) | [Yes](register-scan-azure-cosmos-database.md#scan)|No*|[Yes](create-sensitivity-label.md)|No| No|
+|| [Azure Data Explorer](register-scan-azure-data-explorer.md)| [Yes](register-scan-azure-data-explorer.md#register) | [Yes](register-scan-azure-data-explorer.md#scan)| No* | [Yes](create-sensitivity-label.md)|No | No|
+|| [Azure Data Factory](how-to-link-azure-data-factory.md) | [Yes](how-to-link-azure-data-factory.md) | No | [Yes](how-to-link-azure-data-factory.md) | No | No | No|
+|| [Azure Data Lake Storage Gen1](register-scan-adls-gen1.md)| [Yes](register-scan-adls-gen1.md#register) | [Yes](register-scan-adls-gen1.md#scan)| Limited* | [Yes](create-sensitivity-label.md)|No | No|
+|| [Azure Data Lake Storage Gen2](register-scan-adls-gen2.md)| [Yes](register-scan-adls-gen2.md#register) | [Yes](register-scan-adls-gen2.md#scan)| Limited* | [Yes](create-sensitivity-label.md)|[Yes](register-scan-adls-gen2.md#access-policy) (Preview) | [Yes](register-scan-adls-gen2.md#data-sharing) |
+|| [Azure Data Share](how-to-link-azure-data-share.md) | [Yes](how-to-link-azure-data-share.md) | No | [Yes](how-to-link-azure-data-share.md) | No| No | No|
+|| [Azure Database for MySQL](register-scan-azure-mysql-database.md) | [Yes](register-scan-azure-mysql-database.md#register) | [Yes](register-scan-azure-mysql-database.md#scan) | No* | [Yes](create-sensitivity-label.md)| No | No |
+|| [Azure Database for PostgreSQL](register-scan-azure-postgresql.md) | [Yes](register-scan-azure-postgresql.md#register) | [Yes](register-scan-azure-postgresql.md#scan) | No* | [Yes](create-sensitivity-label.md)| No | No |
+|| [Azure Databricks](register-scan-azure-databricks.md) | [Yes](register-scan-azure-databricks.md#register) | No | [Yes](register-scan-azure-databricks.md#lineage) | No | No | No |
+|| [Azure Dedicated SQL pool (formerly SQL DW)](register-scan-azure-synapse-analytics.md)| [Yes](register-scan-azure-synapse-analytics.md#register) | [Yes](register-scan-azure-synapse-analytics.md#scan)| No* | No | No | No |
+|| [Azure Files](register-scan-azure-files-storage-source.md)|[Yes](register-scan-azure-files-storage-source.md#register) | [Yes](register-scan-azure-files-storage-source.md#scan) | Limited* | [Yes](create-sensitivity-label.md)|No | No |
+|| [Azure SQL Database](register-scan-azure-sql-database.md)| [Yes](register-scan-azure-sql-database.md#register-the-data-source) |[Yes](register-scan-azure-sql-database.md#scope-and-run-the-scan)| [Yes (Preview)](register-scan-azure-sql-database.md#extract-lineage-preview) | [Yes](create-sensitivity-label.md)| [Yes](register-scan-azure-sql-database.md#set-up-access-policies) | No |
+|| [Azure SQL Managed Instance](register-scan-azure-sql-managed-instance.md)| [Yes](register-scan-azure-sql-managed-instance.md#scan) | [Yes](register-scan-azure-sql-managed-instance.md#scan) | No* | [Yes](create-sensitivity-label.md)| No | No |
+|| [Azure Synapse Analytics (Workspace)](register-scan-synapse-workspace.md)| [Yes](register-scan-synapse-workspace.md#register) | [Yes](register-scan-synapse-workspace.md#scan)| [Yes - Synapse pipelines](how-to-lineage-azure-synapse-analytics.md)| [Yes](create-sensitivity-label.md)|No| No |
+|Database| [Amazon RDS](register-scan-amazon-rds.md) | [Yes](register-scan-amazon-rds.md#register-an-amazon-rds-data-source) | [Yes](register-scan-amazon-rds.md#scan-an-amazon-rds-database) | No | No | No | No|
+|| [Cassandra](register-scan-cassandra-source.md)|[Yes](register-scan-cassandra-source.md#register) | No | [Yes](register-scan-cassandra-source.md#lineage)| No| No |No|
+|| [Db2](register-scan-db2.md) | [Yes](register-scan-db2.md#register) | No | [Yes](register-scan-db2.md#lineage) | No | No | No|
+|| [Google BigQuery](register-scan-google-bigquery-source.md)| [Yes](register-scan-google-bigquery-source.md#register)| No | [Yes](register-scan-google-bigquery-source.md#lineage)| No| No | No|
+|| [Hive Metastore Database](register-scan-hive-metastore-source.md) | [Yes](register-scan-hive-metastore-source.md#register) | No | [Yes*](register-scan-hive-metastore-source.md#lineage) | No| No |No|
+|| [MongoDB](register-scan-mongodb.md) | [Yes](register-scan-mongodb.md#register) | No | No | No | No | No|
+|| [MySQL](register-scan-mysql.md) | [Yes](register-scan-mysql.md#register) | No | [Yes](register-scan-mysql.md#lineage) | No | No | No|
+|| [Oracle](register-scan-oracle-source.md) | [Yes](register-scan-oracle-source.md#register)| [Yes](register-scan-oracle-source.md#scan) | [Yes*](register-scan-oracle-source.md#lineage) | No| No | No|
+|| [PostgreSQL](register-scan-postgresql.md) | [Yes](register-scan-postgresql.md#register) | No | [Yes](register-scan-postgresql.md#lineage) | No | No | No|
+|| [SAP Business Warehouse](register-scan-sap-bw.md) | [Yes](register-scan-sap-bw.md#register) | No | No | No | No | No|
+|| [SAP HANA](register-scan-sap-hana.md) | [Yes](register-scan-sap-hana.md#register) | No | No | No | No | No|
+|| [Snowflake](register-scan-snowflake.md) | [Yes](register-scan-snowflake.md#register) | [Yes](register-scan-snowflake.md#scan) | [Yes](register-scan-snowflake.md#lineage) | No | No | No|
+|| [SQL Server](register-scan-on-premises-sql-server.md)| [Yes](register-scan-on-premises-sql-server.md#register) |[Yes](register-scan-on-premises-sql-server.md#scan) | No* | [Yes](create-sensitivity-label.md)|No| No |
+|| [SQL Server on Azure-Arc](register-scan-azure-arc-enabled-sql-server.md)| [Yes](register-scan-azure-arc-enabled-sql-server.md#register) | [Yes](register-scan-azure-arc-enabled-sql-server.md#scan) | No* |No|[Yes](register-scan-azure-arc-enabled-sql-server.md#access-policy) | No |
+|| [Teradata](register-scan-teradata-source.md)| [Yes](register-scan-teradata-source.md#register)| [Yes](register-scan-teradata-source.md#scan)| [Yes*](register-scan-teradata-source.md#lineage) | No|No| No |
+|File|[Amazon S3](register-scan-amazon-s3.md)|[Yes](register-scan-amazon-s3.md)| [Yes](register-scan-amazon-s3.md)| Limited* | [Yes](create-sensitivity-label.md)|No| No |
+||[HDFS](register-scan-hdfs.md)|[Yes](register-scan-hdfs.md)| [Yes](register-scan-hdfs.md)| No | No| No |No|
+|Services and apps| [Erwin](register-scan-erwin-source.md)| [Yes](register-scan-erwin-source.md#register)| No | [Yes](register-scan-erwin-source.md#lineage)| No| No |No|
+|| [Looker](register-scan-looker-source.md)| [Yes](register-scan-looker-source.md#register)| No | [Yes](register-scan-looker-source.md#lineage)| No| No |No|
+|| [Power BI](register-scan-power-bi-tenant.md)| [Yes](register-scan-power-bi-tenant.md)| No | [Yes](how-to-lineage-powerbi.md)| No| No |No|
+|| [Salesforce](register-scan-salesforce.md) | [Yes](register-scan-salesforce.md#register) | No | No | No | No |No|
+|| [SAP ECC](register-scan-sapecc-source.md)| [Yes](register-scan-sapecc-source.md#register) | No | [Yes*](register-scan-sapecc-source.md#lineage) | No| No | No|
+|| [SAP S/4HANA](register-scan-saps4hana-source.md) | [Yes](register-scan-saps4hana-source.md#register)| No | [Yes*](register-scan-saps4hana-source.md#lineage) | No| No |No|
\* Besides the lineage on assets within the data source, lineage is also supported if dataset is used as a source/sink in [Data Factory](how-to-link-azure-data-factory.md) or [Synapse pipeline](how-to-lineage-azure-synapse-analytics.md).
purview Quickstart ARM Create Microsoft Purview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/quickstart-ARM-create-microsoft-purview.md
Last updated 04/05/2022 -+ # Quickstart: Create a Microsoft Purview (formerly Azure Purview) account using an ARM template
purview Quickstart Bicep Create Microsoft Purview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/quickstart-bicep-create-microsoft-purview.md
Last updated 09/12/2022 + # Quickstart: Create a Microsoft Purview (formerly Azure Purview) account using a Bicep file
purview Register Scan Adls Gen1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-adls-gen1.md
This article outlines the process to register an Azure Data Lake Storage Gen1 da
## Supported capabilities
-|**Metadata Extraction**| **Full Scan** |**Incremental Scan**|**Scoped Scan**|**Classification**|**Access Policy**|**Lineage**|**Data Sharing**|
-|||||||||
-| [Yes](#register) | [Yes](#scan)|[Yes](#scan) | [Yes](#scan)|[Yes](#scan)| No |Limited** | No |
+|**Metadata Extraction**| **Full Scan** |**Incremental Scan**|**Scoped Scan**|**Classification**|**Labeling**|**Access Policy**|**Lineage**|**Data Sharing**|
+||||||||||
+| [Yes](#register) | [Yes](#scan)|[Yes](#scan) | [Yes](#scan)|[Yes](#scan)|[Yes](create-sensitivity-label.md)| No |Limited** | No |
\** Lineage is supported if dataset is used as a source/sink in [Data Factory Copy activity](how-to-link-azure-data-factory.md)
purview Register Scan Adls Gen2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-adls-gen2.md
This article outlines the process to register and govern an Azure Data Lake Stor
## Supported capabilities
-|**Metadata Extraction**| **Full Scan** |**Incremental Scan**|**Scoped Scan**|**Classification**|**Access Policy**|**Lineage**|**Data Sharing**|
-|||||||||
-| [Yes](#register) | [Yes](#scan)|[Yes](#scan) | [Yes](#scan)|[Yes](#scan)| [Yes (preview)](#access-policy) | Limited* |[Yes](#data-sharing)|
+|**Metadata Extraction**| **Full Scan** |**Incremental Scan**|**Scoped Scan**|**Classification**|**Labeling**|**Access Policy**|**Lineage**|**Data Sharing**|
+||||||||||
+| [Yes](#register) | [Yes](#scan)|[Yes](#scan) | [Yes](#scan)|[Yes](#scan)| [Yes](create-sensitivity-label.md)| [Yes (preview)](#access-policy) | Limited* |[Yes](#data-sharing)|
\* *Lineage is supported if dataset is used as a source/sink in [Data Factory](how-to-link-azure-data-factory.md) or [Synapse pipeline](how-to-lineage-azure-synapse-analytics.md).*
purview Register Scan Amazon S3 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-amazon-s3.md
For this service, use Microsoft Purview to provide a Microsoft account with secu
## Supported capabilities
-|**Metadata Extraction**| **Full Scan** |**Incremental Scan**|**Scoped Scan**|**Classification**|**Access Policy**|**Lineage**|**Data Sharing**|
-|||||||||
-| Yes | Yes | Yes | Yes | Yes | No | Limited** | No |
+|**Metadata Extraction**| **Full Scan** |**Incremental Scan**|**Scoped Scan**|**Classification**|**Labeling**|**Access Policy**|**Lineage**|**Data Sharing**|
+||||||||||
+| Yes | Yes | Yes | Yes | Yes | [Yes](create-sensitivity-label.md)| No | Limited** | No |
\** Lineage is supported if dataset is used as a source/sink in [Data Factory Copy activity](how-to-link-azure-data-factory.md)
purview Register Scan Azure Arc Enabled Sql Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-azure-arc-enabled-sql-server.md
This article shows how to register an Azure Arc-enabled SQL Server instance. It
## Supported capabilities
-|Metadata extraction|Full scan|Incremental scan|Scoped scan|Classification|Access policy|Lineage|Data sharing|
-|||||||||
-| [Yes](#register)(GA) | [Yes](#scan)(preview) | [Yes](#scan)(preview) | [Yes](#scan)(preview) | [Yes](#scan)(preview) | [Yes](#access-policy)(GA) | Limited** | No |
+|**Metadata Extraction**| **Full Scan** |**Incremental Scan**|**Scoped Scan**|**Classification**|**Labeling**|**Access Policy**|**Lineage**|**Data Sharing**|
+||||||||||
+| [Yes](#register)(GA) | [Yes](#scan)(preview) | [Yes](#scan)(preview) | [Yes](#scan)(preview) | [Yes](#scan)(preview) | No | [Yes](#access-policy)(GA) | Limited** | No |
\** Lineage is supported if the dataset is used as a source/sink in the [Azure Data Factory copy activity](how-to-link-azure-data-factory.md).
purview Register Scan Azure Blob Storage Source https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-azure-blob-storage-source.md
This article outlines the process to register and govern Azure Blob Storage acco
## Supported capabilities
-|**Metadata Extraction**| **Full Scan** |**Incremental Scan**|**Scoped Scan**|**Classification**|**Access Policy**|**Lineage**|**Data Sharing**|
-|||||||||
-| [Yes](#register) | [Yes](#scan)|[Yes](#scan) | [Yes](#scan)|[Yes](#scan)| [Yes (preview)](#access-policy) | Limited** |[Yes](#data-sharing)|
+|**Metadata Extraction**| **Full Scan** |**Incremental Scan**|**Scoped Scan**|**Classification**|**Labeling**|**Access Policy**|**Lineage**|**Data Sharing**|
+||||||||||
+| [Yes](#register) | [Yes](#scan)|[Yes](#scan) | [Yes](#scan)|[Yes](#scan)| [Yes](create-sensitivity-label.md)| [Yes (preview)](#access-policy) | Limited** |[Yes](#data-sharing)|
\** Lineage is supported if dataset is used as a source/sink in [Data Factory Copy activity](how-to-link-azure-data-factory.md)
purview Register Scan Azure Cosmos Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-azure-cosmos-database.md
Title: 'Connect to Azure Cosmos DB Database (SQL API)'
+ Title: 'Connect to Azure Cosmos DB for NoSQL'
description: This article outlines the process to register an Azure Cosmos DB instance in Microsoft Purview including instructions to authenticate and interact with the Azure Cosmos DB database
This article outlines the process to register and scan Azure Cosmos DB for NoSQL
## Supported capabilities
-|**Metadata Extraction**| **Full Scan** |**Incremental Scan**|**Scoped Scan**|**Classification**|**Access Policy**|**Lineage**|**Data Sharing**|
-|||||||||
-| [Yes](#register) | [Yes](#scan)|[No](#scan) | [Yes](#scan)|[Yes](#scan)|No|No** | No |
+|**Metadata Extraction**| **Full Scan** |**Incremental Scan**|**Scoped Scan**|**Classification**|**Labeling**|**Access Policy**|**Lineage**|**Data Sharing**|
+||||||||||
+| [Yes](#register) | [Yes](#scan)|[No](#scan) | [Yes](#scan)|[Yes](#scan)|[Yes](create-sensitivity-label.md)|No|No** | No |
\** Lineage is supported if dataset is used as a source/sink in [Data Factory Copy activity](how-to-link-azure-data-factory.md)
purview Register Scan Azure Data Explorer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-azure-data-explorer.md
This article outlines how to register Azure Data Explorer, and how to authentica
## Supported capabilities
-|**Metadata Extraction**| **Full Scan** |**Incremental Scan**|**Scoped Scan**|**Classification**|**Access Policy**|**Lineage**|**Data Sharing**|
-|||||||||
-| [Yes](#register) | [Yes](#scan) | [Yes](#scan) | [Yes](#scan)| [Yes](#scan)| No | Limited* | No |
+|**Metadata Extraction**| **Full Scan** |**Incremental Scan**|**Scoped Scan**|**Classification**|**Labeling**|**Access Policy**|**Lineage**|**Data Sharing**|
+||||||||||
+| [Yes](#register) | [Yes](#scan) | [Yes](#scan) | [Yes](#scan)| [Yes](#scan)| [Yes](create-sensitivity-label.md)| No | Limited* | No |
\* *Lineage is supported if dataset is used as a sink in [Data Factory](how-to-link-azure-data-factory.md) or [Synapse pipeline](how-to-lineage-azure-synapse-analytics.md).*
purview Register Scan Azure Databricks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-azure-databricks.md
This article outlines how to register Azure Databricks, and how to authenticate
## Supported capabilities
-|**Metadata Extraction**| **Full Scan** |**Incremental Scan**|**Scoped Scan**|**Classification**|**Access Policy**|**Lineage**|**Data Sharing**|
-|||||||||
-| [Yes](#register)| [Yes](#scan)| No | No | No | No| [Yes](#lineage) | No |
+|**Metadata Extraction**| **Full Scan** |**Incremental Scan**|**Scoped Scan**|**Classification**|**Labeling**|**Access Policy**|**Lineage**|**Data Sharing**|
+||||||||||
+| [Yes](#register)| [Yes](#scan)| No | No | No | No| No| [Yes](#lineage) | No |
When scanning Azure Databricks source, Microsoft Purview supports:
purview Register Scan Azure Files Storage Source https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-azure-files-storage-source.md
This article outlines how to register Azure Files, and how to authenticate and i
## Supported capabilities
-|**Metadata Extraction**| **Full Scan** |**Incremental Scan**|**Scoped Scan**|**Classification**|**Access Policy**|**Lineage**|**Data Sharing**|
-|||||||||
-| [Yes](#register) | [Yes](#scan) | [Yes](#scan) | [Yes](#scan) | [Yes](#scan) | No | Limited** | No |
+|**Metadata Extraction**| **Full Scan** |**Incremental Scan**|**Scoped Scan**|**Classification**|**Labeling**|**Access Policy**|**Lineage**|**Data Sharing**|
+||||||||||
+| [Yes](#register) | [Yes](#scan) | [Yes](#scan) | [Yes](#scan) | [Yes](#scan) | [Yes](create-sensitivity-label.md)| No | Limited** | No |
\** Lineage is supported if dataset is used as a source/sink in [Data Factory Copy activity](how-to-link-azure-data-factory.md)
purview Register Scan Azure Multiple Sources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-azure-multiple-sources.md
Previously updated : 03/17/2023 Last updated : 04/13/2023
This article outlines how to register multiple Azure sources and how to authenti
## Supported capabilities
-|**Metadata Extraction**| **Full Scan** |**Incremental Scan**|**Scoped Scan**|**Classification**|**Access Policy**|**Lineage**|**Data Sharing**|
-|||||||||
-| [Yes](#register) | [Yes](#scan) | [Yes](#scan) | [Yes](#scan)| [Yes](#scan)| [Yes](#access-policy) | [Source Dependant](catalog-lineage-user-guide.md)| No |
+|**Metadata Extraction**| **Full Scan** |**Incremental Scan**|**Scoped Scan**|**Classification**|**Labeling**|**Access Policy**|**Lineage**|**Data Sharing**|
+||||||||||
+| [Yes](#register) | [Yes](#scan) | [Yes](#scan) | [Yes](#scan)| [Yes](#scan)| [Source Dependent](create-sensitivity-label.md) | [Yes](#access-policy) | [Source Dependent](catalog-lineage-user-guide.md)| No |
## Prerequisites
To learn how to add permissions on each resource type within a subscription or r
## Scan >[!IMPORTANT]
-> Currently, scanning multiple Azure sources is only supported using Azure integration runtime.
+> Currently, scanning multiple Azure sources is only supported using the Azure integration runtime; therefore, only Microsoft Purview accounts that allow public access on the firewall can use this option.
Follow the steps below to scan multiple Azure sources to automatically identify assets and classify your data. For more information about scanning in general, see our [introduction to scans and ingestion](concept-scans-and-ingestion.md).
purview Register Scan Azure Mysql Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-azure-mysql-database.md
This article outlines how to register a database in Azure Database for MySQL, an
## Supported capabilities
-|**Metadata Extraction**| **Full Scan** |**Incremental Scan**|**Scoped Scan**|**Classification**|**Access Policy**|**Lineage**|**Data Sharing**|
-|||||||||
-| [Yes](#register) | [Yes](#scan)| [Yes*](#scan) | [Yes](#scan) | [Yes](#scan) | No | Limited** | No |
+|**Metadata Extraction**| **Full Scan** |**Incremental Scan**|**Scoped Scan**|**Classification**|**Labeling**|**Access Policy**|**Lineage**|**Data Sharing**|
+||||||||||
+| [Yes](#register) | [Yes](#scan)| [Yes*](#scan) | [Yes](#scan) | [Yes](#scan) | [Yes](create-sensitivity-label.md) | No | Limited** | No |
\* Microsoft Purview relies on UPDATE_TIME metadata from Azure Database for MySQL for incremental scans. In some cases, this field might not persist in the database and a full scan is performed. For more information, see [The INFORMATION_SCHEMA TABLES Table](https://dev.mysql.com/doc/refman/5.7/en/information-schema-tables-table.html) for MySQL.
purview Register Scan Azure Postgresql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-azure-postgresql.md
This article outlines how to register an Azure Database for PostgreSQL deployed
## Supported capabilities
-|**Metadata Extraction**| **Full Scan** |**Incremental Scan**|**Scoped Scan**|**Classification**|**Access Policy**|**Lineage**|**Data Sharing**|
-|||||||||
-| [Yes](#register) | [Yes](#scan)| [Yes](#scan) | [Yes](#scan) | [Yes](#scan) | No | Limited** | No |
+|**Metadata Extraction**| **Full Scan** |**Incremental Scan**|**Scoped Scan**|**Classification**|**Labeling**|**Access Policy**|**Lineage**|**Data Sharing**|
+||||||||||
+| [Yes](#register) | [Yes](#scan)| [Yes](#scan) | [Yes](#scan) | [Yes](#scan) | [Yes](create-sensitivity-label.md)| No | Limited** | No |
\** Lineage is supported if dataset is used as a source/sink in [Data Factory Copy activity](how-to-link-azure-data-factory.md)
purview Register Scan Azure Sql Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-azure-sql-database.md
This article outlines the process to register an Azure SQL database source in Mi
## Supported capabilities
-|Metadata extraction| Full scan |Incremental scan|Scoped scan|Classification|Access policy|Lineage|Data sharing|
-|||||||||
-| [Yes](#register-the-data-source) | [Yes](#scope-and-run-the-scan)|[Yes](#scope-and-run-the-scan) | [Yes](#scope-and-run-the-scan)|[Yes](#scope-and-run-the-scan)| [Yes](#set-up-access-policies) | [Yes (preview)](#extract-lineage-preview) | No |
+|**Metadata Extraction**| **Full Scan** |**Incremental Scan**|**Scoped Scan**|**Classification**|**Labeling**|**Access Policy**|**Lineage**|**Data Sharing**|
+||||||||||
+| [Yes](#register-the-data-source) | [Yes](#scope-and-run-the-scan)|[Yes](#scope-and-run-the-scan) | [Yes](#scope-and-run-the-scan)|[Yes](#scope-and-run-the-scan)| [Yes](create-sensitivity-label.md)| [Yes](#set-up-access-policies) | [Yes (preview)](#extract-lineage-preview) | No |
> [!NOTE]
-> Data lineage extraction is currently supported only for stored procedure runs. Lineage is also supported if Azure SQL tables or views are used as a source/sink in [Azure Data Factory Copy and Data Flow activities](how-to-link-azure-data-factory.md).
+> Data lineage extraction is currently supported only for stored procedure runs. Lineage is also supported if Azure SQL tables or views are used as a source/sink in [Azure Data Factory Copy and Data Flow activities](how-to-link-azure-data-factory.md).
When you're scanning Azure SQL Database, Microsoft Purview supports extracting technical metadata from these sources:
purview Register Scan Azure Sql Managed Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-azure-sql-managed-instance.md
This article outlines how to register and Azure SQL Managed Instance, as well as
## Supported capabilities
-|**Metadata Extraction**| **Full Scan** |**Incremental Scan**|**Scoped Scan**|**Classification**|**Access Policy**|**Lineage**|**Data Sharing**|
-|||||||||
-| [Yes](#register) | [Yes](#scan)| [Yes](#scan) | [Yes](#scan) | [Yes](#scan) | No | Limited** | No |
+|**Metadata Extraction**| **Full Scan** |**Incremental Scan**|**Scoped Scan**|**Classification**|**Labeling**|**Access Policy**|**Lineage**|**Data Sharing**|
+|||||||||--|
+| [Yes](#register) | [Yes](#scan)| [Yes](#scan) | [Yes](#scan) | [Yes](#scan) | [Yes](create-sensitivity-label.md)| No | Limited** | No |
-\** Lineage is supported if dataset is used as a source/sink in [Data Factory Copy activity](how-to-link-azure-data-factory.md)
+\** Lineage is supported if dataset is used as a source/sink in [Data Factory Copy activity](how-to-link-azure-data-factory.md)
## Prerequisites
purview Register Scan Azure Synapse Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-azure-synapse-analytics.md
This article outlines how to register dedicated SQL pools (formerly SQL DW), and
## Supported capabilities
-|**Metadata Extraction**| **Full Scan** |**Incremental Scan**|**Scoped Scan**|**Classification**|**Access Policy**|**Lineage**|**Data Sharing**|
-|||||||||
-| [Yes](#register) | [Yes](#scan)| [Yes](#scan)| [Yes](#scan)| [Yes](#scan)| No | Limited* | No |
+|**Metadata Extraction**| **Full Scan** |**Incremental Scan**|**Scoped Scan**|**Classification**|**Labeling**|**Access Policy**|**Lineage**|**Data Sharing**|
+|||||||||--|
+| [Yes](#register) | [Yes](#scan)| [Yes](#scan)| [Yes](#scan)| [Yes](#scan)|[Yes](create-sensitivity-label.md)| No | Limited* | No |
\* *Lineage is supported if dataset is used as a source/sink in [Data Factory](how-to-link-azure-data-factory.md) or [Synapse pipeline](how-to-lineage-azure-synapse-analytics.md).*
purview Register Scan Cassandra Source https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-cassandra-source.md
This article outlines how to register Cassandra, and how to authenticate and int
## Supported capabilities
-|**Metadata Extraction**| **Full Scan** |**Incremental Scan**|**Scoped Scan**|**Classification**|**Access Policy**|**Lineage**|**Data Sharing**|
-|||||||||
-| [Yes](#register) | [Yes](#scan)| No | [Yes](#scan) | No | No| [Yes](#lineage)| No |
+|**Metadata Extraction**| **Full Scan** |**Incremental Scan**|**Scoped Scan**|**Classification**|**Labeling**|**Access Policy**|**Lineage**|**Data Sharing**|
+||||||||||
+| [Yes](#register) | [Yes](#scan)| No | [Yes](#scan) | No | No| No| [Yes](#lineage)| No |
The supported Cassandra server versions are 3.*x* or 4.*x*.
purview Register Scan Db2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-db2.md
This article outlines how to register Db2, and how to authenticate and interact
## Supported capabilities
-|**Metadata Extraction**| **Full Scan** |**Incremental Scan**|**Scoped Scan**|**Classification**|**Access Policy**|**Lineage**|**Data Sharing**|
-|||||||||
-| [Yes](#register)| [Yes](#scan)| No | [Yes](#scan) | No | No| [Yes](#lineage)| No |
+|**Metadata Extraction**| **Full Scan** |**Incremental Scan**|**Scoped Scan**|**Classification**|**Labeling**|**Access Policy**|**Lineage**|**Data Sharing**|
+||||||||||
+| [Yes](#register)| [Yes](#scan)| No | [Yes](#scan) | No | No| No| [Yes](#lineage)| No |
The supported IBM Db2 versions are Db2 for LUW 9.7 to 11.x. Db2 for z/OS (mainframe) and iSeries (AS/400) aren't supported now.
purview Register Scan Erwin Source https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-erwin-source.md
This article outlines how to register erwin Mart servers, and how to authenticat
## Supported capabilities
-|**Metadata Extraction**| **Full Scan** |**Incremental Scan**|**Scoped Scan**|**Classification**|**Access Policy**|**Lineage**|**Data Sharing**|
-|||||||||
-| [Yes](#register)| [Yes](#scan)| No | [Yes](#scan) | No | No| [Yes](#lineage)| No |
+|**Metadata Extraction**| **Full Scan** |**Incremental Scan**|**Scoped Scan**|**Classification**|**Labeling**|**Access Policy**|**Lineage**|**Data Sharing**|
+||||||||||
+| [Yes](#register)| [Yes](#scan)| No | [Yes](#scan) | No | No| No| [Yes](#lineage)| No |
The supported erwin Mart versions are 9.x to 2021.
purview Register Scan Google Bigquery Source https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-google-bigquery-source.md
This article outlines how to register Google BigQuery projects, and how to authe
## Supported capabilities
-|**Metadata Extraction**| **Full Scan** |**Incremental Scan**|**Scoped Scan**|**Classification**|**Access Policy**|**Lineage**|**Data Sharing**|
-|||||||||
-| [Yes](#register)| [Yes](#scan)| No | [Yes](#scan) | No | No| [Yes](#lineage)| No |
+|**Metadata Extraction**| **Full Scan** |**Incremental Scan**|**Scoped Scan**|**Classification**|**Labeling**|**Access Policy**|**Lineage**|**Data Sharing**|
+||||||||||
+| [Yes](#register)| [Yes](#scan)| No | [Yes](#scan) | No | No| No| [Yes](#lineage)| No |
When scanning Google BigQuery source, Microsoft Purview supports:
purview Register Scan Hdfs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-hdfs.md
This article outlines how to register Hadoop Distributed File System (HDFS), and
## Supported capabilities
-|**Metadata Extraction**|**Full Scan**|**Incremental Scan**|**Scoped Scan**|**Classification**|**Access Policy**|**Lineage**|**Data Sharing**|
-|||||||||
-| [Yes](#register)| [Yes](#scan)| [Yes](#scan) | [Yes](#scan) | [Yes](#scan) | No| No | No|
+|**Metadata Extraction**| **Full Scan** |**Incremental Scan**|**Scoped Scan**|**Classification**|**Labeling**|**Access Policy**|**Lineage**|**Data Sharing**|
+||||||||||
+| [Yes](#register)| [Yes](#scan)| [Yes](#scan) | [Yes](#scan) | [Yes](#scan) | No| No | No | No|
When scanning HDFS source, Microsoft Purview supports extracting technical metadata including HDFS:
purview Register Scan Hive Metastore Source https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-hive-metastore-source.md
This article outlines how to register Hive Metastore databases, and how to authe
## Supported capabilities
-|**Metadata Extraction**| **Full Scan** |**Incremental Scan**|**Scoped Scan**|**Classification**|**Access Policy**|**Lineage**|**Data Sharing**|
-|||||||||
-| [Yes](#register)| [Yes](#scan)| No | [Yes](#scan) | No | No| [Yes*](#lineage) | No |
+|**Metadata Extraction**| **Full Scan** |**Incremental Scan**|**Scoped Scan**|**Classification**|**Labeling**|**Access Policy**|**Lineage**|**Data Sharing**|
+||||||||||
+| [Yes](#register)| [Yes](#scan)| No | [Yes](#scan) | No | No| No| [Yes*](#lineage) | No |
\* *Besides the lineage on assets within the data source, lineage is also supported if dataset is used as a source/sink in [Data Factory](how-to-link-azure-data-factory.md) or [Synapse pipeline](how-to-lineage-azure-synapse-analytics.md).*
purview Register Scan Looker Source https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-looker-source.md
This article outlines how to register Looker, and how to authenticate and intera
## Supported capabilities
-|**Metadata Extraction**| **Full Scan** |**Incremental Scan**|**Scoped Scan**|**Classification**|**Access Policy**|**Lineage**|**Data Sharing**|
-|||||||||
-| [Yes](#register)| [Yes](#scan)| No | [Yes](#scan) | No | No| [Yes](#lineage)| No |
+|**Metadata Extraction**| **Full Scan** |**Incremental Scan**|**Scoped Scan**|**Classification**|**Labeling**|**Access Policy**|**Lineage**|**Data Sharing**|
+||||||||||
+| [Yes](#register)| [Yes](#scan)| No | [Yes](#scan) | No | No| No| [Yes](#lineage)| No |
The supported Looker server version is 7.2.
purview Register Scan Mongodb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-mongodb.md
This article outlines how to register MongoDB, and how to authenticate and inter
## Supported capabilities
-|**Metadata Extraction**| **Full Scan** |**Incremental Scan**|**Scoped Scan**|**Classification**|**Access Policy**|**Lineage**|**Data Sharing**|
-|||||||||
-| [Yes](#register)| [Yes](#scan)| No | [Yes](#scan) | No | No| No | No |
+|**Metadata Extraction**| **Full Scan** |**Incremental Scan**|**Scoped Scan**|**Classification**|**Labeling**|**Access Policy**|**Lineage**|**Data Sharing**|
+||||||||||
+| [Yes](#register)| [Yes](#scan)| No | [Yes](#scan) | No | No| No| No | No |
The supported MongoDB versions are 2.6 to 5.1.
purview Register Scan Mysql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-mysql.md
This article outlines how to register MySQL, and how to authenticate and interac
## Supported capabilities
-|**Metadata Extraction**| **Full Scan** |**Incremental Scan**|**Scoped Scan**|**Classification**|**Access Policy**|**Lineage**|**Data Sharing**|
-|||||||||
-| [Yes](#register)| [Yes](#scan)| No | [Yes](#scan) | No | No| [Yes](#lineage)| No|
+|**Metadata Extraction**| **Full Scan** |**Incremental Scan**|**Scoped Scan**|**Classification**|**Labeling**|**Access Policy**|**Lineage**|**Data Sharing**|
+||||||||||
+| [Yes](#register)| [Yes](#scan)| No | [Yes](#scan) | No |No| No| [Yes](#lineage)| No|
The supported MySQL server versions are 5.7 to 8.x.
purview Register Scan On Premises Sql Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-on-premises-sql-server.md
This article outlines how to register on-premises SQL server instances, and how
## Supported capabilities
-|**Metadata Extraction**| **Full Scan** |**Incremental Scan**|**Scoped Scan**|**Classification**|**Access Policy**|**Lineage**|**Data Sharing**|
-|||||||||
-| [Yes](#register) | [Yes](#scan) | [Yes](#scan) | [Yes](#scan) | [Yes](#scan) | No| Limited** | No |
+|**Metadata Extraction**| **Full Scan** |**Incremental Scan**|**Scoped Scan**|**Classification**|**Labeling**|**Access Policy**|**Lineage**|**Data Sharing**|
+|||||||||--|
+| [Yes](#register) | [Yes](#scan) | [Yes](#scan) | [Yes](#scan) | [Yes](#scan) | [Yes](create-sensitivity-label.md)| No| Limited** | No |
\** Lineage is supported if dataset is used as a source/sink in [Data Factory Copy activity](how-to-link-azure-data-factory.md)
purview Register Scan Oracle Source https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-oracle-source.md
This article outlines how to register Oracle, and how to authenticate and intera
## Supported capabilities
-|**Metadata Extraction**| **Full Scan** |**Incremental Scan**|**Scoped Scan**|**Classification**|**Access Policy**|**Lineage**|**Data Sharing**|
-|||||||||
-| [Yes](#register)| [Yes](#scan)| No | [Yes](#scan) | [Yes](#scan) | No| [Yes*](#lineage)| No |
+|**Metadata Extraction**| **Full Scan** |**Incremental Scan**|**Scoped Scan**|**Classification**|**Labeling**|**Access Policy**|**Lineage**|**Data Sharing**|
+||||||||||
+| [Yes](#register)| [Yes](#scan)| No | [Yes](#scan) | [Yes](#scan) | No |No| [Yes*](#lineage)| No |
\* *Besides the lineage on assets within the data source, lineage is also supported if dataset is used as a source/sink in [Data Factory](how-to-link-azure-data-factory.md) or [Synapse pipeline](how-to-lineage-azure-synapse-analytics.md).*
purview Register Scan Postgresql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-postgresql.md
This article outlines how to register PostgreSQL, and how to authenticate and in
## Supported capabilities
-|**Metadata Extraction**| **Full Scan** |**Incremental Scan**|**Scoped Scan**|**Classification**|**Access Policy**|**Lineage**|**Data Sharing**|
-|||||||||
-| [Yes](#register)| [Yes](#scan)| No | [Yes](#scan) | No | No| [Yes](#lineage) | No |
+|**Metadata Extraction**| **Full Scan** |**Incremental Scan**|**Scoped Scan**|**Classification**|**Labeling**|**Access Policy**|**Lineage**|**Data Sharing**|
+||||||||||
+| [Yes](#register)| [Yes](#scan)| No | [Yes](#scan) | No |No| No| [Yes](#lineage) | No |
The supported PostgreSQL server versions are 8.4 to 12.x.
purview Register Scan Power Bi Tenant Cross Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-power-bi-tenant-cross-tenant.md
This article outlines how to register a Power BI tenant in a cross-tenant scenar
## Supported capabilities
-|**Metadata extraction**| **Full scan** |**Incremental scan**|**Scoped scan**|**Classification**|**Access policy**|**Lineage**|**Data Sharing**|
-|||||||||
-| [Yes](#deployment-checklist)| [Yes](#deployment-checklist)| Yes | No | No | No| [Yes](how-to-lineage-powerbi.md)| No|
+|**Metadata Extraction**| **Full Scan** |**Incremental Scan**|**Scoped Scan**|**Classification**|**Labeling**|**Access Policy**|**Lineage**|**Data Sharing**|
+||||||||||
+| [Yes](#deployment-checklist)| [Yes](#deployment-checklist)| Yes | No | No | No| No| [Yes](how-to-lineage-powerbi.md)| No|
When scanning Power BI source, Microsoft Purview supports:
purview Register Scan Power Bi Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-power-bi-tenant.md
This article outlines how to register a Power BI tenant in a **same-tenant scena
## Supported capabilities
-|**Metadata Extraction**| **Full Scan** |**Incremental Scan**|**Scoped Scan**|**Classification**|**Access Policy**|**Lineage**|**Data Sharing**|
-|||||||||
-| [Yes](#deployment-checklist)| [Yes](#deployment-checklist)| Yes | No | No | No| [Yes](how-to-lineage-powerbi.md)| No |
+|**Metadata Extraction**| **Full Scan** |**Incremental Scan**|**Scoped Scan**|**Classification**|**Labeling**|**Access Policy**|**Lineage**|**Data Sharing**|
+||||||||||
+| [Yes](#deployment-checklist)| [Yes](#deployment-checklist)| Yes | No | No |No| No| [Yes](how-to-lineage-powerbi.md)| No |
When scanning Power BI source, Microsoft Purview supports:
purview Register Scan Salesforce https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-salesforce.md
This article outlines how to register Salesforce, and how to authenticate and in
## Supported capabilities
-|**Metadata Extraction**| **Full Scan** |**Incremental Scan**|**Scoped Scan**|**Classification**|**Access Policy**|**Lineage**|**Data Sharing**|
-|||||||||
-| [Yes](#register)| [Yes](#scan)| No | [Yes](#scan) | No | No| No|
+|**Metadata Extraction**| **Full Scan** |**Incremental Scan**|**Scoped Scan**|**Classification**|**Labeling**|**Access Policy**|**Lineage**|**Data Sharing**|
+||||||||||
+| [Yes](#register)| [Yes](#scan)| No | [Yes](#scan) | No| No | No| No| No|
When scanning Salesforce source, Microsoft Purview supports extracting technical metadata including:
purview Register Scan Sap Bw https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-sap-bw.md
This article outlines how to register SAP Business Warehouse (BW), and how to au
## Supported capabilities
-|**Metadata Extraction**| **Full Scan** |**Incremental Scan**|**Scoped Scan**|**Classification**|**Access Policy**|**Lineage**|**Data Sharing**|
-|||||||||
-| [Yes](#register)| [Yes](#scan)| No | No | No | No| No|No|
+|**Metadata Extraction**| **Full Scan** |**Incremental Scan**|**Scoped Scan**|**Classification**|**Labeling**|**Access Policy**|**Lineage**|**Data Sharing**|
+||||||||||
+| [Yes](#register)| [Yes](#scan)| No | No | No | No| No|No|No|
The supported SAP BW versions are 7.3 to 7.5. SAP BW/4HANA isn't supported.
purview Register Scan Sap Hana https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-sap-hana.md
This article outlines how to register SAP HANA, and how to authenticate and inte
## Supported capabilities
-|**Metadata extraction**| **Full scan** |**Incremental scan**|**Scoped scan**|**Classification**|**Access policy**|**Lineage**| **Data Sharing**|
-|||||||||
-| [Yes](#register)| [Yes](#scan)| No | [Yes](#scan) | No | No|No | No |
+|**Metadata Extraction**| **Full Scan** |**Incremental Scan**|**Scoped Scan**|**Classification**|**Labeling**|**Access Policy**|**Lineage**|**Data Sharing**|
+||||||||||
+| [Yes](#register)| [Yes](#scan)| No | [Yes](#scan) | No | No|No | No | No |
When scanning SAP HANA source, Microsoft Purview supports extracting technical metadata including:
purview Register Scan Sapecc Source https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-sapecc-source.md
This article outlines how to register SAP ECC, and how to authenticate and inter
## Supported capabilities
-|**Metadata Extraction**| **Full Scan** |**Incremental Scan**|**Scoped Scan**|**Classification**|**Access Policy**|**Lineage**|**Data Sharing**|
-|||||||||
-| [Yes](#register)| [Yes](#scan)| No | No | No | No| [Yes*](#lineage)| No |
+|**Metadata Extraction**| **Full Scan** |**Incremental Scan**|**Scoped Scan**|**Classification**|**Labeling**|**Access Policy**|**Lineage**|**Data Sharing**|
+||||||||||
+| [Yes](#register)| [Yes](#scan)| No | No | No | No| No| [Yes*](#lineage)| No |
\* *Besides the lineage on assets within the data source, lineage is also supported if dataset is used as a source/sink in [Data Factory](how-to-link-azure-data-factory.md) or [Synapse pipeline](how-to-lineage-azure-synapse-analytics.md).*
purview Register Scan Saps4hana Source https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-saps4hana-source.md
This article outlines how to register SAP S/4HANA, and how to authenticate and i
## Supported capabilities
-|**Metadata Extraction**| **Full Scan** |**Incremental Scan**|**Scoped Scan**|**Classification**|**Access Policy**|**Lineage**|**Data Sharing**|
-|||||||||
-| [Yes](#register)| [Yes](#scan)| No | No | No | No| [Yes*](#lineage)| No |
+|**Metadata Extraction**| **Full Scan** |**Incremental Scan**|**Scoped Scan**|**Classification**|**Labeling**|**Access Policy**|**Lineage**|**Data Sharing**|
+||||||||||
+| [Yes](#register)| [Yes](#scan)| No | No | No | No| No| [Yes*](#lineage)| No |
\* *Besides the lineage on assets within the data source, lineage is also supported if dataset is used as a source/sink in [Data Factory](how-to-link-azure-data-factory.md) or [Synapse pipeline](how-to-lineage-azure-synapse-analytics.md).*
purview Register Scan Snowflake https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-snowflake.md
This article outlines how to register Snowflake, and how to authenticate and int
## Supported capabilities
-|**Metadata Extraction**| **Full Scan** |**Incremental Scan**|**Scoped Scan**|**Classification**|**Access Policy**|**Lineage**|**Data Sharing**|
-|||||||||
-| [Yes](#register)| [Yes](#scan)| No | [Yes](#scan) | [Yes](#scan) | No| [Yes](#lineage) | No|
+|**Metadata Extraction**| **Full Scan** |**Incremental Scan**|**Scoped Scan**|**Classification**|**Labeling**|**Access Policy**|**Lineage**|**Data Sharing**|
+||||||||||
+| [Yes](#register)| [Yes](#scan)| No | [Yes](#scan) | [Yes](#scan) | No| No| [Yes](#lineage) | No|
When scanning Snowflake source, Microsoft Purview supports:
purview Register Scan Synapse Workspace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-synapse-workspace.md
This article outlines how to register Azure Synapse Analytics workspaces and how
## Supported capabilities
-|**Metadata Extraction**| **Full Scan** |**Incremental Scan**|**Scoped Scan**|**Classification**|**Access Policy**|**Lineage**|**Data Sharing**|
-|||||||||
-| [Yes](#register) | [Yes](#scan)| [Yes](#scan) | No| [Yes](#scan)| No| [Yes- Synapse pipelines](how-to-lineage-azure-synapse-analytics.md)| No|
+|**Metadata Extraction**| **Full Scan** |**Incremental Scan**|**Scoped Scan**|**Classification**|**Labeling**|**Access Policy**|**Lineage**|**Data Sharing**|
+||||||||||
+| [Yes](#register) | [Yes](#scan)| [Yes](#scan) | No| [Yes](#scan)| No| No| [Yes- Synapse pipelines](how-to-lineage-azure-synapse-analytics.md)| No|
>[!NOTE] >Currently, Azure Synapse lake databases are not supported.
purview Register Scan Teradata Source https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-teradata-source.md
This article outlines how to register Teradata, and how to authenticate and inte
## Supported capabilities
-|**Metadata Extraction**| **Full Scan** |**Incremental Scan**|**Scoped Scan**|**Classification**|**Access Policy**|**Lineage**|**Data Sharing**|
-|||||||||
-| [Yes](#register)| [Yes](#scan)| No | [Yes](#scan)| [Yes](#scan)| No | [Yes*](#lineage)| No |
+|**Metadata Extraction**| **Full Scan** |**Incremental Scan**|**Scoped Scan**|**Classification**|**Labeling**|**Access Policy**|**Lineage**|**Data Sharing**|
+||||||||||
+| [Yes](#register)| [Yes](#scan)| No | [Yes](#scan)| [Yes](#scan)| No| No | [Yes*](#lineage)| No |
\* *Besides the lineage on assets within the data source, lineage is also supported if dataset is used as a source/sink in [Data Factory](how-to-link-azure-data-factory.md) or [Synapse pipeline](how-to-lineage-azure-synapse-analytics.md).*
reliability Availability Service By Category https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/availability-service-by-category.md
Title: Azure services
-description: Learn about Region types and service categories in Azure.
+ Title: Available Azure services by region types and categories
+description: Learn about region types and service categories in Azure.
reliability Availability Zones Baseline https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/availability-zones-baseline.md
+
+ Title: Azure availability zone migration baseline
+description: Learn how to assess the availability-zone readiness of your application for the purposes of migrating from non-availability zone to availability zone support.
++++ Last updated : 04/06/2023++++
+# Azure availability zone migration baseline
+
+This article shows you how to assess the availability-zone readiness of your application for the purposes of migrating from non-availability zone to availability zone support. We'll take you through the steps you'll need to determine how you can take advantage of availability zone support in alignment with your application and regional requirements. For more detailed information on availability zones and the regions that support them, see [What are Azure regions and availability zones](availability-zones-overview.md).
+
+When creating reliable workloads, you can choose at least one of the following availability zone configurations:
+
+ - **Zonal**. A zonal configuration provides a specific, self-selected availability zone.
+
+ - **Zone-redundant**. A zone-redundant configuration provides resources that are replicated or distributed across zones automatically.
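As a rough, hedged sketch of these two options in practice (the resource group, names, region, and image alias below are placeholders, not values from this article), a zonal resource is pinned to a single zone while a zone-redundant resource is spread across zones:

```bash
# Zonal: pin an IaaS VM to a specific availability zone (zone 1 in eastus2).
az vm create \
  --resource-group my-rg \
  --name zonal-vm \
  --location eastus2 \
  --image Ubuntu2204 \
  --zone 1 \
  --admin-username azureuser \
  --generate-ssh-keys

# Zone-redundant: a storage account whose data is replicated across the
# region's availability zones (ZRS redundancy).
az storage account create \
  --resource-group my-rg \
  --name mystoragezrs001 \
  --location eastus2 \
  --sku Standard_ZRS
```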
+
+In addition to the two availability zone options, zonal and zone-redundant, Azure offers **Global services**, meaning that they're available globally regardless of region. Because these services are always available across regions, they're resilient to both regional and zonal outages. You don't need to configure or enable these services.
+
+To see which Azure services support availability zones, see [Availability zone service and regional support](availability-zones-service-support.md).
+
+
+>[!NOTE]
+>When you don't select a zone configuration for your resource, whether zonal or zone-redundant, the resource and its sub-components won't be zone resilient and can go down during a zonal outage in that region.
+
+## Considerations for migrating to availability zone support
+
+ There are many ways to create a reliable Azure application with availability zones that meet both SLAs and reliability targets. Follow the steps in this section to choose the right approach for your needs based on technical and regulatory considerations, service capabilities, data residency, compliance requirements, and latency.
+
+### Step 1: Check if the Azure region supports availability zones
+
+In this first step, you need to [validate](availability-zones-service-support.md) that your selected Azure region supports availability zones and the Azure services that your application requires.
+
+If your region supports availability zones, we highly recommend that you configure your workload for availability zones. If your region doesn't support availability zones, you'll need to use [Azure Resource Mover guidance](/azure/resource-mover/move-region-availability-zone) to migrate to a region that offers availability zone support.
+
+>[!NOTE]
+>For some services, availability zones can only be configured during deployment. If you want to include availability zones for existing services, you may need to redeploy. Please refer to service specific documentation in [Availability zone migration guidance overview for Microsoft Azure products and services](/azure/reliability/availability-zones-migration-overview).
++
+### Step 2: Check for product and SKU availability in the Azure region
+
+In this step, you'll validate that the required Azure services and SKUs are available in the availability zones of your selected Azure region.
+
+To check for regional support of services, see [Products available by region](https://azure.microsoft.com/explore/global-infrastructure/products-by-region/).
+
+To list the available VM SKUs by Azure region and zone, see [Check VM SKU availability](/azure/virtual-machines/windows/create-powershell-availability-zone#check-vm-sku-availability).
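As a minimal, hedged sketch (the region name is a placeholder), the Azure CLI can report which VM SKUs are offered in each zone of a region:

```bash
# List VM SKUs in the target region that support availability zones,
# including the specific zones each SKU is available in.
az vm list-skus \
  --location eastus2 \
  --resource-type virtualMachines \
  --zone \
  --output table
```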
+
+If your region doesn't support the services and SKUs that your application requires, you'll need to go back to [Step 1: Check if the Azure region supports availability zones](#step-1-check-if-the-azure-region-supports-availability-zones) to find a new region.
+
+If the availability zones in your region support the services and SKUs that your application requires, we highly recommend that you configure your workload with zone redundancy. For zonal high availability of Azure IaaS Virtual Machines, use [Virtual Machine Scale Sets Flex](/azure/virtual-machine-scale-sets/virtual-machine-scale-sets-orchestration-modes) to spread VMs across multiple availability zones.
++
+### Step 3: Consider your application requirements
+
+In this final step, you'll determine, based on your application requirements, which kind of availability zone support is most suitable for your application.
+
+Below are three important questions that can help you choose the correct availability zone deployment:
+
+#### Does your application include latency sensitive components?
+
+Azure availability zones within the same Azure region are connected by a high-performance network [with a round-trip latency of less than 2 ms](/azure/reliability/availability-zones-overview#availability-zones).
+
+The recommended approach to achieving high availability, if low latency isn't a strict requirement, is to configure your workload with a zone redundant deployment.
+
+For critical application components that require physical proximity and low latency, such as gaming, engineering simulation, and high-frequency trading (HFT), we recommend that you configure a zonal deployment. [Virtual Machine Scale Sets Flex](/azure/virtual-machine-scale-sets/virtual-machine-scale-sets-orchestration-modes) provides zone aligned compute along with attached storage disks.
++
+#### Does your application code have the readiness to handle a distributed model?
+
+In a [distributed microservices model](/azure/architecture/guide/architecture-styles/microservices), depending on your application, there can be ongoing data exchange between microservices across zones. This continual data exchange through APIs could affect performance. To improve performance and maintain a reliable architecture, you can choose a zonal deployment.
+
+With a zonal deployment, you must:
+
+1. Identify latency sensitive resources or services in your architecture.
+1. Confirm that the latency sensitive resources or services support zonal deployment.
+1. Co-locate the latency sensitive resources or services in the same zone. Other services in your architecture may continue to remain zone redundant.
+1. Replicate the latency sensitive zonal services across multiple availability zones to ensure you're zone resilient.
+1. Load balance between the multiple zonal deployments with a standard or global load balancer, as shown in the sketch below.
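A minimal sketch of that last step, assuming a resource group `my-rg` and placeholder resource names (none of these values come from this article), is a zone-redundant Standard public IP fronting a Standard Load Balancer whose backend pool then contains the VMs from each zonal deployment:

```bash
# Zone-redundant Standard public IP used as the shared frontend.
az network public-ip create \
  --resource-group my-rg \
  --name zr-frontend-ip \
  --sku Standard \
  --zone 1 2 3

# Standard Load Balancer that uses the zone-redundant frontend; add the VMs
# from each zonal deployment to the backend pool to balance across zones.
az network lb create \
  --resource-group my-rg \
  --name zonal-workload-lb \
  --sku Standard \
  --public-ip-address zr-frontend-ip \
  --frontend-ip-name zr-frontend \
  --backend-pool-name zonal-backends
```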
+
+If the Azure service supports availability zones, we highly recommend that you use zone-redundancy by spreading nodes across the zones to get a higher uptime SLA and protection against zonal outages.
+
+For a 3-tier application, it's important to understand the state (stateful or stateless) of each tier (application, business, and data). State knowledge helps you to architect in alignment with the best practices and guidance according to the type of workload.
+
+For specialized workloads on Azure, such as the following examples, refer to the respective landing zone architecture guidance and best practices.
+
+- SAP
+ - [SAP workload configurations with Azure Availability Zones](/azure/sap/workloads/high-availability-zones)
+ - [Azure availability sets vs. availability zones](/azure/cloud-adoption-framework/scenarios/sap/eslz-business-continuity-and-disaster-recovery#azure-availability-sets-vs-availability-zones)
+
+- Azure Virtual Desktop
+ - [Business continuity and disaster recovery considerations for Azure Virtual Desktop](/azure/cloud-adoption-framework/scenarios/wvd/eslz-business-continuity-and-disaster-recovery)
+ - [General availability of support for Azure availability zones in the host pool deployment](https://techcommunity.microsoft.com/t5/azure-virtual-desktop-blog/announcing-general-availability-of-support-for-azure/ba-p/3636262)
+
+- Azure Kubernetes Service
+ - [Create an Azure Kubernetes Service (AKS) cluster that uses availability zones](/azure/aks/availability-zones)
+ - [Operations management considerations for Azure Kubernetes Service](/azure/cloud-adoption-framework/scenarios/app-platform/aks/management)
+ - [Migrate Azure Kubernetes Service (AKS) and MySQL Flexible Server workloads to availability zone support](/azure/reliability/migrate-workload-aks-mysql)
+
+- Oracle
+ - [Oracle on Azure architecture design](/azure/architecture/solution-ideas/articles/oracle-on-azure-start-here )
++
+#### Do you want to achieve BCDR in the same Azure region due to compliance, data residency, or governance requirements?
+
+To achieve business continuity and disaster recovery within the same region and when there **is no regional pair**, we highly recommend that you configure your workload with zone-redundancy. A single-region approach is also applicable to certain industries that have strict data residency and governance requirements within the same Azure region. To learn how to replicate, failover, and failback Azure virtual machines from one availability zone to another within the same Azure region, see [Enable Azure VM disaster recovery between availability zones](/azure/site-recovery/azure-to-azure-how-to-enable-zone-to-zone-disaster-recovery).
+
+If you require a multi-region architecture, or if your Azure region doesn't support availability zones, we recommend that you use regional pairs. Regional pairs are located far apart, at around 100 miles, which gives you blast-radius protection from region-level failures such as fire, flooding, earthquakes, and other natural or unforeseen calamities. For more information, see [Cross-region replication in Azure: Business continuity and disaster recovery](/azure/reliability/cross-region-replication-azure).
+
+>[!NOTE]
+>There can be scenarios where a combination of zonal, zone-redundant, and global services works best to meet business and technical requirements.
+
+### Other points to consider
+
+- To learn about testing your applications for availability and resiliency, see [Testing applications for availability and resiliency](/azure/architecture/framework/resiliency/testing).
+
+- Each data center in a region is assigned to a physical zone. Physical zones are mapped to the logical zones in your Azure subscription. Azure subscriptions are automatically assigned this mapping at the time a subscription is created. You can use the dedicated ARM REST API, [listLocations](/rest/api/resources/subscriptions/list-locations?tabs=HTTP), with API version 2022-12-01 to list the logical-to-physical zone mapping for your subscription (see the sketch after this list). This information is important for critical application components that require co-location with Azure resources categorized as [Strategic services](/azure/reliability/availability-service-by-category#strategic-services) that may not be available in all physical zones.
+
+- Inter-zone bandwidth charges apply when traffic moves across zones. To learn more about bandwidth pricing, see [Bandwidth pricing](https://azure.microsoft.com/pricing/details/bandwidth/).
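A hedged sketch of that zone-mapping lookup, using the Azure CLI's generic `az rest` command against the listLocations API (the JMESPath query is illustrative and assumes the response exposes an `availabilityZoneMappings` property at this API version):

```bash
# Query the subscription's locations with api-version 2022-12-01 and show
# how logical zones map to physical zones per region.
subscriptionId=$(az account show --query id --output tsv)

az rest \
  --method get \
  --url "https://management.azure.com/subscriptions/${subscriptionId}/locations?api-version=2022-12-01" \
  --query "value[].{region:name, zoneMappings:availabilityZoneMappings}" \
  --output json
```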
+
+## Next steps
++
+> [!div class="nextstepaction"]
+> [What are Azure regions and availability zones?](availability-zones-overview.md)
+
+> [!div class="nextstepaction"]
+> [IaaS: Web application with relational database](/azure/architecture/high-availability/ref-arch-iaas-web-and-db)
+
+> [!div class="nextstepaction"]
+> [Build solutions for high availability using availability zones](/azure/architecture/high-availability/building-solutions-for-high-availability)
+
+> [!div class="nextstepaction"]
+> [Azure reliability documentation](overview.md)
+++++
reliability Availability Zones Migration Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/availability-zones-migration-overview.md
Azure services that support availability zones, including zonal and zone-redundant offerings, are continually expanding. For that reason, resources that don't currently have availability zone support may have an opportunity to gain that support. The Migration Guides section offers a collection of guides for each service that requires certain procedures in order to move a resource from non-availability zone support to availability zone support. You'll find information on prerequisites for migration, downtime requirements, important migration considerations, and recommendations.
+To check the readiness of your application for availability zone support, see [Azure availability zone migration baseline](./availability-zones-baseline.md).
+ The table below lists each product that offers migration guidance and/or information. ## Azure services migration guides
reliability Migrate Api Mgt https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/migrate-api-mgt.md
In this article, we'll take you through the different options for availability z
## Prerequisites
-* To configure API Management for zone redundancy, your instance must be in one of the following regions:
-
- * Australia East
- * Brazil South
- * Canada Central
- * Central India
- * Central US
- * East Asia
- * East US
- * East US 2
- * France Central
- * Germany West Central
- * Japan East
- * Korea Central (*)
- * North Europe
- * Norway East
- * South Africa North (*)
- * South Central US
- * Southeast Asia
- * Switzerland North
- * UAE North
- * UK South
- * West Europe
- * West US 2
- * West US 3
-
- > [!IMPORTANT]
- > The regions with * against them have restrictive access in an Azure subscription to enable availability zone support. Please work with your Microsoft sales or customer representative.
+* To configure API Management for zone redundancy, your instance must be in one of the Azure regions with [availability zone support](availability-zones-service-support.md#azure-regions-with-availability-zone-support).
* If you haven't yet created an API Management service instance, see [Create an API Management service instance](../api-management/get-started-create-service-instance.md). Select the Premium service tier.
There are no downtime requirements for any of the migration options.
* Migrating to availability zones or changing the availability zone configuration will trigger a public [IP address change](../api-management/api-management-howto-ip-addresses.md#changes-to-the-ip-addresses).
+* When enabling availability zones in a region, you configure a number of API Management scale [units](../api-management/upgrade-and-scale.md) that can be distributed evenly across the zones. For example, if you configure 2 zones, you could configure 2 units, 4 units, or another multiple of 2 units. Adding units incurs additional costs. For details, see [API Management pricing](https://azure.microsoft.com/pricing/details/api-management/).
+ * If you've configured autoscaling for your API Management instance in the primary location, you might need to adjust your autoscale settings after enabling zone redundancy. The number of API Management units in autoscale rules and limits must be a multiple of the number of zones. ## Option 1: Migrate existing location of API Management instance, not injected in VNet
remote-rendering Unity Render Pipelines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/remote-rendering/how-tos/unity/unity-render-pipelines.md
To use the **:::no-loc text="Universal render pipeline":::**, its package has to
> [!NOTE] > If you're unable to drag and drop the *HybridRenderingPipeline* asset into the Render Pipeline Asset field (possibly because the field doesn't exist!), ensure your package configuration contains the `com.unity.render-pipelines.universal` package.
+## Set up the Standard Render Pipeline
+
+Unlike for the **:::no-loc text="Universal render pipeline":::**, there are no extra setup steps required for the **:::no-loc text="Standard render pipeline":::** to work with ARR. Instead, the ARR runtime sets the required render hooks automatically.
+ ## Next steps * [Install the Remote Rendering package for Unity](install-remote-rendering-unity-package.md)
role-based-access-control Conditions Prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/conditions-prerequisites.md
+ Last updated 10/24/2022 -
-#Customer intent:
# Prerequisites for Azure role assignment conditions
role-based-access-control Conditions Role Assignments Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/conditions-role-assignments-cli.md
+ Last updated 10/24/2022
role-based-access-control Conditions Role Assignments Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/conditions-role-assignments-powershell.md
+ Last updated 10/24/2022
role-based-access-control Conditions Role Assignments Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/conditions-role-assignments-template.md
+ Last updated 10/24/2022
role-based-access-control Conditions Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/conditions-troubleshoot.md
+ Last updated 01/07/2023 -
-#Customer intent:
# Troubleshoot Azure role assignment conditions
role-based-access-control Custom Roles Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/custom-roles-bicep.md
Last updated 07/01/2022 -+ #Customer intent: As an IT admin, I want to create custom and/or roles using Bicep so that I can start automating custom role processes.
role-based-access-control Custom Roles Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/custom-roles-cli.md
ms.assetid: 3483ee01-8177-49e7-b337-4d5cb14f5e32
na+ Last updated 04/05/2023
az role definition update --role-definition ~/roles/vmoperator.json
- [Tutorial: Create an Azure custom role using Azure CLI](tutorial-custom-role-cli.md) - [Azure custom roles](custom-roles.md)-- [Azure resource provider operations](resource-provider-operations.md)
+- [Azure resource provider operations](resource-provider-operations.md)
role-based-access-control Custom Roles Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/custom-roles-template.md
Last updated 10/19/2022 --+ #Customer intent: As an IT admin, I want to create custom roles by using an Azure Resource Manager template so that I can start automating custom role processes.- # Create or update Azure custom roles using an ARM template
role-based-access-control Quickstart Role Assignments Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/quickstart-role-assignments-bicep.md
-+ Last updated 06/30/2022
role-based-access-control Quickstart Role Assignments Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/quickstart-role-assignments-template.md
-+ Last updated 04/28/2021
role-based-access-control Role Assignments List Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/role-assignments-list-cli.md
ms.assetid: 3483ee01-8177-49e7-b337-4d5cb14f5e32
na+ Last updated 06/03/2022
az role assignment list --scope /providers/Microsoft.Management/managementGroups
## Next steps -- [Assign Azure roles using Azure CLI](role-assignments-cli.md)
+- [Assign Azure roles using Azure CLI](role-assignments-cli.md)
role-based-access-control Role Assignments Remove https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/role-assignments-remove.md
Last updated 10/19/2022 -+ ms.devlang: azurecli
role-based-access-control Role Assignments Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/role-assignments-template.md
Last updated 10/19/2022 -+ ms.devlang: azurecli # Assign Azure roles using Azure Resource Manager templates
route-server Quickstart Configure Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/route-server/quickstart-configure-template.md
Last updated 04/05/2021 -+ # Quickstart: Create an Azure Route Server using an ARM template
route-server Resource Manager Template Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/route-server/resource-manager-template-samples.md
Last updated 02/23/2023-+ # Azure Resource Manager templates for Azure Route Server
sap Advanced State Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/automation/bash/advanced-state-management.md
Last updated 10/21/2021
+ Title: advanced_state_management description: Updates the Terraform state file using a shell script
sap Configure Control Plane https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/automation/configure-control-plane.md
Last updated 03/05/2023
+ # Configure the control plane
sap Configure Devops https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/automation/configure-devops.md
Last updated 12/1/2022
+ # Use SAP on Azure Deployment Automation Framework from Azure DevOps Services
sap Configure System https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/automation/configure-system.md
Last updated 05/03/2022
+ # Configure SAP system parameters
az keyvault secret set --name "<prefix>-fencing-spn-tenant" --vault-name "<workl
> [!div class="nextstepaction"] > [Deploy SAP system](deploy-system.md)-
sap Configure Webapp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/automation/configure-webapp.md
Last updated 10/19/2022
+ # Configure the Control Plane Web Application
sap Configure Workload Zone https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/automation/configure-workload-zone.md
Last updated 09/13/2022
+ # Workload zone configuration in SAP automation framework
The table below contains the Terraform parameters. These parameters need to be e
## Next Step > [!div class="nextstepaction"]
-> [About SAP system deployment in automation framework](deploy-workload-zone.md)
+> [About SAP system deployment in automation framework](deploy-workload-zone.md)
sap Deploy Control Plane https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/automation/deploy-control-plane.md
Last updated 11/17/2021
+ # Deploy the control plane
sap Start Stop Sap Systems https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/center-sap-solutions/start-stop-sap-systems.md
In this how-to guide, you'll learn to start and stop your SAP systems through th
Through the Azure portal, you can start and stop: -- Application tier instances, which include ABAP SAP Central Services (ASCS) and Application Server instances. You can start and stop instances in the following types of deployments:
+- The entire SAP application tier in one go, which includes ABAP SAP Central Services (ASCS) and Application Server instances.
+- Individual SAP instances, which include Central Services and Application server instances.
+- HANA Database
+- You can start and stop instances in the following types of deployments:
- Single-Server - High Availability (HA) - Distributed Non-HA
Through the Azure portal, you can start and stop:
## Prerequisites - An SAP system that you've [created in Azure Center for SAP solutions](prepare-network.md) or [registered with Azure Center for SAP solutions](register-existing-system.md).-- For the start operation to work, all virtual machines (VMs) inside the SAP system must be running. This capability starts or stops the SAP application instances, not the VMs that make up the SAP system resources.
+- For the start operation to work, the underlying virtual machines (VMs) of the SAP instances must be running. This capability starts or stops the SAP application instances, not the VMs that make up the SAP system resources.
- The `sapstartsrv` service must be running on all VMs related to the SAP system. - For HA deployments, the HA interface cluster connector for SAP (`sap_vendor_cluster_connector`) must be installed on the ASCS instance. For more information, see the [SUSE connector specifications](https://www.suse.com/c/sap-netweaver-suse-cluster-integration-new-sap_suse_cluster_connector-version-3-0-0/) and [RHEL connector specifications](https://access.redhat.com/solutions/3606101).
+- For the HANA database, the stop operation is initiated only when the cluster maintenance mode is in **Disabled** status. Similarly, the start operation is initiated only when the cluster maintenance mode is in **Enabled** status.
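As a hedged illustration of checking that prerequisite on the HANA cluster nodes (the exact commands depend on the distribution and cluster tooling; these are common Pacemaker queries, not steps prescribed by this article):

```bash
# SUSE (crmsh): show the cluster-wide maintenance-mode property.
sudo crm configure show | grep maintenance-mode

# RHEL (pcs): show the same property.
sudo pcs property show maintenance-mode
```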
+
+## Supported scenarios
+The following scenarios are supported when starting and stopping SAP systems:
+
+- Stopping and starting an SAP system or individual instances from the VIS resource only stops or starts the SAP application. The underlying VMs are **not** stopped or started.
+- Stopping a highly available SAP system from the VIS resource gracefully stops the SAP instances in the right order and doesn't result in a failover of the Central Services instance.
+- Stopping the HANA database from the VIS resource results in the entire HANA instance being stopped. In the case of HANA MDC with multiple tenant databases, the entire instance is stopped, not a specific tenant DB.
## Stop SAP system
sap Businessobjects Deployment Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/businessobjects-deployment-guide.md
Title: SAP BusinessObjects BI Platform Deployment on Azure | Microsoft Docs description: Plan, deploy, and configure SAP BusinessObjects BI Platform on Azure
-tags: azure-resource-manager
-keywords: ''
Previously updated : 10/05/2020 Last updated : 04/13/2023
SAP BusinessObjects BI Platform is a self-contained system that can exist on a s
- **Client Tier:** It contains all desktop client applications that interact with the BI platform to provide different kind of reporting, analytic, and administrative capabilities. - **Web Tier:** It contains web applications deployed to Java web application servers. Web applications provide BI Platform functionality to end users through a web browser. - **Management Tier:** It coordinates and controls all the components that makes the BI Platform. It includes Central Management Server (CMS) and the Event Server and associated services-- **Storage Tier:** It is responsible for handling files, such as documents and reports. It also handles report caching to save system resources when user access reports.
+- **Storage Tier:** It's responsible for handling files, such as documents and reports. It also handles report caching to save system resources when users access reports.
- **Processing Tier:** It analyzes data, and produces reports and other output types. It's the only tier that accesses the databases that contain report data. - **Data Tier:** It consists of the database servers hosting the CMS system databases and Auditing Data Store.
-The SAP BI Platform consists of a collection of servers running on one or more hosts. It's essential that you choose the correct deployment strategy based on the sizing, business need and type of environment. For small installation like development or test, you can use a single Azure Virtual Machine for web application server, database server, and all BI Platform servers. In case you're using Database-as-a-Service (DBaaS) offering from Azure, database server will run separately from other components. For medium and large installation, you can have servers running on multiple Azure virtual machines.
+The SAP BI Platform consists of a collection of servers running on one or more hosts. It's essential that you choose the correct deployment strategy based on the sizing, business need, and type of environment. For small installations, like development or test, you can use a single Azure virtual machine for the web application server, database server, and all BI Platform servers. If you're using a Database-as-a-Service (DBaaS) offering from Azure, the database server runs separately from other components. For medium and large installations, you can have servers running on multiple Azure virtual machines.
-In below figure, architecture of large-scale deployment of SAP BOBI Platform on Azure virtual machines is shown, where each component is distributed and placed in availability sets that can sustain failover if there is service disruption.
+The figure below shows the architecture of a large-scale deployment of SAP BOBI Platform on Azure virtual machines, where each component is distributed and placed in availability sets that can sustain failover if there's a service disruption.
![SAP BusinessObjects BI Platform Architecture on Azure](./media/businessobjects-deployment-guide/businessobjects-architecture-on-azure.png)
In below figure, architecture of large-scale deployment of SAP BOBI Platform on
In Azure, you can either use [Azure Premium Files](../../storage/files/storage-files-introduction.md) or [Azure NetApp Files](../../azure-netapp-files/azure-netapp-files-introduction.md) for File Repository Server. Both of these Azure services have built-in redundancy.
- > [!Important]
- > SMB Protocol for Azure Files is generally available, but NFS Protocol support for Azure Files is currently in preview. For more information, see [NFS 4.1 support for Azure Files is now in preview](https://azure.microsoft.com/blog/nfs-41-support-for-azure-files-is-now-in-preview/)
- - CMS & audit database SAP BOBI Platform requires a database to store its system data, which is referred as CMS database. It's used to store BI platform information such as user, server, folder, document, configuration, and authentication details.
Azure SQL Database offers the following three purchasing models:
It lets you choose the number of vCores, amount of memory, and the amount and speed of storage. The vCore-based purchasing model also allows you to use [Azure Hybrid Benefit for SQL Server](https://azure.microsoft.com/pricing/hybrid-benefit/) to gain cost savings. This model is suited for customer who value flexibility, control, and transparency.
- There are three [Service Tier Options](/azure/azure-sql/database/service-tiers-vcore#service-tiers) being offered in vCore model that include - General Purpose, Business Critical, and Hyperscale. The service tier defines the storage architecture, space, I/O limits, and business continuity options related to availability and disaster recovery. Following is high-level details on each service tier option -
+ There are three [Service Tier Options](/azure/azure-sql/database/service-tiers-vcore#service-tiers) offered in the vCore model: General Purpose, Business Critical, and Hyperscale. The service tier defines the storage architecture, space, I/O limits, and business continuity options related to availability and disaster recovery. The following are high-level details on each service tier option:
1. **General Purpose** service tier is best suited for Business workloads. It offers budget-oriented, balanced, and scalable compute and storage options. For more information, refer [Resource options and limits](/azure/azure-sql/database/resource-limits-vcore-single-databases#general-purposeprovisioned-computegen5). 2. **Business Critical** service tier offers business applications the highest resilience to failures by using several isolated replicas, and provides the highest I/O performance per database replica. For more information, refer [Resource options and limits](/azure/azure-sql/database/resource-limits-vcore-single-databases#business-criticalprovisioned-computegen5).
Azure SQL Database offers the following three purchasing models:
- DTU-based
- The DTU-based purchasing model offers a blend of compute, memory, and I/O resources in three service tiers, to support light and heavy database workloads. Compute sizes within each tier provide a different mix of these resources, to which you can add additional storage resources. It's best suited for customers who want simple, pre-configure resource options.
+ The DTU-based purchasing model offers a blend of compute, memory, and I/O resources in three service tiers, to support light and heavy database workloads. Compute sizes within each tier provide a different mix of these resources, to which you can add additional storage resources. It's best suited for customers who want simple, preconfigured resource options.
[Service Tiers](/azure/azure-sql/database/service-tiers-dtu#compare-service-tiers) in the DTU-based purchasing model is differentiated by a range of compute sizes with a fixed amount of included storage, fixed retention period of backups, and fixed price.
Azure SQL Database offers the following three purchasing models:
The serverless model automatically scales compute based on workload demand, and bills for the amount of compute used per second. The serverless compute tier automatically pauses databases during inactive periods when only storage is billed, and automatically resumes databases when activity returns. For more information, refer [Resource options and limits](/azure/azure-sql/database/resource-limits-vcore-single-databases#general-purposeserverless-computegen5).
- It's more suitable for intermittent, unpredictable usage with low average compute utilization over time. So this model can be used for non-production SAP BOBI deployment.
+ It's more suitable for intermittent, unpredictable usage with low average compute utilization over time. So this model can be used for nonproduction SAP BOBI deployment.
> [!Note] > For SAP BOBI, it's convenient to use vCore based model and choose either General Purpose or Business Critical service tier based on the business need.
Azure Storage has different Storage types available for customers and details fo
- Azure Premium Files or Azure NetApp Files
- In SAP BOBI Platform, File Repository Server (FRS) refers to the disk directories where contents like reports, universes, and connections are stored which are used by all application servers of that system. [Azure Premium Files](../../storage/files/storage-files-introduction.md) or [Azure NetApp Files](../../azure-netapp-files/azure-netapp-files-introduction.md) storage can be used as a shared file system for SAP BOBI applications FRS. As this storage offering is not available all regions, refer to [Products available by region](https://azure.microsoft.com/global-infrastructure/services/) site to find out up-to-date information.
+ In SAP BOBI Platform, File Repository Server (FRS) refers to the disk directories where contents like reports, universes, and connections are stored, which are used by all application servers of that system. [Azure Premium Files](../../storage/files/storage-files-introduction.md) or [Azure NetApp Files](../../azure-netapp-files/azure-netapp-files-introduction.md) storage can be used as a shared file system for the SAP BOBI application's FRS. As this storage offering isn't available in all regions, refer to the [Products available by region](https://azure.microsoft.com/global-infrastructure/services/) site to find up-to-date information.
If the service is unavailable in your region, you can create an NFS server from which you can share the file system to the SAP BOBI application. But you'll also need to consider its high availability.
Azure Storage has different Storage types available for customers and details fo
### Networking
-SAP BOBI is a reporting and analytics BI platform that doesnΓÇÖt hold any business data. So the system is connected to other database servers from where it fetches all the data and provide insight to users. Azure provides a network infrastructure, which allows the mapping of all scenarios that can be realized with SAP BI Platform like connecting to on-premises system, systems in different virtual network and others. For more information check [Microsoft Azure Networking for SAP Workload](https://github.com/MicrosoftDocs/azure-docs/blob/main/articles/sap/workloads/planning-guide.md#microsoft-azure-networking).
+SAP BOBI is a reporting and analytics BI platform that doesn't hold any business data. The system is connected to other database servers from which it fetches all the data and provides insights to users. Azure provides a network infrastructure that allows the mapping of all scenarios that can be realized with SAP BI Platform, like connecting to on-premises systems, systems in a different virtual network, and others. For more information, check [Microsoft Azure Networking for SAP Workload](planning-guide.md#61678387-8868-435d-9f8c-450b2424f5bd).
For a Database-as-a-Service offering, any newly created database (Azure SQL Database or Azure Database for MySQL) has a firewall that blocks all external connections. To allow access to the DBaaS service from the BI Platform virtual machines, you need to specify one or more server-level firewall rules to enable access to your DBaaS server. For more information, see [Firewall rules](../../mysql/concepts-firewall-rules.md) for Azure Database for MySQL and the [Network Access Controls](/azure/azure-sql/database/network-access-controls-overview) section for Azure SQL Database.
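As a hedged sketch of such a server-level rule (resource names and the VM's outbound IP below are placeholders, not values from this guide):

```bash
# Azure SQL Database: allow the BI Platform VM's outbound public IP
# to reach the logical server that hosts the CMS database.
az sql server firewall-rule create \
  --resource-group my-rg \
  --server bobi-cms-sqlserver \
  --name allow-bi-platform-vm \
  --start-ip-address 203.0.113.10 \
  --end-ip-address 203.0.113.10

# Azure Database for MySQL (single server): equivalent rule.
az mysql server firewall-rule create \
  --resource-group my-rg \
  --server-name bobi-cms-mysql \
  --name allow-bi-platform-vm \
  --start-ip-address 203.0.113.10 \
  --end-ip-address 203.0.113.10
```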
sap Sap Ascs Ha Multi Sid Wsfc Azure Shared Disk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/sap-ascs-ha-multi-sid-wsfc-azure-shared-disk.md
vm-windows
Last updated 12/16/2022 --+ # SAP ASCS/SCS instance multi-SID high availability with Windows server failover clustering and Azure shared disk
For the outlined failover tests, we assume that SAP ASCS is active on node A.
[virtual-machines-azure-resource-manager-architecture-benefits-arm]:../../azure-resource-manager/management/overview.md#the-benefits-of-using-resource-manager
-[virtual-machines-manage-availability]:../../virtual-machines/availability.md
+[virtual-machines-manage-availability]:../../virtual-machines/availability.md
sap Sap High Availability Infrastructure Wsfc Shared Disk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/sap-high-availability-infrastructure-wsfc-shared-disk.md
vm-windows
Last updated 12/16/2022 --+ # Prepare the Azure infrastructure for SAP HA by using a Windows failover cluster and shared disk for SAP ASCS/SCS
sap Vm Extension For Sap New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/vm-extension-for-sap-new.md
editor: '' tags: azure-resource-manager+ keywords: '' ms.assetid: 1c4f1951-3613-4a5a-a0af-36b85750c84e
All error IDs have a unique tag in the form of a-#, where # is a number. It allo
## Next steps * [Azure Virtual Machines deployment for SAP NetWeaver](./deployment-guide.md)
-* [Azure Virtual Machines planning and implementation for SAP NetWeaver](./planning-guide.md)
+* [Azure Virtual Machines planning and implementation for SAP NetWeaver](./planning-guide.md)
sap Vm Extension For Sap Standard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/vm-extension-for-sap-standard.md
editor: '' tags: azure-resource-manager+ keywords: '' ms.assetid: 1c4f1951-3613-4a5a-a0af-36b85750c84e
Manually setting a static IP address inside the Azure VM is not supported, and m
## Next steps * [Azure Virtual Machines deployment for SAP NetWeaver](./deployment-guide.md)
-* [Azure Virtual Machines planning and implementation for SAP NetWeaver](./planning-guide.md)
+* [Azure Virtual Machines planning and implementation for SAP NetWeaver](./planning-guide.md)
search Search Get Started Arm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-get-started-arm.md
-+ Last updated 05/25/2022
search Search Get Started Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-get-started-bicep.md
-+ Last updated 05/16/2022
search Search Index Azure Sql Managed Instance With Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-index-azure-sql-managed-instance-with-managed-identity.md
Before learning more about this feature, it is recommended that you have an unde
To assign read permissions on SQL Managed Instance, you must be an Azure Global Admin with a SQL Managed Instance. See [Configure and manage Azure AD authentication with SQL Managed Instance](/azure/azure-sql/database/authentication-aad-configure) and follow the steps to provision an Azure AD admin (SQL Managed Instance).
-* [Configure a public endpoint and network security group in SQL Managed Instance](search-howto-connecting-azure-sql-mi-to-azure-search-using-indexers.md) to allow connections from Azure Cognitive Search. If your Azure SQL Managed Instance is configured for private connections, [create a shared private link](search-indexer-howto-access-private.md#create-a-shared-private-link-for-a-sql-managed-instance) in Cognitive Search to allow the connection.
+* [Configure a public endpoint and network security group in SQL Managed Instance](search-howto-connecting-azure-sql-mi-to-azure-search-using-indexers.md) to allow connections from Azure Cognitive Search. If your Azure SQL Managed Instance is configured for private connections, [create a shared private link](search-indexer-how-to-access-private-sql.md) in Cognitive Search to allow the connection.
## 1 - Assign permissions to read the database
search Search Indexer How To Access Private Sql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-indexer-how-to-access-private-sql.md
+
+ Title: Connect to SQL Managed Instance
+
+description: Configure an indexer connection to access content in an Azure SQL Managed instance that's protected through a private endpoint.
+++++ Last updated : 04/12/2023++
+# Create a shared private link for a SQL Managed Instance from Azure Cognitive Search
+
+This article explains how to configure an outbound indexer connection in Azure Cognitive Search to a SQL Managed Instance over a private endpoint.
+
+On a private connection to a SQL Managed Instance, the fully qualified domain name (FQDN) of the instance must include the [DNS Zone](/azure/azure-sql/managed-instance/connectivity-architecture-overview#virtual-cluster-connectivity-architecture). Currently, only the Azure Cognitive Search Management REST API provides a `resourceRegion` parameter for accepting the DNS zone specification.
+
+Although you can call the Management REST API directly, it's easier to use the Azure CLI `az rest` module to send Management REST API calls from a command line.
+
+> [!NOTE]
+> This article relies on Azure portal for obtaining properties and confirming steps. However, when creating the shared private link for SQL Managed Instance, be sure to use the REST API. Although the Networking tab lists `Microsoft.Sql/managedInstances` as an option, the portal doesn't currently support the extended URL format used by SQL Managed Instance.
+
+## Prerequisites
+++ [Azure CLI](/cli/azure/install-azure-cli)+++ Azure Cognitive Search, Basic or higher. If you're using [AI enrichment](cognitive-search-concept-intro.md) and skillsets, use Standard 2 (S2) or higher. See [Service limits](search-limits-quotas-capacity.md#shared-private-link-resource-limits) for details.+++ Azure SQL Managed Instance, configured to run in a virtual network, with a private endpoint created through Azure Private Link.+++ You should have a minimum of Contributor permissions on both Azure Cognitive Search and SQL Managed Instance.+
+## 1 - Private endpoint verification
+
+Check whether the managed instance has a private endpoint.
+
+1. [Sign in to Azure portal](https://portal.azure.com/).
+
+1. Type "private link" in the top search bar, and then select **Private Link** to open the Private Link Center.
+
+1. Select **Private endpoints** to view existing endpoints. You should see your SQL Managed Instance in this list.
+
+## 2 - Retrieve connection information
+
+Retrieve the FQDN of the managed instance, including the DNS zone. The DNS zone is part of the domain name of the SQL Managed Instance. For example, if the FQDN of the SQL Managed Instance is `my-sql-managed-instance.a1b22c333d44.database.windows.net`, the DNS zone is `a1b22c333d44`.
+
+1. In Azure portal, find the SQL managed instance object.
+
+1. On the **Overview** tab, locate the Host property. Copy the DNS zone portion of the FQDN for the next step.
+
+1. On the **Connection strings** tab, copy the ADO.NET connection string for a later step. It's needed for the data source connection when testing the private connection.
+
+For more information about connection properties, see [Create an Azure SQL Managed Instance](/azure/azure-sql/managed-instance/instance-create-quickstart?view=azuresql&preserve-view=true#retrieve-connection-details-to-sql-managed-instance).
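
If you'd rather script this lookup, a small Azure CLI sketch (with hypothetical resource names) returns the FQDN; the DNS zone is its second label. For `my-sql-managed-instance.a1b22c333d44.database.windows.net`, the value to use later for `resourceRegion` is `a1b22c333d44`.

```azurecli
az sql mi show --name my-sql-managed-instance --resource-group my-resource-group --query fullyQualifiedDomainName --output tsv
```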
+
+## 3 - Create the body of the request
+
+1. Using a text editor, create the JSON for the shared private link.
+
+ ```json
+ {
+ "name": "{{shared-private-link-name}}",
+ "properties": {
+ "privateLinkResourceId": "/subscriptions/{{target-resource-subscription-ID}}/resourceGroups/{{target-resource-rg}}/providers/Microsoft.Sql/managedInstances/{{target-resource-name}}",
+ "resourceRegion": "a1b22c333d44",
+ "groupId": "managedInstance",
+ "requestMessage": "please approve",
+ }
+ }
+ ```
+
+1. Provide a meaningful name for the shared private link. The shared private link appears alongside other private endpoints. A name like "shared-private-link-for-search" can remind you how it's used.
+
+1. In "resourceRegion", paste the DNS zone name that you retrieved in the earlier step.
+
+1. Edit the "privateLinkResourceId" to reflect the private endpoint of your managed instance. Provide the subscription ID, resource group name, and object name of the managed instance.
+
+1. Save the file locally as *create-pe.json* (or use another name, remembering to update the Azure CLI syntax in the next step).
+
+1. In the Azure CLI, type `dir` to note the current location of the file.
+
+## 4 - Create a shared private link
+
+1. From the command line, sign in to Azure using `az login`.
+
+1. If you have multiple subscriptions, check which one is active: `az account show`.
+
+ To set the subscription, use `az account set --subscription {{subscription ID}}`
+
+1. Call the `az rest` command to use the [Management REST API](/rest/api/searchmanagement/2021-04-01-preview/shared-private-link-resources/create-or-update) of Azure Cognitive Search.
+
+ Because shared private link support for SQL managed instances is still in preview, you need a preview version of the REST API. You can use either `2021-04-01-preview` or `2020-08-01-preview`.
+
+ ```azurecli
+ az rest --method put --uri https://management.azure.com/subscriptions/{{search-service-subscription-ID}}/resourceGroups/{{search-service-resource-group}}/providers/Microsoft.Search/searchServices/{{search-service-name}}/sharedPrivateLinkResources/{{shared-private-link-name}}?api-version=2021-04-01-preview --body @create-pe.json
+ ```
+
+ Provide the subscription ID, resource group name, and service name of your Cognitive Search resource.
+
+ Provide the same shared private link name that you specified in the JSON body.
+
+ Provide a path to the *create-pe.json* file if you've navigated away from the file location. You can type `dir` at the command line to confirm the file is in the current directory.
+
+1. Press Enter to run the command.
+
+When you complete these steps, you should have a shared private link that's provisioned in a pending state. **It takes several minutes to create the link**. Once it's created, the resource owner needs to approve the request before it's operational.
+
+## 5 - Approve the private endpoint connection
+
+On the SQL Managed Instance side, the resource owner must approve the private connection request you created.
+
+1. In the Azure portal, open the **Private endpoint connections** tab of the managed instance.
+
+1. Find the section that lists the private endpoint connections.
+
+1. Select the connection, and then select **Approve**. It can take a few minutes for the status to be updated in the portal.
+
+After the private endpoint is approved, Azure Cognitive Search creates the necessary DNS zone mappings in the DNS zone that's created for it.
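
If you want to find the pending connection from the command line (for example, to script this step), one option is to list the managed instance's private endpoint connections with `az rest`, reusing the placeholder values from the earlier JSON body. Treat this as a sketch: the `2021-11-01` API version is an assumption, so verify it against the current Microsoft.Sql REST reference. The approval itself can still be done in the portal as described above.

```azurecli
az rest --method get --uri https://management.azure.com/subscriptions/{{target-resource-subscription-ID}}/resourceGroups/{{target-resource-rg}}/providers/Microsoft.Sql/managedInstances/{{target-resource-name}}/privateEndpointConnections?api-version=2021-11-01
```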
+
+## 6 - Check shared private link status
+
+On the Azure Cognitive Search side, you can confirm request approval by revisiting the Shared Private Access tab of the search service **Networking** page. Connection state should be approved.
+
+ ![Screenshot of the Azure portal, showing an "Approved" shared private link resource.](media\search-indexer-howto-secure-access\new-shared-private-link-resource-approved.png)
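
If you'd rather confirm from the command line, you can also issue a GET for the shared private link resource itself, reusing the placeholder values from the earlier `az rest` call; in the response, `properties.status` should read `Approved`. This is only a quick sanity check, not a required step.

```azurecli
az rest --method get --uri https://management.azure.com/subscriptions/{{search-service-subscription-ID}}/resourceGroups/{{search-service-resource-group}}/providers/Microsoft.Search/searchServices/{{search-service-name}}/sharedPrivateLinkResources/{{shared-private-link-name}}?api-version=2021-04-01-preview
```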
+
+## 7 - Configure the indexer to run in the private environment
+
+You can now configure an indexer and its data source to use an outbound private connection to your managed instance.
+
+You could use the [**Import data**](search-get-started-portal.md) wizard for this step, but the indexer it generates isn't valid for this scenario. You'll need to modify the indexer JSON property as described in this step, and then [reset and rerun the indexer](search-howto-run-reset-indexers.md) to fully test the pipeline with the updated definition.
+
+This article assumes Postman or an equivalent tool, and uses the REST APIs to make it easier to see all of the properties. Recall that REST API calls for indexers and data sources use the [Search REST APIs](/rest/api/searchservice/), not the [Management REST APIs](/rest/api/searchmanagement/) used to create the shared private link. The syntax and API versions differ between the two.
+
+1. [Create the data source definition](search-howto-connecting-azure-sql-database-to-azure-search-using-indexers.md) as you would normally for Azure SQL. Although the format of the connection string is different, the data source type and other properties are valid for SQL Managed Instance.
+
+ Provide the connection string that you copied earlier.
+
+ ```http
+ POST https://myservice.search.windows.net/datasources?api-version=2020-06-30
+ Content-Type: application/json
+ api-key: admin-key
+ {
+ "name" : "my-sql-datasource",
+ "description" : "A database for testing Azure Cognitive Search indexes.",
+ "type" : "azuresql",
+ "credentials" : {
+ "connectionString" : "Server=tcp:contoso.public.0000000000.database.windows.net,1433; Persist Security Info=false; User ID=<your user name>; Password=<your password>;MultipleActiveResultsSets=False; Encrypt=True;Connection Timeout=30;"
+ },
+ "container" : {
+ "name" : "Name of table or view to index",
+ "query" : null (not supported in the Azure SQL indexer)
+ },
+ "dataChangeDetectionPolicy": null,
+ "dataDeletionDetectionPolicy": null,
+ "encryptionKey": null,
+ "identity": null
+ }
+ ```
+
+ > [!NOTE]
+ > If you're familiar with data source definitions in Cognitive Search, you'll notice that data source properties don't vary when using a shared private link. That's because the private connection is detected and handled internally.
+
+1. [Create the indexer definition](search-howto-create-indexers.md), setting the indexer execution environment to "private".
+
+ [Indexer execution](search-indexer-securing-resources.md#indexer-execution-environment) occurs in either a private environment that's specific to the search service, or a multi-tenant environment that's used internally to offload expensive skillset processing for multiple customers. **When connecting over a private endpoint, indexer execution must be private.**
+
+ ```http
+ POST https://myservice.search.windows.net/indexers?api-version=2020-06-30
+ Content-Type: application/json
+ api-key: admin-key
+ {
+ "name": "indexer",
+ "dataSourceName": "my-sql-datasource",
+ "targetIndexName": "my-search-index",
+ "parameters": {
+ "configuration": {
+ "executionEnvironment": "private"
+ }
+ },
+ "fieldMappings": []
+ }
+ ```
+
+1. Run the indexer. If the indexer execution succeeds and the search index is populated, the shared private link is working.
+
+You can monitor the status of the indexer in Azure portal or by using the [Indexer Status API](/rest/api/searchservice/get-indexer-status).
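
If you'd rather poll the status from the command line, one hedged option is to call the same Search REST endpoint through `az rest`, supplying your admin API key and skipping the ARM authorization header. The service name `myservice` and indexer name `indexer` are the placeholders used earlier in this article.

```azurecli
az rest --method get --url https://myservice.search.windows.net/indexers/indexer/status?api-version=2020-06-30 --headers "api-key=<admin-key>" --skip-authorization-header
```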
+
+You can use [**Search explorer**](search-explorer.md) in Azure portal to check the contents of the index.
+
+## 8 - Test the shared private link
+
+If you ran the indexer in the previous step and successfully indexed content from your managed instance, then the test was successful. However, if the indexer fails or there's no content in the index, you can modify your objects and repeat testing by choosing any client that can invoke an outbound request from an indexer.
+
+An easy choice is [running an indexer](search-howto-run-reset-indexers.md) in Azure portal, but you can also try Postman and REST APIs for more precision. Assuming that your search service isn't also configured for a private connection, the REST client connection to Search can be over the public internet.
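
For example, a command-line variant of the reset-and-rerun loop might look like the following sketch, with the same placeholders and caveats as the status call shown earlier (supply your own admin key, and verify the flags against your CLI version).

```azurecli
az rest --method post --url https://myservice.search.windows.net/indexers/indexer/reset?api-version=2020-06-30 --headers "api-key=<admin-key>" --skip-authorization-header
az rest --method post --url https://myservice.search.windows.net/indexers/indexer/run?api-version=2020-06-30 --headers "api-key=<admin-key>" --skip-authorization-header
```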
+
+Here are some reminders for testing:
+++ If you use Postman or another web testing tool, use the [Management REST API](/rest/api/searchmanagement/) and a [preview API version](/rest/api/searchmanagement/management-api-versions) to create the shared private link. Use the [Search REST API](/rest/api/searchservice/) and a [stable API version](/rest/api/searchservice/search-service-api-versions) to create and invoke indexers and data sources.+++ You can use the Import data wizard to create an indexer, data source, and index. However, the generated indexer won't have the correct execution environment setting.+++ You can edit data source and indexer JSON in Azure portal to change properties, including the execution environment and the connection string.+++ You can reset and rerun the indexer in Azure portal. Reset is important for this scenario because it forces a full reprocessing of all documents.+++ You can use Search explorer to check the contents of the index.+
+## See also
+++ [Make outbound connections through a private endpoint](search-indexer-howto-access-private.md)++ [Indexer connections to Azure SQL Managed Instance through a public endpoint](search-howto-connecting-azure-sql-mi-to-azure-search-using-indexers.md)++ [Index data from Azure SQL](search-howto-connecting-azure-sql-database-to-azure-search-using-indexers.md)++ [Management REST API](/rest/api/searchmanagement/)++ [Search REST API](/rest/api/searchservice/)++ [Quickstart: Get started with REST](search-get-started-rest.md)
search Search Indexer Howto Access Private https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-indexer-howto-access-private.md
Last updated 02/22/2023
If you have an Azure PaaS resource that has a private connection enabled through [Azure Private Link](../private-link/private-link-overview.md), you'll need to create a *shared private link* to reach those resources from Azure Cognitive Search. This article walks you through the steps for creating, testing, and managing a private link.
+If you're setting up a private connection to a SQL Managed Instance, see [this article](search-indexer-how-to-access-private-sql.md) instead.
+ ## When to use a shared private link Cognitive Search makes outbound calls to other Azure PaaS resources in the following scenarios:
You can create a shared private link for the following resources.
<sup>3</sup> The `Microsoft.Web/sites` resource type is used for App service and Azure functions. In the context of Azure Cognitive Search, an Azure function is the more likely scenario. An Azure function is commonly used for hosting the logic of a custom skill. Azure Function has Consumption, Premium and Dedicated [App Service hosting plans](../app-service/overview-hosting-plans.md). The [App Service Environment (ASE)](../app-service/environment/overview.md) and [Azure Kubernetes Service (AKS)](../aks/intro-kubernetes.md) aren't supported at this time.
-<sup>4</sup> Although `Microsoft.Sql/managedInstances` is listed in the search **Networking** portal page, creating a shared private link to Azure SQL Managed Instance (preview) requires using the REST API, Azure PowerShell, or the Azure CLI with the `az rest` command. The portal doesn't currently construct a valid fully qualified domain name for SQL Managed instances. For a workaround, see [Create a shared private link for SQL Managed Instance](#create-a-shared-private-link-for-a-sql-managed-instance).
+<sup>4</sup> See [Create a shared private link for a SQL Managed Instance](search-indexer-how-to-access-private-sql.md) for instructions.
### Private endpoint verification
Here are a few tips:
+ Don't skip the [private link verification](#private-endpoint-verification) step. It's possible to create a shared private link for an Azure PaaS resource that doesn't have a private endpoint. The link won't work if the resource isn't registered.
-+ SQL managed instance has extra requirements for creating a private link. Currently, you can't use the portal or the Azure CLI `az search` command because neither one formulates a valid URI. Instead, follow the instructions in [Create a shared private link for SQL Managed Instance](#create-a-shared-private-link-for-a-sql-managed-instance) in this article for a workaround.
- When you complete these steps, you have a shared private link that's provisioned in a pending state. **It takes several minutes to create the link**. Once it's created, the resource owner needs to approve the request before it's operational. ### [**Azure portal**](#tab/portal-create)
When you complete these steps, you have a shared private link that's provisioned
> [!NOTE] > Preview API versions, either `2020-08-01-preview` or `2021-04-01-preview`, are required for group IDs that are in preview. The following resource types are in preview: `managedInstance`, `mySqlServer`, `sites`.
-> For `managedInstance`, see [create a shared private link for SQL Managed Instance](#create-a-shared-private-link-for-a-sql-managed-instance) for help formulating a fully qualified domain name.
While tools like Azure portal, Azure PowerShell, or the Azure CLI have built-in mechanisms for account sign-in, a REST client like Postman needs to provide a bearer token that allows your request to go through.
A `202 Accepted` response is returned on success. The process of creating an out
+ A private DNS zone for the type of resource, based on the group ID. By deploying this resource, you ensure that any DNS lookup to the private resource utilizes the IP address that's associated with the private endpoint.
-### Create a shared private link for a SQL Managed Instance
-
-Currently, you can't create a shared private link for a SQL Managed Instance using the Azure portal or the `az search` module of the Azure CLI. The URI for a SQL Managed Instance includes a DNS zone as part of it's fully qualified domain name (FQDN), and currently neither the portal nor `az search` in the Azure CLI support that part.
-
-As a workaround, choose an approach that provides a `resourceRegion` parameter. This parameter takes the [DNS Zone](/azure/azure-sql/managed-instance/connectivity-architecture-overview#virtual-cluster-connectivity-architecture) of the SQL Managed Instance, which is inserted in the URI to create the FQDN.
-
-Approaches that provide `resourceRegion` include the Management REST API or the Azure CLI using the `az rest` command. This section explains how to the Azure CLI with `az rest` to create a shared private link for a SQL managed instance.
-
-1. Get the [DNS Zone](/azure/azure-sql/managed-instance/connectivity-architecture-overview#virtual-cluster-connectivity-architecture) for the `resourceRegion` parameter.
-
- The DNS zone is part of the domain name of the SQL Managed Instance. For example, if the FQDN of the SQL Managed Instance is `my-sql-managed-instance.a1b22c333d44.database.windows.net`, the DNS zone is `a1b22c333d44`. See [Create an Azure SQL Managed Instance](/azure/azure-sql/managed-instance/instance-create-quickstart) for instructions on how to retrieve connection details, such as the DNS zone.
-
-1. Create a JSON file for the body of the create shared private link request. Save the file locally. In the Azure CLI, type `dir` to view the current location. The following is an example of what a *create-pe.json* file might contain:
-
- ```json
- {
- "name": "{{shared-private-link-name}}",
- "properties": {
- "privateLinkResourceId": "/subscriptions/{{target-resource-subscription-ID}}/resourceGroups/{{target-resource-rg}}/providers/Microsoft.Sql/managedInstances/{{target-resource-name}}",
- "resourceRegion": "a1b22c333d44",
- "groupId": "managedInstance",
- "requestMessage": "please approve",
- }
- }
- ```
-
-1. Using the Azure CLI, call the `az rest` command to use the [Management REST API](/rest/api/searchmanagement/2021-04-01-preview/shared-private-link-resources/create-or-update) of Azure Cognitive Search.
-
- Because shared private link support for SQL managed instances is still in preview, you need a preview version of the REST API. You can use either `2021-04-01-preview` or `2020-08-01-preview`.
-
- ```azurecli
- az rest --method put --uri https://management.azure.com/subscriptions/{{search-service-subscription-ID}}/resourceGroups/{{search service-resource-group}}/providers/Microsoft.Search/searchServices/{{search-service-name}}/sharedPrivateLinkResources/{{shared-private-link-name}}?api-version=2020-08-01 --body @create-pe.json
- ```
- <!-- 1. Check the response. The `PUT` call to create the shared private endpoint returns an `Azure-AsyncOperation` header value that looks like the following:
Approaches that provide `resourceRegion` include the Management REST API or the
## 2 - Approve the private endpoint connection
-The resource owner must approve the connection request you created. This section assumes the portal for this step, but you can also use the REST APIs of the Azure PaaS resource. [Private Endpoint Connections (Storage Resource Provider)](/rest/api/storagerp/privateendpointconnections) and [Private Endpoint Connections (Cosmos DB Resource Provider)](/rest/api/cosmos-db-resource-provider/2022-05-15/private-endpoint-connections) are two examples.
+The resource owner must approve the connection request you created. This section assumes the portal for this step, but you can also use the REST APIs of the Azure PaaS resource. [Private Endpoint Connections (Storage Resource Provider)](/rest/api/storagerp/privateendpointconnections) and [Private Endpoint Connections (Cosmos DB Resource Provider)](/rest/api/cosmos-db-resource-provider/2022-11-15/private-endpoint-connections) are two examples.
1. In the Azure portal, open the **Networking** page of the Azure PaaS resource.
The resource owner must approve the connection request you created. This section
![Screenshot of the Azure portal, showing an "Approved" status on the "Private endpoint connections" pane.](media\search-indexer-howto-secure-access\storage-privateendpoint-after-approval.png)
-After the private endpoint connection request is approved, traffic is *capable* of flowing through the private endpoint. After the private endpoint is approved, Azure Cognitive Search creates the necessary DNS zone mappings in the DNS zone that's created for it.
+After the private endpoint is approved, Azure Cognitive Search creates the necessary DNS zone mappings in the DNS zone that's created for it.
## 3 - Check shared private link status
security Threat Modeling Tool Authorization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/develop/threat-modeling-tool-authorization.md
na
Last updated 02/07/2017 -+ # Security Frame: Authorization | Mitigations
security Antimalware Code Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/antimalware-code-samples.md
na
Last updated 01/25/2023 -+ # Enable and configure Microsoft Antimalware for Azure Resource Manager VMs You can enable and configure Microsoft Antimalware for Azure Resource Manager VMs. This article provides code samples using PowerShell cmdlets.
sentinel Detect Threats Built In https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/detect-threats-built-in.md
Title: Detect threats with built-in analytics rules in Microsoft Sentinel | Micr
description: Learn how to use out-of-the-box threat detection rules, based on built-in templates, that notify you when something suspicious happens. + Last updated 11/09/2021
sentinel Detect Threats Custom https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/detect-threats-custom.md
description: Learn how to create custom analytics rules to detect security threa
+ Last updated 01/08/2023
service-bus-messaging Enable Auto Forward https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/enable-auto-forward.md
Title: Enable auto forwarding for Azure Service Bus queues and subscriptions
description: This article explains how to enable auto forwarding for queues and subscriptions by using Azure portal, PowerShell, CLI, and programming languages (C#, Java, Python, and JavaScript) Last updated 04/19/2021 -+ ms.devlang: azurecli
service-bus-messaging Enable Dead Letter https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/enable-dead-letter.md
Title: Enable dead lettering for Azure Service Bus queues and subscriptions description: This article explains how to enable dead lettering for queues and subscriptions by using Azure portal, PowerShell, CLI, and programming languages (C#, Java, Python, and JavaScript) -+ Last updated 11/09/2022
service-bus-messaging Enable Duplicate Detection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/enable-duplicate-detection.md
Title: Enable duplicate message detection - Azure Service Bus
description: This article explains how to enable duplicate message detection using Azure portal, PowerShell, CLI, and programming languages (C#, Java, Python, and JavaScript) Last updated 04/19/2021 -+ ms.devlang: azurecli
service-bus-messaging Enable Message Sessions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/enable-message-sessions.md
Title: Enable Azure Service Bus message sessions | Microsoft Docs
description: This article explains how to enable message sessions using Azure portal, PowerShell, CLI, and programming languages (C#, Java, Python, and JavaScript) Last updated 04/19/2021 -+ ms.devlang: azurecli
service-bus-messaging Enable Partitions Basic Standard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/enable-partitions-basic-standard.md
Title: Enable partitioning in Azure Service Bus basic or standard
description: This article explains how to enable partitioning in Azure Service Bus queues and topics by using Azure portal, PowerShell, CLI, and programming languages (C#, Java, Python, and JavaScript) Last updated 10/12/2022 -+ ms.devlang: azurecli
service-bus-messaging Enable Partitions Premium https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/enable-partitions-premium.md
Title: Enable partitioning in Azure Service Bus Premium namespaces
description: This article explains how to enable partitioning in Azure Service Bus Premium namespaces by using Azure portal, PowerShell, CLI, and programming languages (C#, Java, Python, and JavaScript) Last updated 10/12/2022 -+ ms.devlang: azurecli
service-bus-messaging Message Counters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/message-counters.md
Title: Azure Service Bus - message count
description: Retrieve the count of messages held in queues and subscriptions by using Azure Resource Manager and the Azure Service Bus NamespaceManager APIs. Last updated 12/20/2022 -+ ms.devlang: azurecli
service-bus-messaging Service Bus Ip Filtering https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-ip-filtering.md
Title: Configure IP firewall rules for Azure Service Bus description: How to use Firewall Rules to allow connections from specific IP addresses to Azure Service Bus. + Last updated 02/16/2023
service-bus-messaging Service Bus Java How To Use Queues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-java-how-to-use-queues.md
Title: Get started with Azure Service Bus queues (Java) description: This tutorial shows you how to send messages to and receive messages from Azure Service Bus queues using the Java programming language. Previously updated : 03/24/2022 Last updated : 04/12/2023 ms.devlang: java
> * [JavaScript](service-bus-nodejs-how-to-use-queues.md) > * [Python](service-bus-python-how-to-use-queues.md)
-In this quickstart, you'll create a Java app to send messages to and receive messages from an Azure Service Bus queue.
+In this quickstart, you create a Java app to send messages to and receive messages from an Azure Service Bus queue.
> [!NOTE] > This quick start provides step-by-step instructions for a simple scenario of sending messages to a Service Bus queue and receiving them. You can find pre-built Java samples for Azure Service Bus in the [Azure SDK for Java repository on GitHub](https://github.com/azure/azure-sdk-for-java/tree/main/sdk/servicebus/azure-messaging-servicebus/src/samples).
In this quickstart, you'll create a Java app to send messages to and receive mes
## Prerequisites - An Azure subscription. To complete this tutorial, you need an Azure account. You can activate your [MSDN subscriber benefits](https://azure.microsoft.com/pricing/member-offers/credit-for-visual-studio-subscribers/?WT.mc_id=A85619ABF) or sign up for a [free account](https://azure.microsoft.com/free/?WT.mc_id=A85619ABF).-- If you don't have a queue to work with, follow steps in the [Use Azure portal to create a Service Bus queue](service-bus-quickstart-portal.md) article to create a queue. Note down the **connection string** for your Service Bus namespace and the name of the **queue** you created. - Install [Azure SDK for Java][Azure SDK for Java]. If you're using Eclipse, you can install the [Azure Toolkit for Eclipse][Azure Toolkit for Eclipse] that includes the Azure SDK for Java. You can then add the **Microsoft Azure Libraries for Java** to your project. If you're using IntelliJ, see [Install the Azure Toolkit for IntelliJ](/azure/developer/java/toolkit-for-intellij/installation). +++ ## Send messages to a queue
-In this section, you'll create a Java console project, and add code to send messages to the queue that you created earlier.
+In this section, you create a Java console project, and add code to send messages to the queue that you created earlier.
### Create a Java console project Create a Java project using Eclipse or a tool of your choice.
Create a Java project using Eclipse or a tool of your choice.
### Configure your application to use Service Bus Add references to Azure Core and Azure Service Bus libraries.
-If you are using Eclipse and created a Java console application, convert your Java project to a Maven: right-click the project in the **Package Explorer** window, select **Configure** -> **Convert to Maven project**. Then, add dependencies to these two libraries as shown in the following example.
+If you're using Eclipse and created a Java console application, convert your Java project to Maven: right-click the project in the **Package Explorer** window, and select **Configure** -> **Convert to Maven project**. Then, add dependencies to these two libraries as shown in the following example.
++
+### [Passwordless (Recommended)](#tab/passwordless)
+Update the `pom.xml` file to add dependencies to Azure Service Bus and Azure Identity packages.
```xml
-<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
- <modelVersion>4.0.0</modelVersion>
- <groupId>org.myorg.sbusquickstarts</groupId>
- <artifactId>sbustopicqs</artifactId>
- <version>0.0.1-SNAPSHOT</version>
- <build>
- <sourceDirectory>src</sourceDirectory>
- <plugins>
- <plugin>
- <artifactId>maven-compiler-plugin</artifactId>
- <version>3.8.1</version>
- <configuration>
- <release>15</release>
- </configuration>
- </plugin>
- </plugins>
- </build>
- <dependencies>
+ <dependencies>
<dependency> <groupId>com.azure</groupId> <artifactId>azure-messaging-servicebus</artifactId>
- <version>7.7.0</version>
+ <version>7.13.3</version>
</dependency>
- </dependencies>
-</project>
+ <dependency>
+ <groupId>com.azure</groupId>
+ <artifactId>azure-identity</artifactId>
+ <version>1.8.0</version>
+ <scope>compile</scope>
+ </dependency>
+ </dependencies>
```
+### [Connection String](#tab/connection-string)
+Update the `pom.xml` file to add a dependency to the Azure Service Bus package.
+
+```xml
+ <dependency>
+ <groupId>com.azure</groupId>
+ <artifactId>azure-messaging-servicebus</artifactId>
+ <version>7.13.3</version>
+ </dependency>
+```
++ ### Add code to send messages to the queue+ 1. Add the following `import` statements at the top of the Java file.
+ ### [Passwordless (Recommended)](#tab/passwordless)
+
+ ```java
+ import com.azure.messaging.servicebus.*;
+ import com.azure.identity.*;
+
+ import java.util.concurrent.CountDownLatch;
+ import java.util.concurrent.TimeUnit;
+ import java.util.Arrays;
+ import java.util.List;
+ ```
+
+ ### [Connection String](#tab/connection-string)
+
```java import com.azure.messaging.servicebus.*;
If you are using Eclipse and created a Java console application, convert your Ja
import java.util.Arrays; import java.util.List; ```
-5. In the class, define variables to hold connection string and queue name as shown below:
+
+2. In the class, define variables to hold the connection string (not needed for the passwordless scenario) and the queue name.
+
+ ### [Passwordless (Recommended)](#tab/passwordless)
+
+ ```java
+ static String queueName = "<QUEUE NAME>";
+ ```
+
+ > [!IMPORTANT]
+ > Replace `<QUEUE NAME>` with the name of the queue.
+
+ ### [Connection String](#tab/connection-string)
```java static String connectionString = "<NAMESPACE CONNECTION STRING>"; static String queueName = "<QUEUE NAME>"; ```
- Replace `<NAMESPACE CONNECTION STRING>` with the connection string to your Service Bus namespace. And, replace `<QUEUE NAME>` with the name of the queue.
+ > [!IMPORTANT]
+ > Replace `<NAMESPACE CONNECTION STRING>` with the connection string to your Service Bus namespace, and `<QUEUE NAME>` with the name of the queue.
+
+
3. Add a method named `sendMessage` in the class to send one message to the queue.
+ ### [Passwordless (Recommended)](#tab/passwordless)
+
+ > [!IMPORTANT]
+ > - Replace `NAMESPACENAME` with the name of your Service Bus namespace.
+ > - This sample uses `AZURE_PUBLIC_CLOUD` as the authority host. For supported authority hosts, see [`AzureAuthorityHosts`](/dotnet/api/azure.identity.azureauthorityhosts)
+
+ ```java
+ static void sendMessage()
+ {
+ // create a token using the default Azure credential
+ DefaultAzureCredential credential = new DefaultAzureCredentialBuilder()
+ .authorityHost(AzureAuthorityHosts.AZURE_PUBLIC_CLOUD)
+ .build();
+
+ ServiceBusSenderClient senderClient = new ServiceBusClientBuilder()
+ .fullyQualifiedNamespace("NAMESPACENAME.servicebus.windows.net")
+ .credential(credential)
+ .sender()
+ .queueName(queueName)
+ .buildClient();
+
+ // send one message to the queue
+ senderClient.sendMessage(new ServiceBusMessage("Hello, World!"));
+ System.out.println("Sent a single message to the queue: " + queueName);
+ }
+
+ ```
+
+ ### [Connection String](#tab/connection-string)
+ ```java static void sendMessage() {
If you are using Eclipse and created a Java console application, convert your Ja
System.out.println("Sent a single message to the queue: " + queueName); } ```
-1. Add a method named `createMessages` in the class to create a list of messages. Typically, you get these messages from different parts of your application. Here, we create a list of sample messages.
+
+4. Add a method named `createMessages` in the class to create a list of messages. Typically, you get these messages from different parts of your application. Here, we create a list of sample messages.
```java static List<ServiceBusMessage> createMessages()
If you are using Eclipse and created a Java console application, convert your Ja
return Arrays.asList(messages); } ```
-1. Add a method named `sendMessageBatch` method to send messages to the queue you created. This method creates a `ServiceBusSenderClient` for the queue, invokes the `createMessages` method to get the list of messages, prepares one or more batches, and sends the batches to the queue.
+5. Add a method named `sendMessageBatch` to send messages to the queue you created. This method creates a `ServiceBusSenderClient` for the queue, invokes the `createMessages` method to get the list of messages, prepares one or more batches, and sends the batches to the queue.
+
+ ### [Passwordless (Recommended)](#tab/passwordless)
+
+ > [!IMPORTANT]
+ > - Replace `NAMESPACENAME` with the name of your Service Bus namespace.
+ > - This sample uses `AZURE_PUBLIC_CLOUD` as the authority host. For supported authority hosts, see [`AzureAuthorityHosts`](/dotnet/api/azure.identity.azureauthorityhosts)
+
+ ```java
+ static void sendMessageBatch()
+ {
+ // create a token using the default Azure credential
+ DefaultAzureCredential credential = new DefaultAzureCredentialBuilder()
+ .authorityHost(AzureAuthorityHosts.AZURE_PUBLIC_CLOUD)
+ .build();
+
+ ServiceBusSenderClient senderClient = new ServiceBusClientBuilder()
+ .fullyQualifiedNamespace("NAMESPACENAME.servicebus.windows.net")
+ .credential(credential)
+ .sender()
+ .queueName(queueName)
+ .buildClient();
+
+ // Creates a ServiceBusMessageBatch in which multiple messages are batched before being sent to Service Bus.
+ ServiceBusMessageBatch messageBatch = senderClient.createMessageBatch();
+
+ // create a list of messages
+ List<ServiceBusMessage> listOfMessages = createMessages();
+
+ // We try to add as many messages as a batch can fit based on the maximum size and send to Service Bus when
+ // the batch can hold no more messages. Create a new batch for next set of messages and repeat until all
+ // messages are sent.
+ for (ServiceBusMessage message : listOfMessages) {
+ if (messageBatch.tryAddMessage(message)) {
+ continue;
+ }
+
+ // The batch is full, so we create a new batch and send the batch.
+ senderClient.sendMessages(messageBatch);
+ System.out.println("Sent a batch of messages to the queue: " + queueName);
+
+ // create a new batch
+ messageBatch = senderClient.createMessageBatch();
+
+ // Add the message that couldn't fit in the previous batch.
+ if (!messageBatch.tryAddMessage(message)) {
+ System.err.printf("Message is too large for an empty batch. Skipping. Max size: %s.", messageBatch.getMaxSizeInBytes());
+ }
+ }
+
+ if (messageBatch.getCount() > 0) {
+ senderClient.sendMessages(messageBatch);
+ System.out.println("Sent a batch of messages to the queue: " + queueName);
+ }
+
+ //close the client
+ senderClient.close();
+ }
+ ```
+
+ ### [Connection String](#tab/connection-string)
+
```java static void sendMessageBatch() {
If you are using Eclipse and created a Java console application, convert your Ja
} ```
+
+ ## Receive messages from a queue
-In this section, you'll add code to retrieve messages from the queue.
+In this section, you add code to retrieve messages from the queue.
1. Add a method named `receiveMessages` to receive messages from the queue. This method creates a `ServiceBusProcessorClient` for the queue by specifying a handler for processing messages and another one for handling errors. Then, it starts the processor, waits for a few seconds, prints the messages that are received, and then stops and closes the processor.
+ ### [Passwordless (Recommended)](#tab/passwordless)
+
+ > [!IMPORTANT]
+ > - Replace `NAMESPACENAME` with the name of your Service Bus namespace.
+ > - Replace `QueueTest` in `QueueTest::processMessage` in the code with the name of your class.
+ > - This sample uses `AZURE_PUBLIC_CLOUD` as the authority host. For supported authority hosts, see [`AzureAuthorityHosts`](/dotnet/api/azure.identity.azureauthorityhosts)
++
+ ```java
+ // handles received messages
+ static void receiveMessages() throws InterruptedException
+ {
+ CountDownLatch countdownLatch = new CountDownLatch(1);
+
+ DefaultAzureCredential credential = new DefaultAzureCredentialBuilder()
+ .authorityHost(AzureAuthorityHosts.AZURE_PUBLIC_CLOUD)
+ .build();
+
+ ServiceBusProcessorClient processorClient = new ServiceBusClientBuilder()
+ .fullyQualifiedNamespace("NAMESPACENAME.servicebus.windows.net")
+ .credential(credential)
+ .processor()
+ .queueName(queueName)
+ .processMessage(QueueTest::processMessage)
+ .processError(context -> processError(context, countdownLatch))
+ .buildProcessorClient();
+
+ System.out.println("Starting the processor");
+ processorClient.start();
+
+ TimeUnit.SECONDS.sleep(10);
+ System.out.println("Stopping and closing the processor");
+ processorClient.close();
+ }
+ ```
+
+ ### [Connection String](#tab/connection-string)
+ > [!IMPORTANT] > Replace `QueueTest` in `QueueTest::processMessage` in the code with the name of your class.
In this section, you'll add code to retrieve messages from the queue.
processorClient.close(); } ```
+
2. Add the `processMessage` method to process a message received from the Service Bus subscription. ```java
In this section, you'll add code to retrieve messages from the queue.
``` ## Run the app+
+### [Passwordless (Recommended)](#tab/passwordless)
+
+1. If you're using Eclipse, right-click the project, select **Export**, expand **Java**, select **Runnable JAR file**, and follow the steps to create a runnable JAR file.
+1. If you're signed in to the machine using a user account that's different from the user account added to the **Azure Service Bus Data Owner** role, follow these steps. Otherwise, skip this step and move on to running the JAR file in the next step.
+
+ 1. [Install Azure CLI](/cli/azure/install-azure-cli-windows) on your machine.
+ 1. Run the following CLI command to sign in to Azure. Use the same user account that you added to the **Azure Service Bus Data Owner** role.
+
+ ```azurecli
+ az login
+ ```
+1. Run the JAR file using the following command.
+
+ ```java
+ java -jar <JAR FILE NAME>
+ ```
+1. You see the following output in the console window.
+
+ ```console
+ Sent a single message to the queue: myqueue
+ Sent a batch of messages to the queue: myqueue
+ Starting the processor
+ Processing message. Session: 88d961dd801f449e9c3e0f8a5393a527, Sequence #: 1. Contents: Hello, World!
+ Processing message. Session: e90c8d9039ce403bbe1d0ec7038033a0, Sequence #: 2. Contents: First message
+ Processing message. Session: 311a216a560c47d184f9831984e6ac1d, Sequence #: 3. Contents: Second message
+ Processing message. Session: f9a871be07414baf9505f2c3d466c4ab, Sequence #: 4. Contents: Third message
+ Stopping and closing the processor
+ ```
+
+### [Connection String](#tab/connection-string)
When you run the application, you see the following messages in the console window. ```console
Processing message. Session: 311a216a560c47d184f9831984e6ac1d, Sequence #: 3. Co
Processing message. Session: f9a871be07414baf9505f2c3d466c4ab, Sequence #: 4. Contents: Third message Stopping and closing the processor ```+ On the **Overview** page for the Service Bus namespace in the Azure portal, you can see **incoming** and **outgoing** message count. You may need to wait for a minute or so and then refresh the page to see the latest values.
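
If you prefer checking the counts from the command line, a hedged Azure CLI sketch with `az servicebus queue show` (hypothetical resource group and namespace names; the queue name `myqueue` matches the output above) returns the same information:

```azurecli
az servicebus queue show --resource-group myresourcegroup --namespace-name mynamespace --name myqueue --query countDetails
```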
service-bus-messaging Service Bus Java How To Use Topics Subscriptions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-java-how-to-use-topics-subscriptions.md
Title: Get started with Azure Service Bus topics (Java) description: This tutorial shows you how to send messages to Azure Service Bus topics and receive messages from topics' subscriptions using the Java programming language. Previously updated : 03/24/2022 Last updated : 04/12/2023 ms.devlang: java
In this quickstart, you write Java code using the azure-messaging-servicebus pac
## Prerequisites - An Azure subscription. To complete this tutorial, you need an Azure account. You can activate your [Visual Studio or MSDN subscriber benefits](https://azure.microsoft.com/pricing/member-offers/msdn-benefits-details/?WT.mc_id=A85619ABF) or sign-up for a [free account](https://azure.microsoft.com/free/?WT.mc_id=A85619ABF).-- Follow steps in the [Quickstart: Use the Azure portal to create a Service Bus topic and subscriptions to the topic](service-bus-quickstart-topics-subscriptions-portal.md). Note down the connection string, topic name, and a subscription name. You'll use only one subscription for this quickstart. - Install [Azure SDK for Java][Azure SDK for Java]. If you're using Eclipse, you can install the [Azure Toolkit for Eclipse][Azure Toolkit for Eclipse] that includes the Azure SDK for Java. You can then add the **Microsoft Azure Libraries for Java** to your project. If you're using IntelliJ, see [Install the Azure Toolkit for IntelliJ](/azure/developer/java/toolkit-for-intellij/installation). ++ ## Send messages to a topic
-In this section, you'll create a Java console project, and add code to send messages to the topic you created.
+In this section, you create a Java console project, and add code to send messages to the topic you created.
### Create a Java console project Create a Java project using Eclipse or a tool of your choice.
Create a Java project using Eclipse or a tool of your choice.
### Configure your application to use Service Bus Add references to Azure Core and Azure Service Bus libraries.
-If you are using Eclipse and created a Java console application, convert your Java project to a Maven: right-click the project in the **Package Explorer** window, select **Configure** -> **Convert to Maven project**. Then, add dependencies to these two libraries as shown in the following example.
+If you're using Eclipse and created a Java console application, convert your Java project to Maven: right-click the project in the **Package Explorer** window, and select **Configure** -> **Convert to Maven project**. Then, add dependencies to these two libraries as shown in the following example.
+
+### [Passwordless (Recommended)](#tab/passwordless)
+Update the `pom.xml` file to add dependencies to Azure Service Bus and Azure Identity packages.
```xml
-<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
- <modelVersion>4.0.0</modelVersion>
- <groupId>org.myorg.sbusquickstarts</groupId>
- <artifactId>sbustopicqs</artifactId>
- <version>0.0.1-SNAPSHOT</version>
- <build>
- <sourceDirectory>src</sourceDirectory>
- <plugins>
- <plugin>
- <artifactId>maven-compiler-plugin</artifactId>
- <version>3.8.1</version>
- <configuration>
- <release>15</release>
- </configuration>
- </plugin>
- </plugins>
- </build>
- <dependencies>
+ <dependencies>
<dependency> <groupId>com.azure</groupId> <artifactId>azure-messaging-servicebus</artifactId>
- <version>7.7.0</version>
+ <version>7.13.3</version>
</dependency>
- </dependencies>
-</project>
+ <dependency>
+ <groupId>com.azure</groupId>
+ <artifactId>azure-identity</artifactId>
+ <version>1.8.0</version>
+ <scope>compile</scope>
+ </dependency>
+ </dependencies>
```
+### [Connection String](#tab/connection-string)
+Update the `pom.xml` file to add a dependency to the Azure Service Bus package.
+
+```xml
+ <dependency>
+ <groupId>com.azure</groupId>
+ <artifactId>azure-messaging-servicebus</artifactId>
+ <version>7.13.3</version>
+ </dependency>
+```
++ ### Add code to send messages to the topic 1. Add the following `import` statements at the top of the Java file.
+ ### [Passwordless (Recommended)](#tab/passwordless)
+
+ ```java
+ import com.azure.messaging.servicebus.*;
+ import com.azure.identity.*;
+
+ import java.util.concurrent.CountDownLatch;
+ import java.util.concurrent.TimeUnit;
+ import java.util.Arrays;
+ import java.util.List;
+ ```
+
+ ### [Connection String](#tab/connection-string)
+
```java import com.azure.messaging.servicebus.*;
If you are using Eclipse and created a Java console application, convert your Ja
import java.util.Arrays; import java.util.List; ```
-5. In the class, define variables to hold connection string and topic name as shown below:
+
+2. In the class, define variables to hold the connection string (not needed for the passwordless scenario), topic name, and subscription name.
+
+ ### [Passwordless (Recommended)](#tab/passwordless)
+
+ ```java
+ static String topicName = "<TOPIC NAME>";
+ static String subName = "<SUBSCRIPTION NAME>";
+ ```
+
+ > [!IMPORTANT]
+ > Replace `<TOPIC NAME>` with the name of the topic, and `<SUBSCRIPTION NAME>` with the name of the topic's subscription.
+
+ ### [Connection String](#tab/connection-string)
```java static String connectionString = "<NAMESPACE CONNECTION STRING>"; static String topicName = "<TOPIC NAME>"; static String subName = "<SUBSCRIPTION NAME>"; ```
+
+ > [!IMPORTANT]
+ > Replace `<NAMESPACE CONNECTION STRING>` with the connection string to your Service Bus namespace. Replace `<TOPIC NAME>` with the name of the topic, and `<SUBSCRIPTION NAME>` with the name of the topic's subscription.
- Replace `<NAMESPACE CONNECTION STRING>` with the connection string to your Service Bus namespace. And, replace `<TOPIC NAME>` with the name of the topic.
+
3. Add a method named `sendMessage` in the class to send one message to the topic.
+ ### [Passwordless (Recommended)](#tab/passwordless)
+
+ > [!IMPORTANT]
+ > - Replace `NAMESPACENAME` with the name of your Service Bus namespace.
+ > - This sample uses `AZURE_PUBLIC_CLOUD` as the authority host. For supported authority hosts, see [`AzureAuthorityHosts`](/dotnet/api/azure.identity.azureauthorityhosts)
+
+ ```java
+ static void sendMessage()
+ {
+ // create a token using the default Azure credential
+ DefaultAzureCredential credential = new DefaultAzureCredentialBuilder()
+ .authorityHost(AzureAuthorityHosts.AZURE_PUBLIC_CLOUD)
+ .build();
+
+ ServiceBusSenderClient senderClient = new ServiceBusClientBuilder()
+ .fullyQualifiedNamespace("NAMESPACENAME.servicebus.windows.net")
+ .credential(credential)
+ .sender()
+ .topicName(topicName)
+ .buildClient();
+
+ // send one message to the topic
+ senderClient.sendMessage(new ServiceBusMessage("Hello, World!"));
+ System.out.println("Sent a single message to the topic: " + topicName);
+ }
+
+ ```
+ ### [Connection String](#tab/connection-string)
+ ```java static void sendMessage() {
- // create a Service Bus Sender client for the queue
+ // create a Service Bus Sender client for the topic
ServiceBusSenderClient senderClient = new ServiceBusClientBuilder() .connectionString(connectionString) .sender()
If you are using Eclipse and created a Java console application, convert your Ja
System.out.println("Sent a single message to the topic: " + topicName); } ```
+
1. Add a method named `createMessages` in the class to create a list of messages. Typically, you get these messages from different parts of your application. Here, we create a list of sample messages. ```java
If you are using Eclipse and created a Java console application, convert your Ja
``` 1. Add a method named `sendMessageBatch` to send messages to the topic you created. This method creates a `ServiceBusSenderClient` for the topic, invokes the `createMessages` method to get the list of messages, prepares one or more batches, and sends the batches to the topic.
+ ### [Passwordless (Recommended)](#tab/passwordless)
+
+ > [!IMPORTANT]
+ > - Replace `NAMESPACENAME` with the name of your Service Bus namespace.
+ > - This sample uses `AZURE_PUBLIC_CLOUD` as the authority host. For supported authority hosts, see [`AzureAuthorityHosts`](/dotnet/api/azure.identity.azureauthorityhosts)
++
+ ```java
+ static void sendMessageBatch()
+ {
+ // create a token using the default Azure credential
+ DefaultAzureCredential credential = new DefaultAzureCredentialBuilder()
+ .authorityHost(AzureAuthorityHosts.AZURE_PUBLIC_CLOUD)
+ .build();
+
+ ServiceBusSenderClient senderClient = new ServiceBusClientBuilder()
+ .fullyQualifiedNamespace("NAMESPACENAME.servicebus.windows.net")
+ .credential(credential)
+ .sender()
+ .topicName(topicName)
+ .buildClient();
+
+ // Creates a ServiceBusMessageBatch in which multiple messages are batched before being sent to Service Bus.
+ ServiceBusMessageBatch messageBatch = senderClient.createMessageBatch();
+
+ // create a list of messages
+ List<ServiceBusMessage> listOfMessages = createMessages();
+
+ // We try to add as many messages as a batch can fit based on the maximum size and send to Service Bus when
+ // the batch can hold no more messages. Create a new batch for next set of messages and repeat until all
+ // messages are sent.
+ for (ServiceBusMessage message : listOfMessages) {
+ if (messageBatch.tryAddMessage(message)) {
+ continue;
+ }
+
+ // The batch is full, so we create a new batch and send the batch.
+ senderClient.sendMessages(messageBatch);
+ System.out.println("Sent a batch of messages to the topic: " + topicName);
+
+ // create a new batch
+ messageBatch = senderClient.createMessageBatch();
+
+ // Add the message that couldn't fit in the previous batch.
+ if (!messageBatch.tryAddMessage(message)) {
+ System.err.printf("Message is too large for an empty batch. Skipping. Max size: %s.", messageBatch.getMaxSizeInBytes());
+ }
+ }
+
+ if (messageBatch.getCount() > 0) {
+ senderClient.sendMessages(messageBatch);
+ System.out.println("Sent a batch of messages to the topic: " + topicName);
+ }
+
+ //close the client
+ senderClient.close();
+ }
+ ```
+
+ ### [Connection String](#tab/connection-string)
+ ```java static void sendMessageBatch() {
If you are using Eclipse and created a Java console application, convert your Ja
senderClient.close(); } ```
+
## Receive messages from a subscription
-In this section, you'll add code to retrieve messages from a subscription to the topic.
+In this section, you add code to retrieve messages from a subscription to the topic.
1. Add a method named `receiveMessages` to receive messages from the subscription. This method creates a `ServiceBusProcessorClient` for the subscription by specifying a handler for processing messages and another one for handling errors. Then, it starts the processor, waits for a few seconds, prints the messages that are received, and then stops and closes the processor.
+ ### [Passwordless (Recommended)](#tab/passwordless)
+
+ > [!IMPORTANT]
+ > - Replace `NAMESPACENAME` with the name of your Service Bus namespace.
+ > - Replace `ServiceBusTopicTest` in `ServiceBusTopicTest::processMessage` in the code with the name of your class.
+ > - This sample uses `AZURE_PUBLIC_CLOUD` as the authority host. For supported authority hosts, see [`AzureAuthorityHosts`](/dotnet/api/azure.identity.azureauthorityhosts)
+
+ ```java
+ // handles received messages
+ static void receiveMessages() throws InterruptedException
+ {
+ CountDownLatch countdownLatch = new CountDownLatch(1);
+
+ DefaultAzureCredential credential = new DefaultAzureCredentialBuilder()
+ .authorityHost(AzureAuthorityHosts.AZURE_PUBLIC_CLOUD)
+ .build();
+
+ // Create an instance of the processor through the ServiceBusClientBuilder
+ ServiceBusProcessorClient processorClient = new ServiceBusClientBuilder()
+ .fullyQualifiedNamespace("NAMESPACENAME.servicebus.windows.net")
+ .credential(credential)
+ .processor()
+ .topicName(topicName)
+ .subscriptionName(subName)
+ .processMessage(ServiceBusTopicTest::processMessage)
+ .processError(context -> processError(context, countdownLatch))
+ .buildProcessorClient();
+
+ System.out.println("Starting the processor");
+ processorClient.start();
+
+ TimeUnit.SECONDS.sleep(10);
+ System.out.println("Stopping and closing the processor");
+ processorClient.close();
+ }
+ ```
+
+ ### [Connection String](#tab/connection-string)
+ > [!IMPORTANT] > Replace `ServiceBusTopicTest` in `ServiceBusTopicTest::processMessage` in the code with the name of your class.
In this section, you'll add code to retrieve messages from a subscription to the
processorClient.close(); } ```
+
2. Add the `processMessage` method to process a message received from the Service Bus subscription. ```java
In this section, you'll add code to retrieve messages from a subscription to the
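The bodies of the `processMessage` and `processError` handlers are truncated in this excerpt. A minimal sketch, assuming the standard `ServiceBusReceivedMessageContext` and `ServiceBusErrorContext` APIs and the output format shown under **Run the app**, might look like the following.

```java
// Hypothetical sketches of the handlers wired into the processor client above.
private static void processMessage(ServiceBusReceivedMessageContext context) {
    ServiceBusReceivedMessage message = context.getMessage();
    System.out.printf("Processing message. Session: %s, Sequence #: %s. Contents: %s%n",
        message.getMessageId(), message.getSequenceNumber(), message.getBody());
}

private static void processError(ServiceBusErrorContext context, CountDownLatch countdownLatch) {
    // Log the failing namespace and entity, then release the latch so the app can exit.
    System.out.printf("Error receiving messages from namespace: '%s', entity: '%s'%n",
        context.getFullyQualifiedNamespace(), context.getEntityPath());
    countdownLatch.countDown();
}
```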
## Run the app Run the program to see output similar to the following:
+### [Passwordless (Recommended)](#tab/passwordless)
+
+1. If you're using Eclipse, right-click the project, select **Export**, expand **Java**, select **Runnable JAR file**, and follow the steps to create a runnable JAR file.
+1. If you're signed in to the machine using a user account that's different from the one you added to the **Azure Service Bus Data Owner** role, follow these steps. Otherwise, skip this step and run the JAR file in the next step.
+
+ 1. [Install Azure CLI](/cli/azure/install-azure-cli-windows) on your machine.
+ 1. Run the following CLI command to sign in to Azure. Use the same user account that you added to the **Azure Service Bus Data Owner** role.
+
+ ```azurecli
+ az login
+ ```
+1. Run the JAR file using the following command.
+
+    ```cmd
+ java -jar <JAR FILE NAME>
+ ```
+1. You see the following output in the console window.
+
+ ```console
+ Sent a single message to the topic: mytopic
+ Sent a batch of messages to the topic: mytopic
+ Starting the processor
+ Processing message. Session: e0102f5fbaf646988a2f4b65f7d32385, Sequence #: 1. Contents: Hello, World!
+ Processing message. Session: 3e991e232ca248f2bc332caa8034bed9, Sequence #: 2. Contents: First message
+ Processing message. Session: 56d3a9ea7df446f8a2944ee72cca4ea0, Sequence #: 3. Contents: Second message
+ Processing message. Session: 7bd3bd3e966a40ebbc9b29b082da14bb, Sequence #: 4. Contents: Third message
+ ```
+### [Connection String](#tab/connection-string)
+When you run the application, you see the following messages in the console window.
+ ```console Sent a single message to the topic: mytopic Sent a batch of messages to the topic: mytopic
Processing message. Session: 3e991e232ca248f2bc332caa8034bed9, Sequence #: 2. Co
Processing message. Session: 56d3a9ea7df446f8a2944ee72cca4ea0, Sequence #: 3. Contents: Second message Processing message. Session: 7bd3bd3e966a40ebbc9b29b082da14bb, Sequence #: 4. Contents: Third message ```+ On the **Overview** page for the Service Bus namespace in the Azure portal, you can see **incoming** and **outgoing** message count. You may need to wait for a minute or so and then refresh the page to see the latest values.
If you comment out the `receiveMessages` call in the `main` method and run the a
:::image type="content" source="./media/service-bus-java-how-to-use-topics-subscriptions/updated-topic-page.png" alt-text="Updated topic page" lightbox="./media/service-bus-java-how-to-use-topics-subscriptions/updated-topic-page.png":::
-On this page, if you select a subscription, you get to the **Service Bus Subscription** page. You can see the active message count, dead-letter message count, and more on this page. In this example, there are four active messages that haven't been received by a receiver yet.
+On this page, if you select a subscription, you get to the **Service Bus Subscription** page. You can see the active message count, dead-letter message count, and more on this page. In this example, there are four active messages that the receiver hasn't received yet.
:::image type="content" source="./media/service-bus-java-how-to-use-topics-subscriptions/active-message-count.png" alt-text="Active message count" lightbox="./media/service-bus-java-how-to-use-topics-subscriptions/active-message-count.png":::
service-bus-messaging Service Bus Management Libraries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-management-libraries.md
description: This article explains how to dynamically or programmatically provis
Last updated 08/06/2021 ms.devlang: csharp,java,javascript,python+ # Dynamically provision Service Bus namespaces and entities
Service Bus client libraries that are used for operations like send and receive
## Next steps - Send messages to and receive messages from queue using the latest Service Bus library: [.NET](./service-bus-dotnet-get-started-with-queues.md#send-messages-to-the-queue), [Java](./service-bus-java-how-to-use-queues.md), [JavaScript](./service-bus-nodejs-how-to-use-queues.md), [Python](./service-bus-python-how-to-use-queues.md)-- Send messages to topic and receive messages from subscription using the latest Service Bus library: .[NET](./service-bus-dotnet-how-to-use-topics-subscriptions.md), [Java](./service-bus-java-how-to-use-topics-subscriptions.md), [JavaScript](./service-bus-nodejs-how-to-use-topics-subscriptions.md), [Python](./service-bus-python-how-to-use-topics-subscriptions.md)
+- Send messages to a topic and receive messages from a subscription using the latest Service Bus library: [.NET](./service-bus-dotnet-how-to-use-topics-subscriptions.md), [Java](./service-bus-java-how-to-use-topics-subscriptions.md), [JavaScript](./service-bus-nodejs-how-to-use-topics-subscriptions.md), [Python](./service-bus-python-how-to-use-topics-subscriptions.md)
service-bus-messaging Service Bus Migrate Azure Credentials https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-migrate-azure-credentials.md
description: Learn to migrate existing Service Bus applications away from connec
Previously updated : 12/07/2022 Last updated : 04/12/2023 -+
+- devx-track-csharp
+- devx-track-azurecli
+- devx-track-azurepowershell
+- passwordless-dotnet
+- passwordless-java
+- passwordless-js
+- passwordless-python
# Migrate an application to use passwordless connections with Azure Service Bus
Application requests to Azure Service Bus must be authenticated using either acc
The following code example demonstrates how to connect to Azure Service Bus using a connection string that includes an access key. When you create a Service Bus namespace, Azure generates these keys and connection strings automatically. Many developers gravitate towards this solution because it feels familiar to options they've worked with in the past. If your application currently uses connection strings, consider migrating to passwordless connections using the steps described in this document.
+## [.NET](#tab/dotnet)
+ ```csharp
-var serviceBusClient = new ServiceBusClient(
- "<NAMESPACE-CONNECTION-STRING>",
- clientOptions);
+await using var client = new ServiceBusClient("<CONNECTION-STRING>");
+```
+
+## [Java](#tab/java)
+
+**JMS:**
+
+```java
+ConnectionFactory factory = new ServiceBusJmsConnectionFactory(
+ "<CONNECTION-STRING>",
+ new ServiceBusJmsConnectionFactorySettings());
+```
+
+**Receiver client:**
+
+```java
+ServiceBusReceiverClient receiver = new ServiceBusClientBuilder()
+ .connectionString("<CONNECTION-STRING>")
+ .receiver()
+ .topicName("<TOPIC-NAME>")
+ .subscriptionName("<SUBSCRIPTION-NAME>")
+ .buildClient();
+```
+
+**Sender client:**
+
+```java
+ServiceBusSenderClient client = new ServiceBusClientBuilder()
+ .connectionString("<CONNECTION-STRING>")
+ .sender()
+ .queueName("<QUEUE-NAME>")
+ .buildClient();
+```
+
+## [Node.js](#tab/nodejs)
+
+```nodejs
+const client = new ServiceBusClient("<CONNECTION-STRING>");
+```
+
+## [Python](#tab/python)
+
+```python
+client = ServiceBusClient.from_connection_string(
+    conn_str = "<CONNECTION-STRING>"
+)
``` ++ Connection strings should be used with caution. Developers must be diligent to never expose the keys in an unsecure location. Anyone who gains access to the key is able to authenticate. For example, if an account key is accidentally checked into source control, sent through an unsecure email, pasted into the wrong chat, or viewed by someone who shouldn't have permission, there's risk of a malicious user accessing the application. Instead, consider updating your application to use passwordless connections. ## Migrate to passwordless connections
For local development, make sure you're authenticated with the same Azure AD acc
[!INCLUDE [default-azure-credential-sign-in](../../includes/passwordless/default-azure-credential-sign-in.md)]
-Next you'll need to update your code to use passwordless connections.
+Next, update your code to use passwordless connections.
+
+## [.NET](#tab/dotnet)
-1. To use `DefaultAzureCredential` in a .NET application, add the **Azure.Identity** NuGet package to your application.
+1. To use `DefaultAzureCredential` in a .NET application, install the `Azure.Identity` package:
```dotnetcli dotnet add package Azure.Identity ```
-1. At the top of your `Program.cs` file, add the following `using` statement:
+1. At the top of your file, add the following code:
```csharp using Azure.Identity; ```
-1. Identify the locations in your code that currently create a `ServiceBusClient` to connect to Azure Service Bus. This task is often handled in `Program.cs`, potentially as part of your service registration with the .NET dependency injection container. Update your code to match the following example:
+1. Identify the code that creates a `ServiceBusClient` object to connect to Azure Service Bus. Update your code to match the following example:
```csharp
- var clientOptions = new ServiceBusClientOptions
- {
- TransportType = ServiceBusTransportType.AmqpWebSockets
- };
-
- //TODO: Replace the "<SERVICE-BUS-NAMESPACE-NAME>" placeholder.
- client = new ServiceBusClient(
+ // TODO: Replace the <SERVICE-BUS-NAMESPACE-NAME> placeholder.
+ var client = new ServiceBusClient(
"<SERVICE-BUS-NAMESPACE-NAME>.servicebus.windows.net",
- new DefaultAzureCredential(),
- clientOptions);
+ new DefaultAzureCredential());
```
-1. Make sure to update the Service Bus namespace in the URI of your `ServiceBusClient`. You can find the namespace on the overview page of the Azure portal.
+## [Java](#tab/java)
+
+1. To use `DefaultAzureCredential`:
+ - In a JMS application, add at least version 1.0.0 of the `azure-servicebus-jms` package to your application:
+
+ ```xml
+ <dependency>
+ <groupId>com.microsoft.azure</groupId>
+ <artifactId>azure-servicebus-jms</artifactId>
+ <version>1.0.0</version>
+ </dependency>
+ ```
+
+ - In a Java application, install the `azure-identity` package via one of the following approaches:
+ - [Include the BOM file](/java/api/overview/azure/identity-readme?view=azure-java-stable&preserve-view=true#include-the-bom-file).
+ - [Include a direct dependency](/java/api/overview/azure/identity-readme?view=azure-java-stable&preserve-view=true#include-direct-dependency).
+
+1. At the top of your file, add the following code:
+
+ ```java
+ import com.azure.identity.DefaultAzureCredentialBuilder;
+ ```
+
+1. Update the code that connects to Azure Service Bus:
+ - In a JMS application, identify the code that creates a `ServiceBusJmsConnectionFactory` object to connect to Azure Service Bus. Update your code to match the following example:
+
+ ```java
+ DefaultAzureCredential credential = new DefaultAzureCredentialBuilder()
+ .build();
+
+ // TODO: Replace the <SERVICE-BUS-NAMESPACE-NAME> placeholder.
+ ConnectionFactory factory = new ServiceBusJmsConnectionFactory(
+ credential,
+ "<SERVICE-BUS-NAMESPACE-NAME>.servicebus.windows.net",
+ new ServiceBusJmsConnectionFactorySettings());
+ ```
+
+ - In a Java application, identify the code that creates a Service Bus sender or receiver client object to connect to Azure Service Bus. Update your code to match one of the following examples:
+
+ **Receiver client:**
+
+ ```java
+ DefaultAzureCredential credential = new DefaultAzureCredentialBuilder()
+ .build();
+
+ // TODO: Update the <SERVICE-BUS-NAMESPACE-NAME> placeholder.
+ ServiceBusReceiverClient receiver = new ServiceBusClientBuilder()
+ .credential("<SERVICE-BUS-NAMESPACE-NAME>.servicebus.windows.net", credential)
+ .receiver()
+ .topicName("<TOPIC-NAME>")
+ .subscriptionName("<SUBSCRIPTION-NAME>")
+ .buildClient();
+ ```
+
+ **Sender client:**
+
+ ```java
+ DefaultAzureCredential credential = new DefaultAzureCredentialBuilder()
+ .build();
+
+ // TODO: Update the <SERVICE-BUS-NAMESPACE-NAME> placeholder.
+ ServiceBusSenderClient client = new ServiceBusClientBuilder()
+ .credential("<SERVICE-BUS-NAMESPACE-NAME>.servicebus.windows.net", credential)
+ .sender()
+ .queueName("<QUEUE-NAME>")
+ .buildClient();
+ ```
+
+## [Node.js](#tab/nodejs)
+
+1. To use `DefaultAzureCredential` in a Node.js application, install the `@azure/identity` package:
+
+ ```bash
+ npm install --save @azure/identity
+ ```
+
+1. At the top of your file, add the following code:
+
+ ```nodejs
+ const { DefaultAzureCredential } = require("@azure/identity");
+ ```
+
+1. Identify the code that creates a `ServiceBusClient` object to connect to Azure Service Bus. Update your code to match the following example:
+
+ ```nodejs
+ const credential = new DefaultAzureCredential();
+
+ // TODO: Update the <SERVICE-BUS-NAMESPACE-NAME> placeholder.
+ const client = new ServiceBusClient(
+ "<SERVICE-BUS-NAMESPACE-NAME>.servicebus.windows.net",
+ credential
+ );
+ ```
+
+## [Python](#tab/python)
+
+1. To use `DefaultAzureCredential` in a Python application, install the `azure-identity` package:
+
+ ```bash
+ pip install azure-identity
+ ```
+
+1. At the top of your file, add the following code:
+
+ ```python
+ from azure.identity import DefaultAzureCredential
+ ```
+
+1. Identify the code that creates a `ServiceBusClient` object to connect to Azure Service Bus. Update your code to match the following example:
+
+ ```python
+ credential = DefaultAzureCredential()
+
+ # TODO: Update the <SERVICE-BUS-NAMESPACE-NAME> placeholder.
+ client = ServiceBusClient(
+ fully_qualified_namespace = "<SERVICE-BUS-NAMESPACE-NAME>.servicebus.windows.net",
+ credential = credential
+ )
+ ```
++ #### Run the app locally
You can assign a managed identity to an Azure Kubernetes Service (AKS) instance
```azurecli az aks update \ --resource-group <resource-group-name> \
- --name <virtual-machine-name>
+ --name <cluster-name> \
--enable-managed-identity ```
If you connected your services using the Service Connector you don't need to com
### [Azure CLI](#tab/assign-role-azure-cli)
-To assign a role at the resource level using the Azure CLI, you first must retrieve the resource ID using the `az servicebus show` command. You can filter the output properties using the --query parameter.
+To assign a role at the resource level using the Azure CLI, you first must retrieve the resource ID using the `az servicebus show` command. You can filter the output properties using the `--query` parameter.
```azurecli az servicebus show \
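The `show` command above is truncated in this digest. After you retrieve the resource ID, the follow-up role assignment might look like the following sketch; the assignee and scope values are placeholders.

```azurecli
# Hypothetical example: assign the Azure Service Bus Data Owner role at the
# scope of the resource ID returned by the show command above.
az role assignment create \
    --assignee "<user-or-application-id>" \
    --role "Azure Service Bus Data Owner" \
    --scope "<service-bus-resource-id>"
```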
service-bus-messaging Service Bus Resource Manager Exceptions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-resource-manager-exceptions.md
Title: Azure Service Bus Resource Manager exceptions | Microsoft Docs description: List of Service Bus exceptions surfaced by Azure Resource Manager and suggested actions. + Last updated 10/25/2022
service-bus-messaging Service Bus Resource Manager Namespace Auth Rule https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-resource-manager-namespace-auth-rule.md
dotnet Last updated 09/27/2021 -+ ms.devlang: azurecli
service-bus-messaging Service Bus Resource Manager Namespace Queue Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-resource-manager-namespace-queue-bicep.md
Last updated 08/24/2022 dotnet-+ # Quickstart: Create a Service Bus namespace and a queue using a Bicep file
service-bus-messaging Service Bus Resource Manager Namespace Queue https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-resource-manager-namespace-queue.md
Last updated 08/25/2022 dotnet-+ # Quickstart: Create a Service Bus namespace and a queue using an ARM template
service-bus-messaging Service Bus Resource Manager Namespace Topic With Rule https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-resource-manager-namespace-topic-with-rule.md
dotnet Last updated 09/27/2021 -+ ms.devlang: azurecli
service-bus-messaging Service Bus Resource Manager Namespace Topic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-resource-manager-namespace-topic.md
Last updated 09/27/2021 dotnet-+ ms.devlang: azurecli
service-bus-messaging Service Bus Resource Manager Namespace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-resource-manager-namespace.md
documentationcenter: .net
dotnet+ Last updated 09/27/2021
service-bus-messaging Service Bus Resource Manager Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-resource-manager-overview.md
documentationcenter: .net
dotnet+ Last updated 09/20/2021
service-bus-messaging Service Bus Service Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-service-endpoints.md
Title: Configure virtual network service endpoints for Azure Service Bus
description: This article provides information on how to add a Microsoft.ServiceBus service endpoint to a virtual network. Last updated 02/16/2023-+ # Allow access to Azure Service Bus namespace from specific virtual networks
service-bus-messaging Transport Layer Security Configure Minimum Version https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/transport-layer-security-configure-minimum-version.md
-+ Last updated 09/26/2022
service-connector How To Troubleshoot Front End Error https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/how-to-troubleshoot-front-end-error.md
Last updated 5/25/2022--- ignite-fall-2021-- kr2b-contr-experiment-- event-tier1-build-2022+ # How to troubleshoot with Service Connector
service-connector Quickstart Cli App Service Connection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/quickstart-cli-app-service-connection.md
Previously updated : 09/15/2022 Last updated : 04/13/2023 ms.devlang: azurecli-+ # Quickstart: Create a service connection in App Service with the Azure CLI
The [Azure CLI](/cli/azure) is a set of commands used to create and manage Azure
- This quickstart assumes that you already have at least an App Service running on Azure. If you don't have an App Service, [create one](../app-service/quickstart-dotnetcore.md).
-## View supported target service types
+## Initial set-up
-Use the Azure CLI [az webapp connection list](/cli/azure/webapp/connection#az-webapp-connection-list) command to get a list of supported target services for App Service.
+1. If you're using Service Connector for the first time, start by running the command [az provider register](/cli/azure/provider#az-provider-register) to register the Service Connector resource provider.
-```azurecli-interactive
-az provider register -n Microsoft.ServiceLinker
-az webapp connection list-support-types --output table
-```
+ ```azurecli
+ az provider register -n Microsoft.ServiceLinker
+ ```
+
+ > [!TIP]
+ > You can check if the resource provider has already been registered by running the command `az provider show -n "Microsoft.ServiceLinker" --query registrationState`. If the output is `Registered`, then Service Connector has already been registered.
++
+1. Optionally, use the Azure CLI [az webapp connection list-support-types](/cli/azure/webapp/connection#az-webapp-connection-list-support-types) command to get a list of supported target services for App Service.
+ ```azurecli
+ az webapp connection list-support-types --output table
+ ```
+
## Create a service connection
-#### [Using Access Key](#tab/Using-access-key)
+#### [Using an access key](#tab/Using-access-key)
Use the Azure CLI [az webapp connection create](/cli/azure/webapp/connection/create) command to create a service connection to an Azure Blob Storage with an access key, providing the following information:
Use the Azure CLI [az webapp connection create](/cli/azure/webapp/connection/cre
- **Target service resource group name:** the resource group name of the Blob Storage. - **Storage account name:** the account name of your Blob Storage.
-```azurecli-interactive
+```azurecli
az webapp connection create storage-blob --secret ``` > [!NOTE] > If you don't have a Blob Storage, you can run `az webapp connection create storage-blob --new --secret` to provision a new one and directly get connected to your app service.
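If you'd rather pass the values listed above explicitly than respond to interactive prompts, a full invocation might look like the following sketch; all resource names are placeholders.

```azurecli
az webapp connection create storage-blob \
    --resource-group "<app-service-resource-group>" \
    --name "<app-service-name>" \
    --target-resource-group "<storage-account-resource-group>" \
    --account "<storage-account-name>" \
    --secret
```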
-#### [Using Managed Identity](#tab/Using-Managed-Identity)
+#### [Using a managed identity](#tab/Using-Managed-Identity)
> [!IMPORTANT] > Using a managed identity requires permission to perform [Azure AD role assignments](../active-directory/managed-identities-azure-resources/howto-assign-access-portal.md). If you don't have this permission, connection creation fails. Ask your subscription owner for the permission, or use an access key to create the connection instead.
Use the Azure CLI [az webapp connection](/cli/azure/webapp/connection) command t
- **Target service resource group name:** the resource group name of the Blob Storage. - **Storage account name:** the account name of your Blob Storage.
-```azurecli-interactive
+```azurecli
az webapp connection create storage-blob --system-identity ```
Use the Azure CLI [az webapp connection](/cli/azure/webapp/connection) command t
- **Source compute service resource group name:** the resource group name of the App Service. - **App Service name:** the name of your App Service that connects to the target service.
-```azurecli-interactive
+```azurecli
az webapp connection list -g "<your-app-service-resource-group>" -n "<your-app-service-name>" --output table ```
service-connector Quickstart Cli Container Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/quickstart-cli-container-apps.md
Previously updated : 08/09/2022 Last updated : 04/13/2023 ms.devlang: azurecli+ # Quickstart: Create a service connection in Container Apps with the Azure CLI
This quickstart shows you how to connect Azure Container Apps to other Cloud res
- The Container Apps extension must be installed in the Azure CLI or the Cloud Shell. To install it, run `az extension add --name containerapp`.
-## Prepare to create a connection
+## Initial set-up
-1. Run the command [az provider register](/cli/azure/provider#az-provider-register) to start using Service Connector.
+1. If you're using Service Connector for the first time, start by running the command [az provider register](/cli/azure/provider#az-provider-register) to register the Service Connector resource provider.
- ```azurecli-interactive
+ ```azurecli
az provider register -n Microsoft.ServiceLinker ```
-1. Run the command `az containerapp connection` to get a list of supported target services for Container Apps.
+ > [!TIP]
+ > You can check if the resource provider has already been registered by running the command `az provider show -n "Microsoft.ServiceLinker" --query registrationState`. If the output is `Registered`, then Service Connector has already been registered.
- ```azurecli-interactive
+1. Optionally, run the command [az containerapp connection list-support-types](/cli/azure/containerapp/connection#az-containerapp-connection-list-support-types) to get a list of supported target services for Container Apps.
+
+ ```azurecli
az containerapp connection list-support-types --output table ```
You can create a connection using an access key or a managed identity.
1. Run the `az containerapp connection create` command to create a service connection between Container Apps and Azure Blob Storage with an access key.
- ```azurecli-interactive
+ ```azurecli
az containerapp connection create storage-blob --secret ```
You can create a connection using an access key or a managed identity.
1. Run the `az containerapp connection create` command to create a service connection from Container Apps to a Blob Storage with a system-assigned managed identity.
- ```azurecli-interactive
+ ```azurecli
az containerapp connection create storage-blob --system-identity ```
You can create a connection using an access key or a managed identity.
Use the Azure CLI command `az containerapp connection list` to list all your container app's provisioned connections. Replace the placeholders `<container-app-resource-group>` and `<container-app-name>` from the command below with the resource group and name of your container app. You can also remove the `--output table` option to view more information about your connections.
-```azurecli-interactive
+```azurecli
az containerapp connection list -g "<container-app-resource-group>" --name "<container-app-name>" --output table ```
service-connector Quickstart Cli Spring Cloud Connection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/quickstart-cli-spring-cloud-connection.md
Previously updated : 08/09/2022 Last updated : 04/13/2023 ms.devlang: azurecli
Service Connector lets you quickly connect compute services to cloud services, w
- The Azure Spring Apps extension must be installed in the Azure CLI or the Cloud Shell. To install it, run `az extension add --name spring`.
-## Prepare to create a connection
+## Initial set-up
1. If you're using Service Connector for the first time, start by running the command [az provider register](/cli/azure/provider#az-provider-register) to register the Service Connector resource provider.
- ```azurecli-interactive
+ ```azurecli
az provider register -n Microsoft.ServiceLinker ```
-1. Run the command `az spring connection` to get a list of supported target services for Azure Spring Apps.
+ > [!TIP]
+ > You can check if the resource provider has already been registered by running the command `az provider show -n "Microsoft.ServiceLinker" --query registrationState`. If the output is `Registered`, then Service Connector has already been registered.
- ```azurecli-interactive
+
+1. Optionally, run the command [az spring connection list-support-types](/cli/azure/spring/connection#az-spring-connection-list-support-types) to get a list of supported target services for Azure Spring Apps.
+
+ ```azurecli
az spring connection list-support-types --output table ```
You can create a connection from Azure Spring Apps using an access key or a mana
1. Run the `az spring connection create` command to create a service connection between Azure Spring Apps and an Azure Blob Storage with an access key.
- ```azurecli-interactive
+ ```azurecli
az spring connection create storage-blob --secret ```
service-fabric Create Load Balancer Rule https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/create-load-balancer-rule.md
+ Last updated 07/11/2022
service-fabric How To Deploy Custom Image https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/how-to-deploy-custom-image.md
+ Last updated 09/15/2022
service-fabric How To Managed Cluster App Deployment Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/how-to-managed-cluster-app-deployment-template.md
+ Last updated 07/11/2022
service-fabric How To Managed Cluster Application Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/how-to-managed-cluster-application-managed-identity.md
+ Last updated 07/11/2022
service-fabric How To Managed Cluster Availability Zones https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/how-to-managed-cluster-availability-zones.md
Last updated 11/09/2022-+ # Deploy a Service Fabric managed cluster across availability zones Availability Zones in Azure are a high-availability offering that protects your applications and data from datacenter failures. An Availability Zone is a unique physical location equipped with independent power, cooling, and networking within an Azure region.
service-fabric How To Managed Cluster Enable Disk Encryption https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/how-to-managed-cluster-enable-disk-encryption.md
-+ Last updated 07/11/2022
service-fabric How To Managed Cluster Managed Disk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/how-to-managed-cluster-managed-disk.md
+ Last updated 07/11/2022
service-fabric Quickstart Cluster Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/quickstart-cluster-bicep.md
Last updated 06/22/2022 -+ # Quickstart: Create a Service Fabric cluster using Bicep
service-fabric Quickstart Cluster Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/quickstart-cluster-template.md
+ Last updated 07/11/2022
service-fabric Quickstart Managed Cluster Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/quickstart-managed-cluster-template.md
+ Last updated 07/14/2022
service-fabric Samples Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/samples-cli.md
+ Last updated 07/14/2022
service-fabric Service Fabric Powershell Add Nsg Rule https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/scripts/service-fabric-powershell-add-nsg-rule.md
-
+ Title: Add a network security group rule in PowerShell description: Azure PowerShell Script Sample - Adds a network security group to allow inbound traffic on a specific port.
Last updated 11/28/2017 -+ # Add an inbound network security group rule
service-fabric Service Fabric Powershell Change Rdp Port Range https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/scripts/service-fabric-powershell-change-rdp-port-range.md
-
+ Title: Azure PowerShell Script Sample - Change the RDP port range | Microsoft Docs description: Azure PowerShell Script Sample - Changes the RDP port range of a deployed cluster. tags: azure-service-management+
service-fabric Service Fabric Powershell Change Rdp User And Pw https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/scripts/service-fabric-powershell-change-rdp-user-and-pw.md
Last updated 03/19/2018 -+ # Update the admin username and password of the VMs in a cluster
service-fabric Service Fabric Powershell Deploy Application https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/scripts/service-fabric-powershell-deploy-application.md
Last updated 01/18/2018 -+ # Deploy an application to a Service Fabric cluster
service-fabric Service Fabric Powershell Open Port In Load Balancer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/scripts/service-fabric-powershell-open-port-in-load-balancer.md
Last updated 05/18/2018 -+ # Open an application port in the Azure load balancer
service-fabric Service Fabric Powershell Remove Application https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/scripts/service-fabric-powershell-remove-application.md
Last updated 01/18/2018 -+ # Remove an application from a Service Fabric cluster using PowerShell
service-fabric Service Fabric Powershell Upgrade Application https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/scripts/service-fabric-powershell-upgrade-application.md
Last updated 01/18/2018 -+ # Upgrade a Service Fabric application
service-fabric Service Fabric Application Arm Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-application-arm-resource.md
+ Last updated 07/14/2022
service-fabric Service Fabric Azure Resource Manager Guardrails https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-azure-resource-manager-guardrails.md
+ Last updated 07/14/2022
service-fabric Service Fabric Cluster Change Cert Thumbprint To Cn https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-cluster-change-cert-thumbprint-to-cn.md
+ Last updated 07/14/2022
service-fabric Service Fabric Cluster Creation Via Arm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-cluster-creation-via-arm.md
-+ Last updated 07/14/2022
service-fabric Service Fabric Cluster Security Update Certs Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-cluster-security-update-certs-azure.md
+ Last updated 07/14/2022
Read these articles for more information on cluster management:
[Json_Pub_Setting3]: ./media/service-fabric-cluster-security-update-certs-azure/SecurityConfigurations_16.PNG [Json_Pub_Setting4]: ./media/service-fabric-cluster-security-update-certs-azure/SecurityConfigurations_17.PNG [Json_Pub_Setting5]: ./media/service-fabric-cluster-security-update-certs-azure/SecurityConfigurations_18.PNG--
service-fabric Service Fabric Concept Resource Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-concept-resource-model.md
+ Last updated 07/14/2022
Get information about the application resource model:
[CreateBlob]: ./media/service-fabric-application-model/create-blob.png [PackageApplication]: ./media/service-fabric-application-model/package-application.png [ZipApplication]: ./media/service-fabric-application-model/zip-application.png
-[UploadAppPkg]: ./media/service-fabric-application-model/upload-app-pkg.png
+[UploadAppPkg]: ./media/service-fabric-application-model/upload-app-pkg.png
service-fabric Service Fabric Diagnostics Event Aggregation Wad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-diagnostics-event-aggregation-wad.md
+ Last updated 07/14/2022
service-fabric Service Fabric Diagnostics Oms Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-diagnostics-oms-agent.md
+ Last updated 07/14/2022
service-fabric Service Fabric Diagnostics Oms Setup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-diagnostics-oms-setup.md
+ Last updated 07/14/2022
service-fabric Service Fabric Enable Azure Disk Encryption Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-enable-azure-disk-encryption-linux.md
-+ Last updated 07/14/2022
service-fabric Service Fabric Enable Azure Disk Encryption Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-enable-azure-disk-encryption-windows.md
-+ Last updated 07/14/2022
service-fabric Service Fabric Powershell Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-powershell-samples.md
+ Last updated 07/11/2022
service-fabric Service Fabric Reverseproxy Setup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-reverseproxy-setup.md
+ Last updated 07/11/2022
Several fabric settings are used to help establish secure communication between
## Next steps * [Set up forwarding to secure HTTP service with the reverse proxy](service-fabric-reverseproxy-configure-secure-communication.md)
-* For reverse proxy configuration options, see [ApplicationGateway/Http section in Customize Service Fabric cluster settings](service-fabric-cluster-fabric-settings.md#applicationgatewayhttp).
+* For reverse proxy configuration options, see [ApplicationGateway/Http section in Customize Service Fabric cluster settings](service-fabric-cluster-fabric-settings.md#applicationgatewayhttp).
service-fabric Service Fabric Tutorial Create Vnet And Linux Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-tutorial-create-vnet-and-linux-cluster.md
+ Last updated 07/14/2022
service-health Alerts Activity Log Service Notifications Arm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-health/alerts-activity-log-service-notifications-arm.md
Title: Receive activity log alerts on Azure service notifications using Resource
description: Get notified via SMS, email, or webhook when Azure service occurs. Last updated 05/13/2022 -+ # Quickstart: Create activity log alerts on service notifications using an ARM template
service-health Alerts Activity Log Service Notifications Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-health/alerts-activity-log-service-notifications-bicep.md
Title: Receive activity log alerts on Azure service notifications using Bicep
description: Get notified via SMS, email, or webhook when Azure service occurs. Last updated 05/13/2022 -+ # Quickstart: Create activity log alerts on service notifications using a Bicep file
site-recovery Asr Arm Templates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/asr-arm-templates.md
Last updated 02/18/2021 -+ # Azure Resource Manager templates for Azure Site Recovery
The following table includes links to Azure Resource Manager templates for using
| [Enable Replication for Azure VMs](https://aka.ms/asr-arm-enable-replication) | Enable replication for Azure VMs using the existing Vault and custom Target Settings.| | [Trigger Failover and Reprotect](https://aka.ms/asr-arm-failover-reprotect) | Trigger a Failover and Reprotect operation for a set of Azure VMs. | | [Run an End to End DR Flow for Azure VMs](https://aka.ms/asr-arm-e2e-flow) | Start a complete End to End Disaster Recovery Flow (Enable Replication + Failover and Reprotect + Failback and Reprotect) for Azure VMs, also called as 540┬░ flow.|
-| | |
+| | |
site-recovery Hyper V Azure Powershell Resource Manager https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/hyper-v-azure-powershell-resource-manager.md
Title: Hyper-V VM disaster recovery using Azure Site Recovery and PowerShell
description: Automate disaster recovery of Hyper-V VMs to Azure with the Azure Site Recovery service using PowerShell and Azure Resource Manager. -+ Last updated 01/10/2020
site-recovery Quickstart Create Vault Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/quickstart-create-vault-bicep.md
Last updated 06/27/2022 -+ # Quickstart: Create a Recovery Services vault using Bicep
site-recovery Quickstart Create Vault Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/quickstart-create-vault-template.md
Title: Quickstart to create an Azure Recovery Services vault using an Azure Reso
description: In this quickstart, you learn how to create an Azure Recovery Services vault using an Azure Resource Manager template (ARM template). Last updated 09/21/2022 -+
site-recovery Vmware Physical Azure Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-physical-azure-support-matrix.md
Guest/server disk with 4K logical and 512-bytes physical sector size | No
Guest/server volume with striped disk >4 TB | Yes Logical volume management (LVM)| Thick provisioning - Yes <br></br> Thin provisioning - Yes, it is supported from [Update Rollup 61](https://support.microsoft.com/topic/update-rollup-61-for-azure-site-recovery-kb5012960-a1cc029b-03ad-446f-9365-a00b41025d39) onwards. It wasn't supported in earlier Mobility service versions. Guest/server - Storage Spaces | No
-Guest/server - NVMe interface | No
+Guest/server - NVMe interface | Yes
Guest/server hot add/remove disk | No Guest/server - exclude disk | Yes Guest/server multipath (MPIO) | No
spring-apps Breaking Changes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/breaking-changes.md
Last updated 05/25/2022-+ # Azure Spring Apps API breaking changes
spring-apps Concept Outbound Type https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/concept-outbound-type.md
**This article applies to:** ✔️ Java ✔️ C#
-**This article applies to:** ✔️ Basic/Standard tier ✔️ Enterprise tier
+**This article applies to:** ✔️ Basic/Standard ✔️ Enterprise
This article describes how to customize an instance's egress route to support custom network scenarios. For example, you might want to customize an instance's egress route for networks that disallow public IPs and require the instance to sit behind a network virtual appliance (NVA).
-By default, Azure Spring Apps provisions a Standard SKU Load Balancer that you can set up and use for egress. However, the default setup may not meet the requirements of all scenarios. For example, public IPs may not be allowed, or more hops may be required for egress.
+By default, Azure Spring Apps provisions a Standard SKU Load Balancer that you can set up and use for egress. However, the default setup may not meet the requirements of all scenarios. For example, public IPs may not be allowed, or more hops may be required for egress. When you use this feature to customize egress, Azure Spring Apps doesn't create public IP resources.
## Prerequisites - All prerequisites for deploying Azure Spring Apps in a virtual network. For more information, see [Deploy Azure Spring Apps in a virtual network](how-to-deploy-in-azure-virtual-network.md).-- An API version of *2022-09-01 preview* or greater.
+- An API version of `2022-09-01 preview` or greater.
- [Azure CLI version 1.1.7 or later](/cli/azure/install-azure-cli). ## Limitations
The default `outboundType` value is `loadBalancer`. If `outboundType` is set to
> [!NOTE] > Using an outbound type is an advanced networking scenario and requires proper network configuration.
-If `outboundType` is set to `userDefinedRouting`, Azure Spring Apps won't automatically configure egress paths. You must set up egress paths yourself. You could still find two load balancers in your resource group. They're only used for internal traffic and won't expose any public IP. You must prepare two route tables associated with two subnets: one to service the runtime and another for the user app.
+If `outboundType` is set to `userDefinedRouting`, Azure Spring Apps doesn't automatically configure egress paths. You must set up egress paths yourself. You could still find two load balancers in your resource group. They're only used for internal traffic and don't expose any public IP. You must prepare two route tables associated with two subnets: one to service the runtime and another for the user app.
> [!IMPORTANT] > An `outboundType` of `userDefinedRouting` requires a route for `0.0.0.0/0` and the next hop destination of a network virtual appliance in the route table. For more information, see [Customer responsibilities for running Azure Spring Apps in a virtual network](vnet-customer-responsibilities.md).
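As a sketch of the user-defined routing setup described above, each of the two subnets gets a route table whose default route points at the network virtual appliance. Resource names and the NVA IP address are placeholders.

```azurecli
# Hypothetical example: create a route table with a 0.0.0.0/0 route to the NVA
# and associate it with one of the Azure Spring Apps subnets. Repeat for the
# service runtime subnet and the app subnet.
az network route-table create \
    --resource-group "<resource-group-name>" \
    --name "<route-table-name>"

az network route-table route create \
    --resource-group "<resource-group-name>" \
    --route-table-name "<route-table-name>" \
    --name default-route \
    --address-prefix 0.0.0.0/0 \
    --next-hop-type VirtualAppliance \
    --next-hop-ip-address "<nva-private-ip>"

az network vnet subnet update \
    --resource-group "<resource-group-name>" \
    --vnet-name "<virtual-network-name>" \
    --name "<apps-or-runtime-subnet-name>" \
    --route-table "<route-table-name>"
```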
spring-apps How To Bind Cosmos https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-bind-cosmos.md
Instead of manually configuring your Spring Boot applications, you can automatic
## Prerequisites * A deployed Azure Spring Apps instance.
-* An Azure Cache for Redis service instance.
+* An Azure Cosmos DB account and a database.
* The Azure Spring Apps extension for the Azure CLI. If you don't have a deployed Azure Spring Apps instance, follow the steps in the [Quickstart: Deploy your first application to Azure Spring Apps](./quickstart.md).
If you don't have a deployed Azure Spring Apps instance, follow the steps in the
<dependency> <groupId>com.azure.spring</groupId> <artifactId>spring-cloud-azure-starter-data-cosmos</artifactId>
- <version>4.3.0</version>
+ <version>4.7.0</version>
</dependency> ```
If you don't have a deployed Azure Spring Apps instance, follow the steps in the
<dependency> <groupId>com.azure.spring</groupId> <artifactId>spring-cloud-azure-starter-storage-blob</artifactId>
- <version>4.3.0</version>
+ <version>4.7.0</version>
</dependency> ```
spring-apps How To Cicd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-cicd.md
Last updated 09/13/2021 -+ zone_pivot_groups: programming-languages-spring-apps
spring-apps How To Create User Defined Route Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-create-user-defined-route-instance.md
**This article applies to:** ✔️ Java ✔️ C#
-**This article applies to:** ✔️ Basic/Standard tier ✔️ Enterprise tier
+**This article applies to:** ✔️ Basic/Standard ✔️ Enterprise
This article describes how to secure outbound traffic from your applications hosted in Azure Spring Apps. The article provides an example of a user-defined route. A user-defined route is an advanced feature that lets you fully control egress traffic. You can use a user-defined route in scenarios such as disallowing an Azure Spring Apps autogenerated public IP address.
The following illustration shows an example of an Azure Spring Apps virtual netw
This diagram illustrates the following features of the architecture:
-* Public ingress traffic must flow through firewall filters.
-* Each Azure Spring Apps instance is isolated within a dedicated subnet.
-* The firewall is owned and managed by customers.
-* This structure ensures that the firewall enables a healthy environment for all the functions you need.
+- Public ingress traffic must flow through firewall filters.
+- Each Azure Spring Apps instance is isolated within a dedicated subnet.
+- Customers own and manage the firewall.
+- This structure ensures that the firewall enables a healthy environment for all the functions you need.
+- Azure Spring Apps doesn't automatically generate public IP resources.
### Define environment variables
az network vnet subnet create \
Use the following command to create and set up an Azure Firewall instance with a user-defined route, and to configure Azure Firewall outbound rules. The firewall lets you configure granular egress traffic rules from Azure Spring Apps. > [!IMPORTANT]
-> If your cluster or application creates a large number of outbound connections directed to the same destination or to a small subset of destinations, you might require more firewall front-end IP addresses to avoid reaching the maximum ports per front-end IP address. For more information on how to create an Azure Firewall instance with multiple IP addresses, see [Quickstart: Create an Azure Firewall instance with multiple public IP addresses - ARM template](../firewall/quick-create-multiple-ip-template.md). Create a Standard SKU public IP resource that will be used as the Azure Firewall front-end address.
+> If your cluster or application creates a large number of outbound connections directed to the same destination or to a small subset of destinations, you might require more firewall front-end IP addresses to avoid reaching the maximum ports per front-end IP address. For more information on how to create an Azure Firewall instance with multiple IP addresses, see [Quickstart: Create an Azure Firewall instance with multiple public IP addresses - ARM template](../firewall/quick-create-multiple-ip-template.md). Create a Standard SKU public IP resource for use as the Azure Firewall front-end address.
```azurecli az network public-ip create \
az network firewall create \
The following example shows how to assign the IP address that you created to the firewall front end. > [!NOTE]
-> Setting up the public IP address to the Azure Firewall instance might take a few minutes. To use a fully qualified domain name (FQDN) on network rules, enable a DNS proxy. After you enable the proxy, the firewall will listen on port 53 and forward DNS requests to the specified DNS server. The firewall can then translate the FQDN automatically.
+> Setting up the public IP address to the Azure Firewall instance might take a few minutes. To use a fully qualified domain name (FQDN) on network rules, enable a DNS proxy. After you enable the proxy, the firewall listens on port 53 and forwards DNS requests to the specified DNS server. The firewall can then translate the FQDN automatically.
```azurecli # Configure the firewall IP address.
az spring create \
--outbound-type userDefinedRouting ```
-You can now access the public IP address of the firewall from the internet. The firewall will route traffic into Azure Spring Apps subnets according to your routing rules.
+You can now access the public IP address of the firewall from the internet. The firewall routes traffic into Azure Spring Apps subnets according to your routing rules.
## Next steps
spring-apps How To Custom Persistent Storage With Standard Consumption https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-custom-persistent-storage-with-standard-consumption.md
You can also mount your own persistent storage not only to Azure Spring Apps but
## Prerequisites - An Azure subscription. If you don't have a subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.-- [Azure CLI](/cli/azure/install-azure-cli) version 2.28.0 or higher.
+- [Azure CLI](/cli/azure/install-azure-cli) version 2.45.0 or higher.
- An Azure Spring Apps Standard consumption plan service instance. For more information, see [Quickstart: Provision an Azure Spring Apps Standard consumption plan service instance](quickstart-provision-standard-consumption-service-instance.md). - A Spring app deployed to Azure Spring Apps. For more information, see [Quickstart: Build and deploy apps to Azure Spring Apps](quickstart-deploy-apps.md).
spring-apps How To Custom Persistent Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-custom-persistent-storage.md
When you use your own persistent storage, artifacts generated by your applicatio
## Prerequisites - An existing Azure Storage Account and a pre-created Azure File Share. If you need to create a storage account and file share in Azure, see [Create an SMB Azure file share](../storage/files/storage-how-to-create-file-share.md).-- [Azure CLI](/cli/azure/install-azure-cli), version 2.0.67 or higher.
+- [Azure CLI](/cli/azure/install-azure-cli), version 2.45.0 or higher.
> [!IMPORTANT] > If you deployed your Azure Spring Apps in your own virtual network and you want the storage account to be accessed only from the virtual network, see [Use private endpoints for Azure Storage](../storage/common/storage-private-endpoints.md) and the [Grant access from a virtual network](../storage/common/storage-network-security.md#grant-access-from-a-virtual-network) section of [Configure Azure Storage firewalls and virtual networks](../storage/common/storage-network-security.md).
spring-apps How To Enable System Assigned Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-enable-system-assigned-managed-identity.md
If you're unfamiliar with managed identities for Azure resources, see the [Manag
::: zone pivot="sc-enterprise" - An already provisioned Azure Spring Apps Enterprise tier instance. For more information, see [Quickstart: Build and deploy apps to Azure Spring Apps using the Enterprise tier](quickstart-deploy-apps-enterprise.md).-- [Azure CLI version 2.30.0 or higher](/cli/azure/install-azure-cli).
+- [Azure CLI version 2.45.0 or higher](/cli/azure/install-azure-cli).
- [!INCLUDE [install-app-user-identity-extension](includes/install-app-user-identity-extension.md)] ::: zone-end
If you're unfamiliar with managed identities for Azure resources, see the [Manag
::: zone pivot="sc-standard" - An already provisioned Azure Spring Apps instance. For more information, see [Quickstart: Deploy your first application to Azure Spring Apps](./quickstart.md).-- [Azure CLI version 2.30.0 or higher](/cli/azure/install-azure-cli).
+- [Azure CLI version 2.45.0 or higher](/cli/azure/install-azure-cli).
- [!INCLUDE [install-app-user-identity-extension](includes/install-app-user-identity-extension.md)] ::: zone-end
spring-apps How To Enterprise Deploy Polyglot Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-enterprise-deploy-polyglot-apps.md
This article shows you how to deploy polyglot apps in Azure Spring Apps Enterpri
## Prerequisites - An already provisioned Azure Spring Apps Enterprise tier instance. For more information, see [Quickstart: Build and deploy apps to Azure Spring Apps using the Enterprise tier](quickstart-deploy-apps-enterprise.md).-- [Azure CLI](/cli/azure/install-azure-cli), version 2.43.0 or higher.
+- [Azure CLI](/cli/azure/install-azure-cli), version 2.45.0 or higher.
## Deploy a polyglot application
spring-apps How To Enterprise Deploy Static File https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-enterprise-deploy-static-file.md
> [!NOTE] > Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams.
-**This article applies to:** ❌ Basic/Standard tier ✔️ Enterprise tier
+**This article applies to:** ❌ Basic/Standard ✔️ Enterprise
This article shows you how to deploy your static files to Azure Spring Apps Enterprise tier using the Tanzu Web Servers buildpack. This approach is useful if you have applications that are purely for holding static files like HTML, CSS, or front-end applications built with the JavaScript framework of your choice. You can directly deploy these applications with an automatically configured web server (HTTPD and NGINX) to serve those assets.
This article shows you how to deploy your static files to Azure Spring Apps Ente
- An already provisioned Azure Spring Apps Enterprise tier instance. For more information, see [Quickstart: Build and deploy apps to Azure Spring Apps using the Enterprise tier](quickstart-deploy-apps-enterprise.md). - One or more applications running in Azure Spring Apps. For more information on creating apps, see [How to Deploy Spring Boot applications from Azure CLI](./how-to-launch-from-source.md).-- [Azure CLI](/cli/azure/install-azure-cli), version 2.0.67 or higher.-- Your static files or dynamic front-end application.
+- [Azure CLI](/cli/azure/install-azure-cli), version 2.45.0 or higher.
+- Your static files or dynamic front-end application, such as a React app.
## Deploy your static files You can deploy static files to Azure Spring Apps using NGINX or HTTPD web servers in the following ways: - You can deploy static files directly. Azure Spring Apps automatically configures the specified web server to serve the static files.-- You can create your front-end application in the JavaScript framework of your choice, and then deploy your dynamic front-end application as static content.-- You can create a server configuration file to customize the web server.
+- You can create your front-end application in the JavaScript framework of your choice, and then deploy your dynamic front-end application from source code. Azure Spring Apps builds your app into static content and uses your configured web server to serve the static files.
+
+You can also create a server configuration file to customize the web server.
### Deploy static files directly
-Use the following command to deploy static files directly using an auto-generated default server configuration file.
+Use the following command to deploy static files directly using an autogenerated default server configuration file.
```azurecli az spring app deploy
az spring app deploy
--build-env BP_WEB_SERVER=nginx ```
-For more information, see the [Configure an auto-generated server configuration file](#configure-an-auto-generated-server-configuration-file) section of this article.
+For more information, see the [Configure an autogenerated server configuration file](#configure-an-autogenerated-server-configuration-file) section of this article.
### Deploy your front-end application as static content
-Use the following command to deploy a dynamic front-end application as static content.
+Use the following command to deploy a dynamic front-end application from source code.
```azurecli az spring app deploy
For more information, see the [Using a customized server configuration file](#us
The [Paketo buildpacks samples](https://github.com/paketo-buildpacks/samples/tree/main/web-servers) demonstrate common use cases for several different application types, including the following use cases: - Serving static files with a default server configuration file using `BP_WEB_SERVER` to select either [HTTPD](https://github.com/paketo-buildpacks/samples/blob/main/web-servers/no-config-file-sample/HTTPD.md) or [NGINX](https://github.com/paketo-buildpacks/samples/blob/main/web-servers/no-config-file-sample/NGINX.md).-- Using Node Package Manager to build a [React app](https://github.com/paketo-buildpacks/samples/tree/main/web-servers/javascript-frontend-sample) into static files that can be served by a web server. Use the following steps:
+- Using Node Package Manager to build a [React app](https://github.com/paketo-buildpacks/samples/tree/main/web-servers/javascript-frontend-sample) into static files that a web server can serve. Use the following steps:
1. Define a script under the `scripts` property of the *package.json* file that builds your production-ready static assets. For React, it's `build`. 1. Find out where static assets are stored after the build script runs. For React, static assets are stored in `./build` by default. 1. Set `BP_NODE_RUN_SCRIPTS` to the name of the build script. 1. Set `BP_WEB_SERVER_ROOT` to the build output directory. - Serving static files with your own server configuration file, using either [HTTPD](https://github.com/paketo-buildpacks/samples/tree/main/web-servers/httpd-sample) or [NGINX](https://github.com/paketo-buildpacks/samples/tree/main/web-servers/nginx-sample).
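As a rough sketch of how those steps come together, the following hypothetical command deploys a React app from source and sets the build environment variables described in the list. The script name `build` and the `build` output directory are assumptions based on React defaults, and the app and service instance names are placeholders.

```azurecli
az spring app deploy \
    --resource-group <resource-group-name> \
    --service <Azure-Spring-Apps-service-instance-name> \
    --name <app-name> \
    --source-path ./my-react-app \
    --build-env BP_WEB_SERVER=nginx BP_NODE_RUN_SCRIPTS=build BP_WEB_SERVER_ROOT=build
```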
-## Configure an auto-generated server configuration file
+## Configure an autogenerated server configuration file
-You can use environment variables to modify the auto-generated server configuration file. The following table shows supported environment variables.
+You can use environment variables to modify the autogenerated server configuration file. The following table shows supported environment variables.
| Environment Variable | Supported Value | Description | |--|-||
-| `BP_WEB_SERVER` | *nginx* or *httpd* | Specifies the web server type, either *nginx* for Nginx or *httpd* for Apache HTTP server. Required when using the auto-generated server configuration file. |
+| `BP_WEB_SERVER` | *nginx* or *httpd* | Specifies the web server type, either *nginx* for Nginx or *httpd* for Apache HTTP server. Required when using the autogenerated server configuration file. |
| `BP_WEB_SERVER_ROOT` | An absolute file path or a file path relative to */workspace*. | Sets the root directory for the static files. The default is `public`. | | `BP_WEB_SERVER_ENABLE_PUSH_STATE` | *true* or *false* | Enables push state routing for your application. Regardless of the route that is requested, *https://docsupdatetracker.net/index.html* is always served. Useful for single-page web applications. | | `BP_WEB_SERVER_FORCE_HTTPS` | *true* or *false* | Enforces HTTPS for server connections by redirecting all requests to use the HTTPS protocol. |
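As an illustration only, a single-page application could combine several of these variables in one deployment. The following sketch assumes an NGINX web server; the app and service instance names are placeholders.

```azurecli
az spring app deploy \
    --resource-group <resource-group-name> \
    --service <Azure-Spring-Apps-service-instance-name> \
    --name <app-name> \
    --source-path . \
    --build-env BP_WEB_SERVER=nginx BP_WEB_SERVER_ENABLE_PUSH_STATE=true BP_WEB_SERVER_FORCE_HTTPS=true
```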
spring-apps How To Manage User Assigned Managed Identities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-manage-user-assigned-managed-identities.md
Managed identities for Azure resources provide an automatically managed identity
::: zone pivot="sc-enterprise" - An already provisioned Azure Spring Apps Enterprise tier instance. For more information, see [Quickstart: Build and deploy apps to Azure Spring Apps using the Enterprise tier](quickstart-deploy-apps-enterprise.md).-- [Azure CLI version 2.30.0 or higher](/cli/azure/install-azure-cli).
+- [Azure CLI version 2.45.0 or higher](/cli/azure/install-azure-cli).
- [!INCLUDE [install-app-user-identity-extension](includes/install-app-user-identity-extension.md)] - At least one already provisioned user-assigned managed identity. For more information, see [Manage user-assigned managed identities](../active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md).
Managed identities for Azure resources provide an automatically managed identity
::: zone pivot="sc-standard" - An already provisioned Azure Spring Apps instance. For more information, see [Quickstart: Deploy your first application to Azure Spring Apps](./quickstart.md).-- [Azure CLI version 2.30.0 or higher](/cli/azure/install-azure-cli).
+- [Azure CLI version 2.45.0 or higher](/cli/azure/install-azure-cli).
- [!INCLUDE [install-app-user-identity-extension](includes/install-app-user-identity-extension.md)] - At least one already provisioned user-assigned managed identity. For more information, see [Manage user-assigned managed identities](../active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md).
spring-apps How To Maven Deploy Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-maven-deploy-apps.md
This article shows you how to use the Azure Spring Apps Maven plugin to configur
* An already provisioned Azure Spring Apps instance. * [JDK 8 or JDK 11](/azure/developer/java/fundamentals/java-jdk-install) * [Apache Maven](https://maven.apache.org/download.cgi)
-* [Azure CLI version 2.0.67 or higher](/cli/azure/install-azure-cli) with the Azure Spring Apps extension. You can install the extension by using the following command: `az extension add --name spring`
+* [Azure CLI version 2.45.0 or higher](/cli/azure/install-azure-cli) with the Azure Spring Apps extension. You can install the extension by using the following command: `az extension add --name spring`
## Generate a Spring project
spring-apps How To Migrate Standard Tier To Enterprise Tier https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-migrate-standard-tier-to-enterprise-tier.md
It takes about 5 minutes to finish the resource provisioning.
1. Update Azure CLI with the Azure Spring Apps extension by using the following command: ```azurecli
- az extension update --name spring-cloud
+ az extension add --upgrade --name spring
``` 1. Sign in to the Azure CLI and choose your active subscription by using the following command:
It takes about 5 minutes to finish the resource provisioning.
```azurecli az group create --name <resource-group-name>
- az spring-cloud create \
+ az spring create \
--resource-group <resource-group-name> \ --name <service-instance-name> \ --sku enterprise
It takes about 5 minutes to finish the resource provisioning.
1. Set your default resource group name and Spring Cloud service name using the following command: ```azurecli
- az config set defaults.group=<resource-group-name> defaults.spring-cloud=<service-instance-name>
+ az config set defaults.group=<resource-group-name> defaults.spring=<service-instance-name>
```
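With these defaults set, later `az spring` commands can omit the `--resource-group` and `--service` parameters. For example, the following optional check (not part of the original migration steps) lists the apps in the default service instance:

```azurecli
az spring app list --output table
```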
The app creation steps are the same as Standard Tier.
```azurecli az account set --subscription=<your-subscription-id>
- az configure --defaults group=<your-resource-group-name> spring-cloud=<your-service-name>
+ az configure --defaults group=<your-resource-group-name> spring=<your-service-name>
``` 1. To create the two core applications for PetClinic, `api-gateway` and `customers-service`, use the following commands: ```azurecli
- az spring-cloud app create --name api-gateway --instance-count 1 --memory 2Gi --assign-endpoint
- az spring-cloud app create --name customers-service --instance-count 1 --memory 2Gi
+ az spring app create --name api-gateway --instance-count 1 --memory 2Gi --assign-endpoint
+ az spring app create --name customers-service --instance-count 1 --memory 2Gi
``` ## Use Application Configuration Service for external configuration
Follow these steps to use Application Configuration Service for Tanzu as a centr
To set the default repository, use the following command: ```azurecli
-az spring-cloud application-configuration-service git repo add \
+az spring application-configuration-service git repo add \
--name default \ --patterns api-gateway,customers-service \ --uri https://github.com/Azure-Samples/spring-petclinic-microservices-config.git \
The list under **App name** will show the apps bound with Application Configurat
To bind apps to Application Configuration Service for VMware Tanzu® and VMware Tanzu® Service Registry, use the following commands: ```azurecli
-az spring-cloud application-configuration-service bind --app api-gateway
-az spring-cloud application-configuration-service bind --app customers-service
+az spring application-configuration-service bind --app api-gateway
+az spring application-configuration-service bind --app customers-service
```
The list under **App name** shows the apps bound with Tanzu Service Registry.
To bind apps to Application Configuration Service for VMware Tanzu® and VMware Tanzu® Service Registry, use the following commands: ```azurecli
-az spring-cloud service-registry bind --app api-gateway
-az spring-cloud service-registry bind --app customers-service
+az spring service-registry bind --app api-gateway
+az spring service-registry bind --app customers-service
```
To build locally, use the following steps:
1. Deploy the JAR files built in the previous step using the following commands: ```azurecli
- az spring-cloud app deploy \
+ az spring app deploy \
--name api-gateway \ --artifact-path spring-petclinic-api-gateway/target/spring-petclinic-api-gateway-2.3.6.jar \ --config-file-patterns api-gateway
- az spring-cloud app deploy \
+ az spring app deploy \
--name customers-service \ --artifact-path spring-petclinic-customers-service/target/spring-petclinic-customers-service-2.3.6.jar \ --config-file-patterns customers-service
To build locally, use the following steps:
1. Query the application status after deployment by using the following command: ```azurecli
- az spring-cloud app list --output table
+ az spring app list --output table
``` This command produces output similar to the following example:
To check or update the current settings in Application Insights, use the followi
To create an Application Insights buildpack binding, use the following command: ```azurecli
-az spring-cloud build-service builder buildpack-binding create \
+az spring build-service builder buildpack-binding create \
--resource-group <your-resource-group-name> \ --service <your-service-instance-name> \ --name <your-binding-name> \
az spring-cloud build-service builder buildpack-binding create \
To list all buildpack bindings, and find Application Insights bindings for the type `ApplicationInsights`, use the following command: ```azurecli
-az spring-cloud build-service builder buildpack-binding list \
+az spring build-service builder buildpack-binding list \
--resource-group <your-resource-group-name> \ --service <your-service-resource-name> \ --builder-name <your-builder-name>
az spring-cloud build-service builder buildpack-binding list \
To replace an Application Insights buildpack binding, use the following command: ```azurecli
-az spring-cloud build-service builder buildpack-binding set \
+az spring build-service builder buildpack-binding set \
--resource-group <your-resource-group-name> \ --service <your-service-instance-name> \ --name <your-binding-name> \
az spring-cloud build-service builder buildpack-binding set \
To get an Application Insights buildpack binding, use the following command: ```azurecli
-az spring-cloud build-service builder buildpack-binding show \
+az spring build-service builder buildpack-binding show \
--resource-group <your-resource-group-name> \ --service <your-service-instance-name> \ --name <your-binding-name> \
az spring-cloud build-service builder buildpack-binding show \
To delete an Application Insights buildpack binding, use the following command: ```azurecli
-az spring-cloud build-service builder buildpack-binding delete \
+az spring build-service builder buildpack-binding delete \
--resource-group <your-resource-group-name> \ --service <your-service-instance-name> \ --name <your-binding-name> \
spring-apps Quickstart Automate Deployments Github Actions Enterprise https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/quickstart-automate-deployments-github-actions-enterprise.md
Last updated 05/31/2022-+ # Quickstart: Automate deployments
This quickstart shows you how to automate deployments to Azure Spring Apps Enter
- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). - Understand and fulfill the [Requirements](how-to-enterprise-marketplace-offer.md#requirements) section of [Enterprise Tier in Azure Marketplace](how-to-enterprise-marketplace-offer.md).-- [The Azure CLI version 2.0.67 or higher](/cli/azure/install-azure-cli).
+- [The Azure CLI version 2.45.0 or higher](/cli/azure/install-azure-cli).
- [Git](https://git-scm.com/). - [jq](https://stedolan.github.io/jq/download/) - [!INCLUDE [install-enterprise-extension](includes/install-enterprise-extension.md)]
spring-apps Quickstart Configure Single Sign On Enterprise https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/quickstart-configure-single-sign-on-enterprise.md
This quickstart shows you how to configure single sign-on for applications runni
- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). - A license for Azure Spring Apps Enterprise tier. For more information, see [Enterprise tier in Azure Marketplace](how-to-enterprise-marketplace-offer.md).-- [The Azure CLI version 2.37.0 or higher](/cli/azure/install-azure-cli).
+- [The Azure CLI version 2.45.0 or higher](/cli/azure/install-azure-cli).
- [Git](https://git-scm.com/). - [jq](https://stedolan.github.io/jq/download/) - [!INCLUDE [install-enterprise-extension](includes/install-enterprise-extension.md)]
spring-apps Quickstart Deploy Apps Enterprise https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/quickstart-deploy-apps-enterprise.md
This quickstart shows you how to build and deploy applications to Azure Spring A
- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). - Understand and fulfill the [Requirements](how-to-enterprise-marketplace-offer.md#requirements) section of [Enterprise Tier in Azure Marketplace](how-to-enterprise-marketplace-offer.md).-- [The Azure CLI version 2.0.67 or higher](/cli/azure/install-azure-cli).
+- [The Azure CLI version 2.45.0 or higher](/cli/azure/install-azure-cli).
- [Git](https://git-scm.com/).-- [jq](https://stedolan.github.io/jq/download/) - [!INCLUDE [install-enterprise-extension](includes/install-enterprise-extension.md)] ## Download the sample app
Use the following steps to provision an Azure Spring Apps service instance.
1. Use the following commands to retrieve the Resource ID for your Log Analytics Workspace and Azure Spring Apps service instance:
- ```bash
+ ```azurecli
LOG_ANALYTICS_RESOURCE_ID=$(az monitor log-analytics workspace show \ --resource-group <resource-group-name> \
- --workspace-name <workspace-name> | jq -r '.id')
+ --workspace-name <workspace-name> \
+ --query id \
+ --output tsv)
- SPRING_CLOUD_RESOURCE_ID=$(az spring show \
+ AZURE_SPRING_APPS_RESOURCE_ID=$(az spring show \
--resource-group <resource-group-name> \
- --name <Azure-Spring-Apps-service-instance-name> | jq -r '.id')
+ --name <Azure-Spring-Apps-service-instance-name> \
+ --query id \
+ --output tsv)
``` 1. Use the following command to configure diagnostic settings for the Azure Spring Apps Service:
Use the following steps to provision an Azure Spring Apps service instance.
```azurecli az monitor diagnostic-settings create \ --name "send-logs-and-metrics-to-log-analytics" \
- --resource ${SPRING_CLOUD_RESOURCE_ID} \
+ --resource ${AZURE_SPRING_APPS_RESOURCE_ID} \
--workspace ${LOG_ANALYTICS_RESOURCE_ID} \ --logs '[ {
Use the following steps to configure Spring Cloud Gateway and configure routes t
```azurecli GATEWAY_URL=$(az spring gateway show \ --resource-group <resource-group-name> \
- --service <Azure-Spring-Apps-service-instance-name> | jq -r '.properties.url')
+ --service <Azure-Spring-Apps-service-instance-name> \
+ --query properties.url \
+ --output tsv)
az spring gateway update \ --resource-group <resource-group-name> \
Use the following steps to configure Spring Cloud Gateway and configure routes t
```azurecli GATEWAY_URL=$(az spring gateway show \ --resource-group <resource-group-name> \
- --service <Azure-Spring-Apps-service-instance-name> | jq -r '.properties.url')
+ --service <Azure-Spring-Apps-service-instance-name> \
+ --query properties.url \
+ --output tsv)
echo "https://${GATEWAY_URL}" ```
Use the following steps to configure API Portal.
```azurecli PORTAL_URL=$(az spring api-portal show \ --resource-group <resource-group-name> \
- --service <Azure-Spring-Apps-service-instance-name> | jq -r '.properties.url')
+ --service <Azure-Spring-Apps-service-instance-name> \
+ --query properties.url \
+ --output tsv)
echo "https://${PORTAL_URL}" ```
spring-apps Quickstart Deploy Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/quickstart-deploy-apps.md
This article explains how to build and deploy Spring applications to Azure Sprin
- [Set up Azure Spring Apps Config Server](./quickstart-setup-config-server.md). - [JDK 17](/azure/developer/java/fundamentals/java-jdk-install) - An Azure subscription. If you don't have a subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.-- Optionally, [Azure CLI version 2.0.67 or higher](/cli/azure/install-azure-cli). Install the Azure Spring Apps extension with the following command: `az extension add --name spring`
+- Optionally, [Azure CLI version 2.45.0 or higher](/cli/azure/install-azure-cli). Install the Azure Spring Apps extension with the following command: `az extension add --name spring`
- Optionally, [the Azure Toolkit for IntelliJ](https://plugins.jetbrains.com/plugin/8053-azure-toolkit-for-intellij/). #### [CLI](#tab/Azure-CLI)
spring-apps Quickstart Deploy Infrastructure Vnet Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/quickstart-deploy-infrastructure-vnet-bicep.md
description: This quickstart shows you how to use Bicep to deploy an Azure Sprin
-+ Last updated 05/31/2022
spring-apps Quickstart Deploy Infrastructure Vnet Terraform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/quickstart-deploy-infrastructure-vnet-terraform.md
description: This quickstart shows you how to use Terraform to deploy an Azure S
-+ Last updated 05/31/2022
spring-apps Quickstart Deploy Infrastructure Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/quickstart-deploy-infrastructure-vnet.md
-+ Last updated 05/31/2022
spring-apps Quickstart Integrate Azure Database And Redis Enterprise https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/quickstart-integrate-azure-database-and-redis-enterprise.md
This article uses these services for demonstration purposes. You can connect you
- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). - Understand and fulfill the [Requirements](how-to-enterprise-marketplace-offer.md#requirements) section of [Enterprise Tier in Azure Marketplace](how-to-enterprise-marketplace-offer.md).-- [The Azure CLI version 2.0.67 or higher](/cli/azure/install-azure-cli).
+- [The Azure CLI version 2.45.0 or higher](/cli/azure/install-azure-cli).
- [Git](https://git-scm.com/). - [jq](https://stedolan.github.io/jq/download/) - [!INCLUDE [install-enterprise-extension](includes/install-enterprise-extension.md)]
spring-apps Quickstart Key Vault Enterprise https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/quickstart-key-vault-enterprise.md
Every application has properties that connect it to its environment and supporti
- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). - Understand and fulfill the [Requirements](how-to-enterprise-marketplace-offer.md#requirements) section of [Enterprise Tier in Azure Marketplace](how-to-enterprise-marketplace-offer.md).-- [The Azure CLI version 2.0.67 or higher](/cli/azure/install-azure-cli).
+- [The Azure CLI version 2.45.0 or higher](/cli/azure/install-azure-cli).
- [Git](https://git-scm.com/). - [jq](https://stedolan.github.io/jq/download/) - [!INCLUDE [install-enterprise-extension](includes/install-enterprise-extension.md)]
spring-apps Quickstart Monitor End To End Enterprise https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/quickstart-monitor-end-to-end-enterprise.md
This quickstart shows you how monitor apps running Azure Spring Apps Enterprise
- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). - Understand and fulfill the [Requirements](how-to-enterprise-marketplace-offer.md#requirements) section of [Enterprise Tier in Azure Marketplace](how-to-enterprise-marketplace-offer.md).-- [The Azure CLI version 2.0.67 or higher](/cli/azure/install-azure-cli).
+- [The Azure CLI version 2.45.0 or higher](/cli/azure/install-azure-cli).
- [Git](https://git-scm.com/). - [jq](https://stedolan.github.io/jq/download/) - [!INCLUDE [install-enterprise-extension](includes/install-enterprise-extension.md)]
spring-apps Quickstart Provision Service Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/quickstart-provision-service-instance.md
In this quickstart, you use the Azure CLI to provision an instance of the Azure
- An Azure subscription. If you don't have a subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. - [.NET Core 3.1 SDK](https://dotnet.microsoft.com/download/dotnet-core/3.1). The Azure Spring Apps service supports .NET Core 3.1 and later versions.-- [Azure CLI version 2.0.67 or higher](/cli/azure/install-azure-cli).
+- [Azure CLI version 2.45.0 or higher](/cli/azure/install-azure-cli).
- [Git](https://git-scm.com/). ## Install Azure CLI extension
-Verify that your Azure CLI version is 2.0.67 or later:
+Verify that your Azure CLI version is 2.45.0 or later:
```azurecli az --version
az --version
Install the Azure Spring Apps extension for the Azure CLI using the following command: ```azurecli
-az extension add --name spring
+az extension add --upgrade --name spring
``` ## Sign in to Azure
az extension add --name spring
``` ```azurecli
- az config set defaults.spring-cloud=<service instance name>
+ az config set defaults.spring=<service instance name>
``` ::: zone-end
The following procedure uses the Azure CLI extension to provision an instance of
1. Set your default resource group name and Spring Cloud service name using the following command: ```azurecli
- az config set defaults.group=<resource group name> defaults.spring-cloud=<service name>
+ az config set defaults.group=<resource group name> defaults.spring=<service name>
```
spring-apps Quickstart Provision Standard Consumption App Environment With Virtual Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/quickstart-provision-standard-consumption-app-environment-with-virtual-network.md
You can also deploy your Azure Container Apps environment to an existing virtual
## Prerequisites - An Azure subscription. If you don't have a subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.-- (Optional) [Azure CLI](/cli/azure/install-azure-cli) version 2.28.0 or higher.
+- (Optional) [Azure CLI](/cli/azure/install-azure-cli) version 2.45.0 or higher.
## Create an Azure Spring Apps instance in an Azure Container Apps environment
spring-apps Quickstart Provision Standard Consumption Service Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/quickstart-provision-standard-consumption-service-instance.md
This article describes how to create a Standard consumption plan in Azure Spring
## Prerequisites - An Azure subscription. If you don't have a subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.-- (Optional) [Azure CLI](/cli/azure/install-azure-cli) version 2.28.0 or higher.
+- (Optional) [Azure CLI](/cli/azure/install-azure-cli) version 2.45.0 or higher.
## Provision a Standard consumption plan instance
spring-apps Quickstart Set Request Rate Limits Enterprise https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/quickstart-set-request-rate-limits-enterprise.md
Rate limiting enables you to avoid problems that arise with spikes in traffic. W
- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). - Understand and fulfill the [Requirements](how-to-enterprise-marketplace-offer.md#requirements) section of [Enterprise Tier in Azure Marketplace](how-to-enterprise-marketplace-offer.md).-- [The Azure CLI version 2.0.67 or higher](/cli/azure/install-azure-cli).
+- [The Azure CLI version 2.45.0 or higher](/cli/azure/install-azure-cli).
- [Git](https://git-scm.com/). - [jq](https://stedolan.github.io/jq/download/) - [!INCLUDE [install-enterprise-extension](includes/install-enterprise-extension.md)]
spring-apps Quickstart Setup Config Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/quickstart-setup-config-server.md
This command tells Config Server to find the configuration data in the [steeltoe
- [JDK 17](/azure/developer/java/fundamentals/java-jdk-install) - An Azure subscription. If you don't have a subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.-- Optionally, [Azure CLI version 2.44.0 or higher](/cli/azure/install-azure-cli). Install the Azure Spring Apps extension with the following command: `az extension add --name spring`
+- Optionally, [Azure CLI version 2.45.0 or higher](/cli/azure/install-azure-cli). Install the Azure Spring Apps extension with the following command: `az extension add --upgrade --name spring`
- Optionally, [the Azure Toolkit for IntelliJ](https://plugins.jetbrains.com/plugin/8053-azure-toolkit-for-intellij/). ## Config Server procedures
spring-apps Tutorial Managed Identities Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/tutorial-managed-identities-functions.md
Last updated 07/10/2020
> [!NOTE] > Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams.
-**This article applies to:** ✔️ Basic/Standard tier ✔️ Enterprise tier
+**This article applies to:** ✔️ Basic/Standard ✔️ Enterprise
-This article shows you how to create a managed identity for an Azure Spring Apps app and use it to invoke Http triggered Functions.
+This article shows you how to create a managed identity for an Azure Spring Apps app and use it to invoke HTTP triggered Functions.
-Both Azure Functions and App Services have built in support for Azure Active Directory (Azure AD) authentication. By leveraging this built-in authentication capability along with Managed Identities for Azure Spring Apps, we can invoke RESTful services using modern OAuth semantics. This method doesn't require storing secrets in code and provides more granular controls for controlling access to external resources.
+Both Azure Functions and App Services have built-in support for Azure Active Directory (Azure AD) authentication. By using this built-in authentication capability along with Managed Identities for Azure Spring Apps, we can invoke RESTful services using modern OAuth semantics. This method doesn't require storing secrets in code and provides more granular control over access to external resources.
## Prerequisites * [Sign up for an Azure subscription](https://azure.microsoft.com/free/)
-* [Install the Azure CLI version 2.0.67 or higher](/cli/azure/install-azure-cli)
-* [Install Maven 3.0 or above](https://maven.apache.org/download.cgi)
+* [Install the Azure CLI version 2.45.0 or higher](/cli/azure/install-azure-cli)
+* [Install Maven 3.0 or higher](https://maven.apache.org/download.cgi)
* [Install the Azure Functions Core Tools version 3.0.2009 or higher](../azure-functions/functions-run-local.md#install-the-azure-functions-core-tools) ## Create a resource group
az group create --name myResourceGroup --location eastus
To create a Function app, you must first create a backing storage account by using the [az storage account create](/cli/azure/storage/account#az-storage-account-create) command:
-> [!Important]
+> [!IMPORTANT]
> Each Function app and Storage Account must have a unique name. Replace *\<your-functionapp-name>* with the name of your Function app and *\<your-storageaccount-name>* with the name of your Storage Account in the following examples. ```azurecli
-az storage account create --name <your-storageaccount-name> --resource-group myResourceGroup --location eastus --sku Standard_LRS
+az storage account create \
+ --resource-group myResourceGroup \
+ --name <your-storageaccount-name> \
+ --location eastus \
+ --sku Standard_LRS
```
-Once the Storage Account has been created, you can create the Function app.
+After the Storage Account is created, you can create the Function app.
```azurecli
-az functionapp create --name <your-functionapp-name> --resource-group myResourceGroup --consumption-plan-location eastus --os-type windows --runtime node --storage-account <your-storageaccount-name> --functions-version 3
+az functionapp create \
+ --resource-group myResourceGroup \
+ --name <your-functionapp-name> \
+ --consumption-plan-location eastus \
+ --os-type windows \
+ --runtime node \
+ --storage-account <your-storageaccount-name> \
+ --functions-version 3
```
-Make a note of the returned **hostNames**, which will be in the format *https://\<your-functionapp-name>.azurewebsites.net*. It will be used in a following step.
+Make a note of the returned `hostNames` value, which is in the format *https://\<your-functionapp-name>.azurewebsites.net*. You use this value in a following step.
## Enable Azure Active Directory Authentication
-Access the newly created Function app from the [Azure portal](https://portal.azure.com) and select "Authentication / Authorization" from the settings menu. Enable App Service Authentication and set the "Action to take when request is not authenticated" to "Log in with Azure Active Directory". This setting will ensure that all unauthenticated requests are denied (401 response).
+Access the newly created Function app from the [Azure portal](https://portal.azure.com) and select **Authentication / Authorization** from the settings menu. Enable App Service Authentication and set the **Action to take when request is not authenticated** to **Log in with Azure Active Directory**. This setting ensures that all unauthenticated requests are denied (401 response).
-![Authentication settings showing Azure Active Directory as the default provider](media/spring-cloud-tutorial-managed-identities-functions/function-auth-config-1.jpg)
-Under Authentication Providers, select Azure Active Directory to configure the application registration. Selecting Express Management Mode will automatically create an application registration in your Azure AD tenant with the correct configuration.
+Under **Authentication Providers**, select **Azure Active Directory** to configure the application registration. Selecting **Express Management Mode** automatically creates an application registration in your Azure AD tenant with the correct configuration.
-![Azure Active Directory provider set to Express Management Mode](media/spring-cloud-tutorial-managed-identities-functions/function-auth-config-2.jpg)
-Once you save the settings, the function app will restart and all subsequent requests will be prompted to log in via Azure AD. You can test that unauthenticated requests are now being rejected by navigating to the function apps root URL (returned in the **hostNames** output in the step above). You should be redirected to your organizations Azure AD login screen.
+After you save the settings, the function app restarts, and all subsequent requests prompt a sign-in through Azure AD. You can test that unauthenticated requests are now being rejected by navigating to the function app's root URL (returned in the `hostNames` output in a previous step). You should be redirected to your organization's Azure AD sign-in screen.
-## Create an Http Triggered Function
+## Create an HTTP Triggered Function
-In an empty local directory, create a new function app and add an Http triggered function.
+In an empty local directory, create a new function app and add an HTTP triggered function.
```console func init --worker-runtime node func new --template HttpTrigger --name HttpTrigger ```
-By default Functions use key-based authentication to secure Http endpoints. Since we'll be enabling Azure AD authentication to secure access to the Functions, we want to [set the function auth level to anonymous](../azure-functions/functions-bindings-http-webhook-trigger.md#secure-an-http-endpoint-in-production) in the *function.json* file.
+By default, Functions use key-based authentication to secure HTTP endpoints. Since we're enabling Azure AD authentication to secure access to the Functions, we want to [set the function auth level to anonymous](../azure-functions/functions-bindings-http-webhook-trigger.md#secure-an-http-endpoint-in-production) in the *function.json* file.
```json {
By default Functions use key-based authentication to secure Http endpoints. Sinc
} ```
-The app can now be published to the [Function app](#create-a-function-app) instance created in the previous step.
+You can now publish the app to the [Function app](#create-a-function-app) instance created in the previous step.
```console func azure functionapp publish <your-functionapp-name>
Functions in <your-functionapp-name>:
After installing the spring extension, create an Azure Spring Apps instance with the Azure CLI command `az spring create`. ```azurecli
-az extension add --name spring
-az spring create --name mymsispringcloud --resource-group myResourceGroup --location eastus
+az extension add --upgrade --name spring
+az spring create \
+ --resource-group myResourceGroup \
+ --name mymsispringcloud \
+ --location eastus
``` The following example creates an app named `msiapp` with a system-assigned managed identity, as requested by the `--assign-identity` parameter. ```azurecli
-az spring app create --name "msiapp" --service "mymsispringcloud" --resource-group "myResourceGroup" --assign-endpoint true --assign-identity
+az spring app create \
+ --resource-group "myResourceGroup" \
+ --service "mymsispringcloud" \
+ --name "msiapp" \
+ --assign-endpoint true \
+ --assign-identity
``` ## Build sample Spring Boot app to invoke the Function
-This sample will invoke the Http triggered function by first requesting an access token from the [MSI endpoint](../active-directory/managed-identities-azure-resources/how-to-use-vm-token.md#get-a-token-using-http) and using that token to authenticate the Function http request.
+This sample invokes the HTTP triggered function by first requesting an access token from the [MSI endpoint](../active-directory/managed-identities-azure-resources/how-to-use-vm-token.md#get-a-token-using-http) and then using that token to authenticate the Function HTTP request.
1. Clone the sample project.
- ```bash
- git clone https://github.com/Azure-Samples/Azure-Spring-Cloud-Samples.git
- ```
+ ```bash
+ git clone https://github.com/Azure-Samples/Azure-Spring-Cloud-Samples.git
+ ```
-2. Specify your function URI and the trigger name in your app properties.
+1. Specify your function URI and the trigger name in your app properties.
- ```bash
- cd Azure-Spring-Cloud-Samples/managed-identity-function
- vim src/main/resources/application.properties
- ```
+ ```bash
+ cd Azure-Spring-Cloud-Samples/managed-identity-function
+ vim src/main/resources/application.properties
+ ```
- To use managed identity for Azure Spring Apps apps, add properties with the following content to *src/main/resources/application.properties*.
+ To use managed identity for Azure Spring Apps apps, add properties with the following content to *src/main/resources/application.properties*.
- ```properties
- azure.function.uri=https://<your-functionapp-name>.azurewebsites.net
- azure.function.triggerPath=httptrigger
- ```
+ ```properties
+ azure.function.uri=https://<your-functionapp-name>.azurewebsites.net
+ azure.function.triggerPath=httptrigger
+ ```
-3. Package your sample app.
+1. Package your sample app.
- ```bash
- mvn clean package
- ```
+ ```bash
+ mvn clean package
+ ```
-4. Now deploy the app to Azure with the Azure CLI command `az spring app deploy`.
+1. Now deploy the app to Azure with the Azure CLI command `az spring app deploy`.
- ```azurecli
- az spring app deploy --name "msiapp" --service "mymsispringcloud" --resource-group "myResourceGroup" --jar-path target/sc-managed-identity-function-sample-0.1.0.jar
- ```
+ ```azurecli
+ az spring app deploy \
+ --resource-group "myResourceGroup" \
+ --service "mymsispringcloud" \
+ --name "msiapp" \
+ --jar-path target/asc-managed-identity-function-sample-0.1.0.jar
+ ```
-5. Access the public endpoint or test endpoint to test your app.
+1. Access the public endpoint or test endpoint to test your app.
- ```bash
- curl https://mymsispringcloud-msiapp.azuremicroservices.io/func/springcloud
- ```
+ ```bash
+ curl https://mymsispringcloud-msiapp.azuremicroservices.io/func/springcloud
+ ```
- You'll see the following message returned in the response body.
+ You see the following message returned in the response body.
- ```output
- Function Response: Hello, springcloud. This HTTP triggered function executed successfully.
- ```
-
- You can try passing different values to the function by changing the path parameter.
+ ```output
+ Function Response: Hello, springcloud. This HTTP triggered function executed successfully.
+ ```
## Next steps
spring-apps Tutorial Managed Identities Key Vault https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/tutorial-managed-identities-key-vault.md
**This article applies to:** ✔️ Java ❌ C#
-**This article applies to:** ✔️ Basic/Standard tier ✔️ Enterprise tier
+**This article applies to:** ✔️ Basic/Standard ✔️ Enterprise
This article shows you how to create a managed identity for an Azure Spring Apps app and use it to access Azure Key Vault.
The following video describes how to manage secrets using Azure Key Vault.
## Prerequisites * [Sign up for an Azure subscription](https://azure.microsoft.com/free/)
-* [Install the Azure CLI version 2.0.67 or higher](/cli/azure/install-azure-cli)
-* [Install Maven 3.0 or above](https://maven.apache.org/download.cgi)
+* [Install the Azure CLI version 2.45.0 or higher](/cli/azure/install-azure-cli)
+* [Install Maven 3.0 or higher](https://maven.apache.org/download.cgi)
## Create a resource group A resource group is a logical container into which Azure resources are deployed and managed. Create a resource group to contain both the Key Vault and Spring Cloud using the command [az group create](/cli/azure/group#az-group-create): ```azurecli
-az group create --name "myResourceGroup" -l "EastUS"
+az group create --name "myResourceGroup" --location "EastUS"
``` ## Set up your Key Vault
az keyvault create \
--name "<your-keyvault-name>" ```
-Make a note of the returned `vaultUri`, which will be in the format `https://<your-keyvault-name>.vault.azure.net`. It will be used in the following step.
+Make a note of the returned `vaultUri`, which is in the format `https://<your-keyvault-name>.vault.azure.net`. You use this value in the following step.
You can now place a secret in your Key Vault with the command [az keyvault secret set](/cli/azure/keyvault/secret#az-keyvault-secret-set):
az keyvault secret set \
After installing the corresponding extension, create an Azure Spring Apps instance with the Azure CLI command `az spring create`. ```azurecli
-az extension add --name spring
+az extension add --upgrade --name spring
az spring create \ --resource-group <your-resource-group-name> \ --name <your-Azure-Spring-Apps-instance-name>
The following example creates an app named `springapp` with a system-assigned ma
```azurecli az spring app create \ --resource-group <your-resource-group-name> \
- --name "springapp" \
--service <your-Azure-Spring-Apps-instance-name> \
+ --name "springapp" \
--assign-endpoint true \ --system-assigned
-export SERVICE_IDENTITY=$(az spring app show --name "springapp" -s "myspringcloud" -g "myResourceGroup" | jq -r '.identity.principalId')
+export SERVICE_IDENTITY=$(az spring app show \
+ --resource-group "<your-resource-group-name>" \
+ --service "<your-Azure-Spring-Apps-instance-name>" \
+ --name "springapp" \
+ | jq -r '.identity.principalId')
``` ### [User-assigned managed identity](#tab/user-assigned-managed-identity)
-First, create a user-assigned managed identity in advance with its resource ID set to `$USER_IDENTITY_RESOURCE_ID`. Save the client ID for the property configuration below.
+First, create a user-assigned managed identity in advance with its resource ID set to `$USER_IDENTITY_RESOURCE_ID`. Save the client ID for the property configuration.
:::image type="content" source="media/tutorial-managed-identities-key-vault/app-user-managed-identity-key-vault.png" alt-text="Screenshot of Azure portal showing the Managed Identity Properties screen with 'Resource ID', 'Principle ID' and 'Client ID' highlighted." lightbox="media/tutorial-managed-identities-key-vault/app-user-managed-identity-key-vault.png":::
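If you prefer the CLI to the portal for this lookup, one possible approach is `az identity show`; the identity name and resource group in the following sketch are placeholders.

```azurecli
az identity show \
    --resource-group <your-resource-group-name> \
    --name <your-user-assigned-identity-name> \
    --query "{principalId:principalId, clientId:clientId, resourceId:id}" \
    --output json
```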
-```azurecli
-export SERVICE_IDENTITY={principal ID of user-assigned managed identity}
-export USER_IDENTITY_RESOURCE_ID={resource ID of user-assigned managed identity}
+```bash
+export SERVICE_IDENTITY=<principal-ID-of-user-assigned-managed-identity>
+export USER_IDENTITY_RESOURCE_ID=<resource-ID-of-user-assigned-managed-identity>
``` The following example creates an app named `springapp` with a user-assigned managed identity, as requested by the `--user-assigned` parameter.
The following example creates an app named `springapp` with a user-assigned mana
```azurecli az spring app create \ --resource-group <your-resource-group-name> \
- --name "springapp" \
--service <your-Azure-Spring-Apps-instance-name> \
- --assign-endpoint true \
- --user-assigned $USER_IDENTITY_RESOURCE_ID
+ --name "springapp" \
+ --user-assigned $USER_IDENTITY_RESOURCE_ID \
+ --assign-endpoint true
az spring app show \ --resource-group <your-resource-group-name> \
- --name "springapp" \
- --service <your-Azure-Spring-Apps-instance-name>
+ --service <your-Azure-Spring-Apps-instance-name> \
+ --name "springapp"
```
-Make a note of the returned URL, which will be in the format `https://<your-app-name>.azuremicroservices.io`. This URL will be used in the following step.
+Make a note of the returned URL, which is in the format `https://<your-app-name>.azuremicroservices.io`. You use this value in the following step.
## Grant your app access to Key Vault
az keyvault set-policy \
## Build a sample Spring Boot app with Spring Boot starter
-This app will have access to get secrets from Azure Key Vault. Use the Azure Key Vault Secrets Spring boot starter. Azure Key Vault is added as an instance of Spring **PropertySource**. Secrets stored in Azure Key Vault can be conveniently accessed and used like any externalized configuration property, such as properties in files.
+This app can retrieve secrets from Azure Key Vault. Use the Azure Key Vault Secrets Spring Boot starter. Azure Key Vault is added as an instance of Spring **PropertySource**. Secrets stored in Azure Key Vault can be conveniently accessed and used like any externalized configuration property, such as properties in files.
1. Use the following command to generate a sample project from `start.spring.io` with Azure Key Vault Spring Starter.
- ```azurecli
+ ```bash
curl https://start.spring.io/starter.tgz -d dependencies=web,azure-keyvault -d baseDir=springapp -d bootVersion=2.7.2 -d javaVersion=1.8 | tar -xzvf - ``` 1. Specify your Key Vault in your app.
- ```azurecli
+ ```bash
cd springapp vim src/main/resources/application.properties ```
spring.cloud.azure.keyvault.secret.property-sources[0].credential.client-id={Cli
> [!NOTE]
- > You must add the key vault URL in the *application.properties* file as shown above. Otherwise, the key vault URL may not be captured during runtime.
+ > You must add the key vault URL in the *application.properties* file as shown previously. Otherwise, the key vault URL may not be captured during runtime.
1. Add the following code example to *src/main/java/com/example/demo/DemoApplication.java*. This code retrieves the connection string from the key vault.
spring.cloud.azure.keyvault.secret.property-sources[0].credential.client-id={Cli
} ```
- If you open the *pom.xml* file, you'll see the dependency of `spring-cloud-azure-starter-keyvault`.
+ If you open the *pom.xml* file, you can see the `spring-cloud-azure-starter-keyvault` dependency, as shown in the following example:
```xml <dependency>
spring.cloud.azure.keyvault.secret.property-sources[0].credential.client-id={Cli
1. Use the following command to package your sample app.
- ```azurecli
+ ```bash
./mvnw clean package -DskipTests ```
spring.cloud.azure.keyvault.secret.property-sources[0].credential.client-id={Cli
```azurecli az spring app deploy \ --resource-group <your-resource-group-name> \
- --name "springapp" \
--service <your-Azure-Spring-Apps-instance-name> \
+ --name "springapp" \
--artifact-path target/demo-0.0.1-SNAPSHOT.jar ``` 1. To test your app, access the public endpoint or test endpoint by using the following command:
- ```azurecli
+ ```bash
curl https://myspringcloud-springapp.azuremicroservices.io/get ```
- You'll see the message `Successfully got the value of secret connectionString from Key Vault https://<your-keyvault-name>.vault.azure.net/: jdbc:sqlserver://SERVER.database.windows.net:1433;database=DATABASE;`.
+   The response shows the message `Successfully got the value of secret connectionString from Key Vault https://<your-keyvault-name>.vault.azure.net/: jdbc:sqlserver://SERVER.database.windows.net:1433;database=DATABASE;`.
## Next steps
static-web-apps Database Azure Cosmos Db https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/database-azure-cosmos-db.md
Next, create the configuration file that your static web app uses to interface w
The `init` command creates the *staticwebapp.database.config.json* file in the *swa-db-connections* folder.
-1. Paste in this sample schema into the *staticwebapp.schema.config.json* file you generated.
+1. Paste in this sample schema into the *staticwebapp.database.schema.gql* file you generated.
- Since Cosmos DB for NoSQL is a schema agnostic database, Azure Static Web Apps database connections can't extract the schema of your database. The *staticwebapp.schema.config.json* file allows you to specify the schema of your Cosmos DB for NoSQL database for Static Web Apps.
+ Since Cosmos DB for NoSQL is a schema agnostic database, Azure Static Web Apps database connections can't extract the schema of your database. The *staticwebapp.database.schema.gql* file allows you to specify the schema of your Cosmos DB for NoSQL database for Static Web Apps.
```gql type Person @model {
static-web-apps Publish Azure Resource Manager https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/publish-azure-resource-manager.md
description: Create and deploy an ARM Template for Static Web Apps
+ Last updated 07/13/2021 - # Tutorial: Publish Azure Static Web Apps using an ARM Template
Clean up the resources you deployed by deleting the resource group.
## Next steps > [!div class="nextstepaction"]
-> [Configure your static web app](./configuration.md)
+> [Configure your static web app](./configuration.md)
storage-mover Billing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage-mover/billing.md
Last updated 03/22/2023
<!-- !########################################################
-STATUS: IN REVIEW
CONTENT: final (85/100)
storage-mover Job Definition Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage-mover/job-definition-create.md
Initial doc score: 100 (1532 words and 0 issues)
# How to define and start a migration job
-When you migrate a share to Azure, you'll need to describe the source share, the Azure target, and any migration settings you want to apply. These attributes are defined in a job definition within your storage mover resource. This article describes how to create and run such a job definition.
+When you migrate a share to Azure, you need to describe the source share, the Azure target, and any migration settings you want to apply. These attributes are defined in a job definition within your storage mover resource. This article describes how to create and run such a job definition.
## Prerequisites
There are three prerequisites to the definition the migration of your source sha
Follow the steps in the *[Create a storage mover resource](storage-mover-create.md)* article to deploy a storage mover resource to the desired region within your Azure subscription. 1. You need to deploy and register an Azure Storage Mover agent virtual machine (VM). Follow the steps in the [Azure Storage Mover agent VM deployment](agent-deploy.md) and [agent registration](agent-register.md) articles to deploy at least one agent.
-1. Finally, to define a migration, you'll need to create a job definition.
- Job definitions are organized in a migration project. You'll need at least one migration project in your storage mover resource. If you haven't already, follow the deployment steps in the [manage projects](project-manage.md) article to create a migration project.
+1. Finally, to define a migration, you need to create a job definition.
+ Job definitions are organized in a migration project. You need at least one migration project in your storage mover resource. If you haven't already, follow the deployment steps in the [manage projects](project-manage.md) article to create a migration project.
## Create and start a job definition
Refer to the [resource naming convention](../azure-resource-manager/management/r
:::image type="content" source="media/job-definition-create/endpoint-source-new-sml.png" alt-text="Screen capture of the Source tab illustrating the location of the New Source Endpoint fields." lightbox="media/job-definition-create/endpoint-source-new-lrg.png":::
- By default, migration jobs will start from the root of your share. However, if your use case involves copying data from a specific path within your source share, you can provide the path in the **Sub-path** field. Supplying this value will start the data migration from the location you've specified. If the sub path you've specified isn't found, no data will be copied.
+ <a name="sub-path"></a>
+   By default, migration jobs start from the root of your share. However, if your use case involves copying data from a specific path within your source share, you can provide the path in the **Sub-path** field. Supplying this value starts the data migration from the location you've specified. If the subpath you've specified isn't found, no data is copied.
- Prior to creating an endpoint and a job resource, it's important to verify that the path you've provided is correct and that the data is accessible. You're unable to modify endpoints or job resources after they're created. If the specified path is wrong, you'll need to delete the resources and re-create them.
+ Prior to creating an endpoint and a job resource, it's important to verify that the path you've provided is correct and that the data is accessible. You're unable to modify endpoints or job resources after they're created. If the specified path is wrong, your only option is to delete the resources and re-create them.
Values for host, share name, and subpath are concatenated to form the full migration source path. The path is displayed in the **Full path** field within the **Verify full path** section. Copy the path provided and verify that you're able to access it before committing your changes.
Refer to the [resource naming convention](../azure-resource-manager/management/r
:::image type="content" source="media/job-definition-create/endpoint-target-new-sml.png" alt-text="Screen capture of the Target tab illustrating the location of the New Target Endpoint fields." lightbox="media/job-definition-create/endpoint-target-new-lrg.png":::
- A target subpath value can be used to specify a location within the target container where your migrated data will be copied. The subpath value is relative to the container's root. Omitting the subpath value will result in the data being copied to the root, while providing a unique value will generate a new subfolder.
+   A target subpath value can be used to specify a location within the target container where your migrated data will be copied. The subpath value is relative to the container's root. Omitting the subpath value results in the data being copied to the root, while providing a unique value generates a new subfolder.
After ensuring the accuracy of your settings, select **Next** to continue.
-1. Within the **Settings** tab, take note of the settings associated with the **Copy mode** and **Migration outcomes**. The service's **copy mode** will affect the behavior of the migration engine when files or folders change between copy iterations.
+1. Within the **Settings** tab, take note of the settings associated with the **Copy mode** and **Migration outcomes**. The service's **copy mode** affects the behavior of the migration engine when files or folders change between copy iterations.
- The current release of Azure Storage Mover only supports **merge** mode.
+ <a name="copy-modes"></a>
+ **Merge source into target:**
- Files will be kept in the target, even if they don't exist in the source. - Files with matching names and paths will be updated to match the source.
- - Folder renames between copies may lead to duplicate content in the target.
+ - File or folder renames between copies lead to duplicate content in the target.
+
+ **Mirror source to target:**
- **Migration outcomes** are based upon the specific storage types of the source and target endpoints. For example, because blob storage only supports "virtual" folders, source files in folders will have their paths prepended to their names and placed in a flat list within a blob container. Empty folders will be represented as an empty blob in the target. Source folder metadata will be persisted in the custom metadata field of a blob, as they are with files.
+ - Files in the target will be deleted if they don't exist in the source.
+ - Files and folders in the target will be updated to match the source.
+ - File or folder renames between copies won't lead to duplicate content. A renamed item on the source side leads to the deletion of the item with the original name in the target, and the renamed item is then uploaded again. If the renamed item is a folder, this delete-and-reupload behavior applies to all files and folders contained in it. Avoid renaming folders during a migration, especially near the root level of your source data.
+
+ **Migration outcomes** are based on the specific storage types of the source and target endpoints. For example, because blob storage only supports "virtual" folders, source files in folders will have their paths prepended to their names and placed in a flat list within a blob container. Empty folders will be represented as an empty blob in the target. Source folder metadata is persisted in the custom metadata field of a blob, as it is with files.
After viewing the effects of the copy mode and migration outcomes, select **Next** to review the values from the previous tabs.
Refer to the [resource naming convention](../azure-resource-manager/management/r
### [PowerShell](#tab/powershell)
-You'll need to use several cmdlets to create a new job definition.
+You need to use several cmdlets to create a new job definition.
Use the `New-AzStorageMoverJobDefinition` cmdlet to create a new job definition resource in a project. The following example assumes that you aren't reusing *storage endpoints* you've previously created.
New-AzStorageMoverAzStorageContainerEndpoint `
$projectName = "Your project name" $jobDefName = "Your job definition name" $JobDefDescription = "Optional, up to 1024 characters"
-$jobDefCopyMode = "Additive"
+$jobDefCopyMode = "Additive" # Merges source into target. See description in portal tab.
+#$jobDefCopyMode = "Mirror" # Mirrors source into target. See description in portal tab.
$agentName = "The name of an agent previously registered to the same storage mover resource"
New-AzStorageMoverJobDefinition `
## Next steps
-Now that you've created a job definition with source and target endpoints, learn how to estimate the time required to perform your migration job. Learn about Azure Storage Mover performance targets by visiting the article suggested below.
+Now that you've created a job definition with source and target endpoints, learn how to estimate the time required to perform your migration job.
> [!div class="nextstepaction"] > [Azure Storage Mover scale and performance targets](performance-targets.md)
storage-mover Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage-mover/release-notes.md
Title: Release notes for the Azure Storage Mover service | Microsoft Docs
description: Read the release notes for the Azure Storage Mover service, which allows you to migrate your on-premises unstructured data to the Azure Storage service. --- Previously updated : 06/21/2022 ++ Last updated : 4/14/2022
-<!--
-!########################################################
-STATUS: DRAFT
-
-CONTENT:
-
-REVIEW Stephen/Fabian: not reviewed
-REVIEW Engineering: not reviewed
-
-!########################################################
>- # Release notes for the Azure Storage Mover service Azure Storage Mover is a hybrid service, which continuously introduces new features and improvements to its cloud service and the agent components. New features often require a matching agent version that supports them. This article provides a summary of key improvements for each service and agent version combination that is released. The article also points out limitations and if possible, workarounds for identified issues.
Azure Storage Mover is a hybrid service, which continuously introduces new featu
The following Azure Storage Mover agent versions are supported:
-| Milestone | Version number | Release date | Status |
-||-|--|--|
-| Public preview release | 0.1.116 | September 15, 2022 | Supported |
+| Milestone | Version number | Release date | Status |
+||-|--|-|
+| General availability release | 1.0.229 | April 17, 2023 | Supported |
+| Public preview release | 0.1.116 | September 15, 2022 | Functioning. No longer supported by Microsoft Azure Support teams.|
### Azure Storage Mover update policy
-The Azure Storage Mover agents aren't automatically updated to new versions at this time. New functionality and fixes to any issues will require the [download](https://aka.ms/StorageMover/agent), [deployment](agent-deploy.md) and [registration](agent-register.md) of a new Storage Mover agent.
+Preview agents aren't automatically updated.
+Beginning with the general availability release of service and agent, all GA Azure Storage Mover agents are automatically updated to future versions. GA and newer agents automatically download and apply new functionality and bug fixes. If you need to [deploy another Storage Mover agent](agent-deploy.md), you can find the latest available agent version on [Microsoft Download Center](https://aka.ms/StorageMover/agent). Be sure to [register](agent-register.md) a newly deployed agent before using it for your migrations.
-> [!TIP]
-> Switching to the latest agent version can be done safely. Follow the section Upgrading to a newer agent version in the agent deployment article.
+The automatic agent update doesn't affect running migration jobs. Running jobs are allowed to complete before the update is locally applied on the agent. Any errors during the update process result in the automatic use of the previous agent version. In parallel, a new update attempt is started automatically. This behavior ensures an uninterrupted migration experience.
-New agent versions will be released on Microsoft Download Center. [https://aka.ms/StorageMover/agent](https://aka.ms/StorageMover/agent) We recommend retiring old agents and deploying agents of the current version, when they become available.
+> [!TIP]
+> Always download the latest agent version from Microsoft Download Center: [https://aka.ms/StorageMover/agent](https://aka.ms/StorageMover/agent). Previously downloaded images may no longer be supported (check the [Supported agent versions](#supported-agent-versions) table), or they may need to update themselves before they're ready for use. Speed up your deployments by always obtaining the latest image from Microsoft Download Center.
#### Major vs. minor versions
New agent versions will be released on Microsoft Download Center. [https://aka.m
#### Lifecycle and change management guarantees
-Azure Storage Mover is a hybrid service, which continuously introduces new features and improvements. Azure Storage Mover agent versions can only be supported for a limited time. To facilitate your deployment, the following rules guarantee you have enough time, and notification to accommodate agent updates/upgrades in your change management process:
+Azure Storage Mover is a hybrid service, which continuously introduces new features and improvements. Azure Storage Mover agent versions can only be supported for a limited time. Agents automatically update themselves to the latest version. There's no need to manage any part of the self-update process. However, agents need to be running and connected to the internet to check for updates. To facilitate updates to agents that haven't been running for a while:
- Major versions are supported for at least six months from the date of initial release. - We guarantee there's an overlap of at least three months between the support of major agent versions.-- Warnings are issued for registered servers using a soon-to-be expired agent at least three months prior to expiration. You can check if a registered server is using an older version of the agent in the registered agents section of a storage mover resource.
+- The [Supported agent versions](#supported-agent-versions) table lists expiration dates. Agent versions that have expired might still be able to update themselves to a supported version, but there are no guarantees.
+
+> [!IMPORTANT]
+> Preview versions of the Storage Mover agent cannot update themselves. You must replace them manually by deploying the [latest available agent](https://aka.ms/StorageMover/agent).
+
+## 2023 April 17
+
+General availability release notes for:
+
+- Service version: April 17, 2023
+- Agent version: 1.0.229
+
+### Migration scenarios
+
+Support for a migration from an NFS (v3 / v4) source share to an Azure blob container (not [HNS enabled](../storage/blobs/data-lake-storage-namespace.md)).
+
+### Migration options
+
+In addition to merging content from the source to the target (public preview), the service now supports another migration option: Mirror content from source to target.
+
+- Files in the target will be deleted if they don't exist in the source.
+- Files and folders in the target will be updated to match the source.
+- Folder renames between copies will lead to the deletion of the cloud content and reupload of anything contained in the renamed folder on the source.
+
+### Service
+
+The service now supports viewing copy logs and job logs in the Azure portal. An Azure Log Analytics workspace must be configured to receive the logs. This configuration is done once for a Storage Mover resource and applies to all agents and migration jobs in that Storage Mover resource. To configure an existing Storage Mover resource or learn how to create a new Storage Mover resource with this configuration, follow the steps in the article: [How to enable Azure Storage Mover copy and job logs](log-monitoring.md).
+
+It's possible to send the logs to a third party monitoring solution and even into a raw file in a storage account. However, the Storage Mover migration job blade in the Azure portal can only query a Log Analytics workspace for the logs. To get an integrated experience, be sure to select a Log Analytics workspace as a target.
+
+### Agent
+
+Private link connections from the agent into Azure are supported. Data that is migrated can travel from the agent over a private link connection to the target storage account in Azure. Agent registration can also be accomplished over a private link connection. Agent control messages (jobs, logs) can only be sent over the public endpoint of the Storage Mover agent gateway. If you use a firewall or proxy server to restrict public access, make sure the following URL isn't blocked: *.agentgateway.prd.azsm.azure.com. The exact URL is determined by the Azure region of the Storage Mover resource the agent is registered with.
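As a quick sanity check when a firewall or proxy is in place, you could verify outbound connectivity to the agent gateway endpoint before registering an agent. This is only a sketch; the hostname under `*.agentgateway.prd.azsm.azure.com` depends on the region of your Storage Mover resource, so the value below is a placeholder.

```azurepowershell
# Replace <region-specific-host> with the agent gateway hostname for your region.
Test-NetConnection -ComputerName "<region-specific-host>.agentgateway.prd.azsm.azure.com" -Port 443
```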
## 2022 September 15
storage Anonymous Read Access Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/anonymous-read-access-overview.md
Last updated 11/09/2022
+ # Overview: Remediating anonymous public read access for blob data
storage Anonymous Read Access Prevent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/anonymous-read-access-prevent.md
ms.devlang: powershell, azurecli-+ # Remediate anonymous public read access to blob data (Azure Resource Manager deployments)
storage Blob Inventory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blob-inventory.md
An inventory job can take longer to complete in these cases:
An inventory job might take more than one day to complete for hierarchical namespace enabled accounts that have hundreds of millions of blobs. Sometimes the inventory job fails and doesn't create an inventory file. If a job doesn't complete successfully, check subsequent jobs to see if they're complete before contacting support.
+- There is no option to generate a report retrospectively for a particular date.
+ #### Inventory jobs can't write reports to containers that have an object replication policy An object replication policy can prevent an inventory job from writing inventory reports to the destination container. Some other scenarios can archive the reports or make the reports immutable when they're partially completed which can cause inventory jobs to fail.
storage Blob Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blob-powershell.md
All blob data is stored within containers, so you'll need at least one container
```azurepowershell #Create a container object
-$container = New-AzStorageContainer -Name "myContainer" -Context $ctx
+$container = New-AzStorageContainer -Name "mycontainer" -Context $ctx
``` When you use the following examples, you'll need to replace the placeholder values in brackets with your own values. For more information about signing into Azure with PowerShell, see [Sign in with Azure PowerShell](/powershell/azure/authenticate-azureps).
The following example specifies a `-File` parameter value to upload a single, na
```azurepowershell #Set variables $path = "C:\temp\"
-$containerName = "myContainer"
+$containerName = "mycontainer"
$filename = "demo-file.txt" $imageFiles = $path + "*.png" $file = $path + $filename
The following example shows several approaches used to provide a list of blobs.
```azurepowershell #Set variables $namedContainer = "named-container"
-$demoContainer = "myContainer"
+$demoContainer = "mycontainer"
$containerPrefix = "demo" $maxCount = 1000
The following sample code provides an example of both single and multiple downlo
```azurepowershell #Set variables
-$containerName = "myContainer"
+$containerName = "mycontainer"
$path = "C:\temp\downloads\" $blobName = "demo-file.txt" $fileList = "*.png"
To read blob properties or metadata, you must first retrieve the blob from the s
The following example retrieves a blob and lists its properties. ```azurepowershell
-$blob = Get-AzStorageBlob -Blob "blue-moon.mp3" -Container "myContainer" -Context $ctx
+$blob = Get-AzStorageBlob -Blob "blue-moon.mp3" -Container "mycontainer" -Context $ctx
$properties = $blob.BlobClient.GetProperties() Echo $properties.Value ```
The example below first updates and then commits a blob's metadata, and then ret
```azurepowershell #Set variable
-$container = "myContainer"
+$container = "mycontainer"
$blobName = "blue-moon.mp3" #Retrieve blob
You can delete either a single blob or series of blobs with the `Remove-AzStorag
```azurepowershell #Create variables
-$containerName = "myContainer"
+$containerName = "mycontainer"
$blobName = "demo-file.txt" $prefixName = "file"
To learn more about the soft delete data protection option, refer to the [Soft d
```azurepowershell $accountName ="myStorageAccount" $groupName ="myResourceGroup"
-$containerName ="myContainer"
+$containerName ="mycontainer"
$blobSvc = Get-AzStorageBlobServiceProperty `
storage Blobfuse2 Commands Unmount All https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blobfuse2-commands-unmount-all.md
The following flags are inherited from grandparent command [`blobfuse2`](blobfus
Unmount all BlobFuse2 mount points: ```bash
-blobfuse2 unmount all
+sudo blobfuse2 unmount all
``` ## See also
storage Data Lake Storage Acl Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/data-lake-storage-acl-powershell.md
$file.ACL
``` > [!NOTE]
-> To a set the ACL of a specific group or user, use their respective object IDs. For example, to set the ACL of a **group**, use `group:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx`. To set the ACL of a **user**, use `user:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx`.
+> To set the ACL of a specific group or user, service principal, or managed identity, use their respective object IDs. For example, to set the ACL of a **group**, use `group:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx`. To set the ACL of a **user**, use `user:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx`.
The following image shows the output after setting the ACL of a file.
Update-AzDataLakeGen2AclRecursive -Context $ctx -FileSystem $filesystemName -Pat
``` > [!NOTE]
-> To a set the ACL of a specific group or user, use their respective object IDs. For example, `group:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx` or `user:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx`.
+> To set the ACL of a specific group or user, service principal, or managed identity, use their respective object IDs. For example, to set the ACL of a **group**, use `group:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx`. To set the ACL of a **user**, use `user:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx`.
To see an example that updates ACLs recursively in batches by specifying a batch size, see the [Update-AzDataLakeGen2AclRecursive](/powershell/module/az.storage/update-azdatalakegen2aclrecursive) reference article.
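For reference, here's a hedged sketch of setting an ACL entry for a group by object ID with the Az.Storage cmdlets; the account name, filesystem, directory, and GUID below are placeholders.

```azurepowershell
$ctx = New-AzStorageContext -StorageAccountName "<storage-account>" -UseConnectedAccount

# Read the existing ACL, add an entry for a group identified by its object ID,
# and write the updated ACL back to the directory.
$dir = Get-AzDataLakeGen2Item -Context $ctx -FileSystem "my-file-system" -Path "my-directory"
$acl = Set-AzDataLakeGen2ItemAclObject -AccessControlType group `
    -EntityId "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" -Permission r-x -InputObject $dir.ACL
Update-AzDataLakeGen2Item -Context $ctx -FileSystem "my-file-system" -Path "my-directory" -Acl $acl
```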
storage Data Lake Storage Explorer Acl https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/data-lake-storage-explorer-acl.md
You can apply ACL entries recursively on the existing child items of a parent di
To apply ACL entries recursively, Right-click the container or a directory, and then select **Propagate Access Control Lists**. The following screenshot shows the menu as it appears when you right-click a directory.
+> [!NOTE]
+> The **Propagate Access Control Lists** option is available only in Storage Explorer 1.28.1 or later versions.
+ > [!div class="mx-imgBorder"] > ![Right-clicking a directory and choosing the propagate access control setting](./media/data-lake-storage-explorer-acl/propagate-access-control-list-option.png)
storage Secure File Transfer Protocol Known Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/secure-file-transfer-protocol-known-issues.md
To learn more, see [SFTP permission model](secure-file-transfer-protocol-support
- For performance issues and considerations, see [SSH File Transfer Protocol (SFTP) performance considerations in Azure Blob storage](secure-file-transfer-protocol-performance.md).
+- By default, the Content-MD5 property of blobs that are uploaded by using SFTP is set to null. Therefore, if you want the Content-MD5 property of those blobs to contain an MD5 hash, your client must calculate that value, and then set the Content-MD5 property of the blob after the upload completes (see the sketch after this list).
+
- Maximum file upload size via the SFTP endpoint is 100 GB. - To change the storage account's redundancy/replication settings or initiate account failover, SFTP must be disabled. SFTP may be re-enabled once the conversion has completed.
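Because SFTP itself can't set blob properties, one possible approach, sketched here with the Az.Storage module rather than an SFTP client, is to compute the MD5 hash locally and stamp it on the blob after the upload. The account, container, and file names are placeholders.

```azurepowershell
# Compute the MD5 hash of the local file that was uploaded over SFTP.
$filePath = "C:\data\demo-file.txt"
$md5 = [System.Security.Cryptography.MD5]::Create()
$hashBytes = $md5.ComputeHash([System.IO.File]::ReadAllBytes($filePath))

# Set the Content-MD5 header on the already-uploaded blob, preserving its content type.
$ctx = New-AzStorageContext -StorageAccountName "<storage-account>" -UseConnectedAccount
$blob = Get-AzStorageBlob -Container "mycontainer" -Blob "demo-file.txt" -Context $ctx
$headers = [Azure.Storage.Blobs.Models.BlobHttpHeaders]::new()
$headers.ContentHash = $hashBytes
$headers.ContentType = $blob.BlobClient.GetProperties().Value.ContentType
$blob.BlobClient.SetHttpHeaders($headers)
```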
storage Soft Delete Blob Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/soft-delete-blob-manage.md
Last updated 02/16/2023
ms.devlang: csharp-+ # Manage and restore soft-deleted blobs
storage Storage Samples Blobs Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-samples-blobs-powershell.md
Last updated 11/07/2017 +
storage Versioning Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/versioning-enable.md
Last updated 02/14/2023 -+ # Enable and manage blob versioning
storage Account Encryption Key Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/account-encryption-key-create.md
Last updated 06/09/2021
-+ # Create an account that supports customer-managed keys for tables and queues
A storage account that is created to use an encryption key scoped to the account
- [Azure Storage encryption for data at rest](storage-service-encryption.md) - [Customer-managed keys for Azure Storage encryption](customer-managed-keys-overview.md) - [Configure encryption with customer-managed keys stored in Azure Key Vault](customer-managed-keys-configure-key-vault.md)-- [Configure encryption with customer-managed keys stored in Azure Key Vault Managed HSM](customer-managed-keys-configure-key-vault-hsm.md)
+- [Configure encryption with customer-managed keys stored in Azure Key Vault Managed HSM](customer-managed-keys-configure-key-vault-hsm.md)
storage Authorization Resource Provider https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/authorization-resource-provider.md
Last updated 12/12/2019
+ # Use the Azure Storage resource provider to access management resources
storage Azure Defender Storage Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/azure-defender-storage-configure.md
Learn more about the [ARM template AzAPI reference](/azure/templates/microsoft.s
#### Bicep template
-To enable and configure Microsoft Defender for Storage at the subscription level with per-transaction pricing using [Bicep](../../azure-resource-manager/bicep/overview.md), add the following to your Bicep template:
+To enable and configure Microsoft Defender for Storage at the storage account level using [Bicep](../../azure-resource-manager/bicep/overview.md), add the following to your Bicep template:
```bicep param accountName string
Learn more about the [Bicep template AzAPI reference](/azure/templates/microsoft
### REST API
-To enable and configure Microsoft Defender for Storage at the subscription level using REST API, create a PUT request with this endpoint. Replace the `subscriptionId` , `resourceGroupName`, and `accountName` in the endpoint URL with your own Azure subscription ID, resource group and storage account names accordingly.
+To enable and configure Microsoft Defender for Storage at the storage account level using REST API, create a PUT request with this endpoint. Replace the `subscriptionId`, `resourceGroupName`, and `accountName` in the endpoint URL with your own Azure subscription ID, resource group, and storage account names accordingly.
```http PUT https://management.azure.com/subscriptions/{subscriptionId}/providers/Microsoft.Security/pricings/StorageAccounts?api-version=2023-01-01
storage Classic Account Migrate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/classic-account-migrate.md
Previously updated : 04/10/2023 Last updated : 04/13/2023
The process of migrating a classic storage account involves four steps:
1. **Validate**. During the Validation phase, Azure checks the storage account to ensure that it can be migrated. 1. **Prepare**. In the Prepare phase, Azure creates a new general-purpose v1 storage account and alerts you to any problems that may have occurred. The new account is created in a new resource group in the same region as your classic account.
- At this point your classic storage account still exists. If there are any problems reported, you can correct them or abort the process.
+ At this point, your classic storage account still exists. If there are any problems reported, you can correct them or abort the process.
1. **Check manually**. It's a good idea to make a manual check of the new storage account to make sure that the output is as you expect.
-1. **Commit or abort**. If you are satisfied that the migration has been successful, then you can commit the migration. Committing the migration permanently deletes the classic storage account.
+1. **Commit or abort**. If you're satisfied that the migration has been successful, then you can commit the migration. Committing the migration permanently deletes the classic storage account.
If there are any problems with the migration, then you can abort the migration at this point. If you choose to abort, the new resource group and new storage account are deleted. Your classic account remains available. You can address any problems and attempt the migration again.
To migrate a classic storage account to the Azure Resource Manager deployment mo
1. If the Prepare step completes successfully, you'll see a link to the new resource group. Select that link to navigate to the new resource group. The migrated storage account appears under the **Resources** tab in the **Overview** page for the new resource group.
- At this point you can compare the configuration and data in the classic storage account to the newly migrated storage account. You'll see both in the list of storage accounts in the portal. Both the classic account and the migrated account have the same name.
+ At this point, you can compare the configuration and data in the classic storage account to the newly migrated storage account. You'll see both in the list of storage accounts in the portal. Both the classic account and the migrated account have the same name.
:::image type="content" source="media/classic-account-migrate/compare-classic-migrated-accounts.png" alt-text="Screenshot showing the results of the Prepare step in the Azure portal." lightbox="media/classic-account-migrate/compare-classic-migrated-accounts.png":::
To migrate a classic storage account to the Azure Resource Manager deployment mo
# [PowerShell](#tab/azure-powershell)
-To migrate a classic storage account to the Azure Resource Manager deployment model with PowerShell, you'll need to use the Azure PowerShell Service Management module. To learn how to install this module, see [Install and configure the Azure PowerShell Service Management module](/powershell/azure/servicemanagement/install-azure-ps#checking-the-version-of-azure-powershell). The key steps are included here for convenience.
+To migrate a classic storage account to the Azure Resource Manager deployment model with PowerShell, you must use the Azure PowerShell Service Management module. To learn how to install this module, see [Install and configure the Azure PowerShell Service Management module](/powershell/azure/servicemanagement/install-azure-ps#checking-the-version-of-azure-powershell). The key steps are included here for convenience.
> [!NOTE] > The cmdlets in the Azure Service Management module are for managing legacy Azure resources that use Service Management APIs, including classic storage accounts. This module includes the commands needed to migrate a classic storage account to Azure Resource Manager. > > To manage Azure Resource Manager resources, we recommend that you use the Az PowerShell module. The Az module replaces the deprecated AzureRM module. For more information about moving from the AzureRM module to the Az module, see [Migrate Azure PowerShell scripts from AzureRM to Az](/powershell/azure/migrate-from-azurerm-to-az).
-First, install PowerShellGet if you do not already have it installed. For more information on how to install PowerShellGet, see [Installing PowerShellGet](/powershell/scripting/gallery/installing-psget#installing-the-latest-version-of-powershellget). After you install PowerShellGet, close and reopen the PowerShell console.
+First, install PowerShellGet if you don't already have it installed. For more information on how to install PowerShellGet, see [Installing PowerShellGet](/powershell/scripting/gallery/installing-psget#installing-the-latest-version-of-powershellget). After you install PowerShellGet, close and reopen the PowerShell console.
Next, install the Azure Service Management module. If you also have the AzureRM module installed, you'll need to include the `-AllowClobber` parameter, as described in [Step 2: Install Azure PowerShell](/powershell/azure/servicemanagement/install-azure-ps#step-2-install-azure-powershell). After the installation is complete, import the Azure Service Management module.
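For reference, the install-and-import steps just described look roughly like this, assuming PowerShellGet is already in place:

```azurepowershell
# Install the legacy Azure Service Management module; -AllowClobber is only needed
# if the AzureRM module is also installed.
Install-Module -Name Azure -AllowClobber

# Import the module into the current session.
Import-Module -Name Azure
```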
Check the configuration for the prepared storage account with either Azure Power
Move-AzureStorageAccount -Abort -StorageAccountName $accountName ```
-Finally, when you are satisfied with the prepared configuration, move forward with the migration and commit the resources with the following command:
+Finally, when you're satisfied with the prepared configuration, move forward with the migration and commit the resources with the following command:
```azurepowershell Move-AzureStorageAccount -Commit -StorageAccountName $accountName
storage Classic Account Migration Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/classic-account-migration-overview.md
The [Azure Resource Manager](../../azure-resource-manager/management/overview.md
If you have classic storage accounts, start planning your migration now. Complete it by August 31, 2024, to take advantage of Azure Resource Manager. To learn more about the benefits of Azure Resource Manager, see [The benefits of using Resource Manager](../../azure-resource-manager/management/overview.md#the-benefits-of-using-resource-manager).
-Storage accounts created using the classic deployment model will follow the [Modern Lifecycle Policy](https://support.microsoft.com/help/30881/modern-lifecycle-policy) for retirement.
+Storage accounts created using the classic deployment model follow the [Modern Lifecycle Policy](https://support.microsoft.com/help/30881/modern-lifecycle-policy) for retirement.
## Why is a migration required?
-On August 31, 2024, we'll retire classic Azure storage accounts and they'll no longer be accessible. Before that date, you'll need to migrate them to Azure Resource Manager, which provides the same capabilities as well as new features, including:
+On August 31, 2024, we'll retire classic Azure storage accounts and they'll no longer be accessible. Before that date, you must migrate them to Azure Resource Manager, which provides all of the same functionality, as well as new features, including:
- A management layer that simplifies deployment by enabling you to create, update, and delete resources. - Resource grouping, which allows you to deploy, monitor, manage, and apply access control policies to resources as a group. - All new features for Azure Storage are implemented for storage accounts in Azure Resource Manager deployments, so customers that are still using classic resources will no longer have access to new features and updates.
-## How does this affect me?
+## What happens if I don't migrate my accounts?
-On September 1, 2024, customers will no longer be able to connect to classic storage accounts by using Azure Service Manager. Any data still contained in these accounts will no longer be accessible through Azure Service Manager.
+Starting on September 1, 2024, customers will no longer be able to connect to classic storage accounts by using Azure Service Manager. Any data still contained in these accounts will no longer be accessible through Azure Service Manager.
> [!WARNING] > If you do not migrate your classic storage accounts to Azure Resource Manager by August 31, 2024, you will permanently lose access to the data in those accounts.
-Depending on when your subscription was created, you may no longer to be able to create classic storage accounts:
--- Subscriptions created after August 31, 2022 can no longer create classic storage accounts.-- Subscriptions created before September 1, 2022 will be able to create classic storage accounts until September 1, 2023-
-We recommend creating storage accounts only in Azure Resource Manager from this point forward.
- ## What actions should I take? To migrate your classic storage accounts, you should: 1. Identify all classic storage accounts in your subscription. 1. Migrate any classic storage accounts to Azure Resource Manager.
-1. Check your applications and logs to determine whether you are dynamically creating, updating, or deleting classic storage accounts from your code, scripts, or templates. If you are, then you need to update your applications to use Azure Resource Manager accounts instead.
+1. Check your applications and logs to determine whether you're dynamically creating, updating, or deleting classic storage accounts from your code, scripts, or templates. If you are, then you need to update your applications to use Azure Resource Manager accounts instead.
For step-by-step instructions, see [How to migrate your classic storage accounts to Azure Resource Manager](classic-account-migrate.md). For an in-depth overview of the migration process, see [Understand storage account migration from the classic deployment model to Azure Resource Manager](classic-account-migration-process.md).
For step-by-step instructions, see [How to migrate your classic storage accounts
For step-by-step instructions for migrating your classic storage accounts, see [How to migrate your classic storage accounts to Azure Resource Manager](classic-account-migrate.md). For an in-depth overview of the migration process, see [Understand storage account migration from the classic deployment model to Azure Resource Manager](classic-account-migration-process.md).
-### At what point can classic storage accounts no longer be created?
+### Can I create new classic accounts?
-Subscriptions created after August 2022 are no longer be able to create classic storage accounts. Subscriptions created before August 2022 can continue to create and manage classic storage resources until the retirement date of August 31, 2024.
+Depending on when your subscription was created, you may no longer be able to create classic storage accounts:
+
+- Subscriptions created after August 31, 2022 can no longer create classic storage accounts.
+- Subscriptions created before September 1, 2022 will be able to create classic storage accounts until September 1, 2023.
+
+We recommend creating storage accounts only in Azure Resource Manager from this point forward.
### What happens to existing classic storage accounts after August 31, 2024?
-After August 31, 2024, you will no longer be able to access data in your classic storage accounts or manage them. It won't be possible to migrate a classic storage account after August 31, 2024.
+After August 31, 2024, you'll no longer be able to access data in your classic storage accounts or manage them. It won't be possible to migrate a classic storage account after August 31, 2024.
### Can Microsoft handle this migration for me?
-No, Microsoft cannot migrate a customer's storage account on their behalf. Customers must use the self-serve options listed above.
+No, Microsoft can't migrate a customer's storage account on their behalf. Customers must use the self-serve options listed above.
### Will there be downtime when migrating my storage account from Classic to Resource Manager?
-There is no downtime to migrate a classic storage account to Resource Manager. However, there is downtime for other scenarios linked to classic virtual machine (VM) migration.
+There's no downtime to migrate a classic storage account to Resource Manager. However, there may be downtime for other scenarios linked to classic virtual machine (VM) migration.
-### What operations are not available during the migration?
+### What operations aren't available during the migration?
-Also, during the migration, management operations are not available on the storage account. Data operations can continue to be performed during the migration.
+Also, during the migration, management operations aren't available on the storage account. Data operations can continue to be performed during the migration.
-If you are creating or managing container objects with the Azure Storage resource provider, note that those operations will be blocked while the migration is underway. For more information, see [Understand storage account migration from the classic deployment model to Azure Resource Manager](classic-account-migration-process.md).
+If you're creating or managing container objects with the Azure Storage resource provider, keep in mind that those operations are blocked while the migration is underway. For more information, see [Understand storage account migration from the classic deployment model to Azure Resource Manager](classic-account-migration-process.md).
### Are storage account access keys regenerated as part of the migration?
-No, account access keys are not regenerated during the migration. Your access keys and connection strings will continue to work unchanged after the migration is complete.
+No, account access keys aren't regenerated during the migration. Your access keys and connection strings will continue to work unchanged after the migration is complete.
### Are Azure RBAC role assignments maintained through the migration?
Your storage account will be a general-purpose v1 account after the migration pr
### Will the URL of my storage account remain the same post-migration?
-Yes, the migrated storage account will have the same name and address as the classic account.
+Yes, the migrated storage account has the same name and address as the classic account.
-### Can additional verbose logging be added as part of the migration process?
+### Can verbose logging be added as part of the migration process?
No. The migration service doesn't have the capability to provide additional logging.
storage Classic Account Migration Process https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/classic-account-migration-process.md
Previously updated : 04/10/2023 Last updated : 04/13/2023
You can migrate your classic storage account with the Azure portal, PowerShell,
Before you start the migration: - Ensure that the storage accounts that you want to migrate don't use any unsupported features or configurations. Usually the platform detects these issues and generates an error.+
+ If you're migrating Azure virtual machines (VMs) that include disks in classic storage accounts, be sure to familiarize yourself with the process of VM migration. For information about unsupported features and configurations, see [Overview of platform-supported migration of IaaS resources from classic to Azure Resource Manager](../../virtual-machines/migration-classic-resource-manager-overview.md#unsupported-features-and-configurations). For a list of errors that may occur in relation to classic disk artifacts, see [Common errors during Classic to Azure Resource Manager migration](../../virtual-machines/migration-classic-resource-manager-errors.md#list-of-errors).
+ - Plan your migration during non-business hours to accommodate for any unexpected failures that might happen during migration. - Evaluate any Azure role-based access control (Azure RBAC) roles that are configured on the classic storage account, and plan for after the migration is complete.
There are four steps to the migration process, as shown in the following diagram
1. **Validate**. During the Validation phase, Azure checks the storage account to ensure that it can be migrated. 1. **Prepare**. In the Prepare phase, Azure creates a new general-purpose v1 storage account and alerts you to any problems that may have occurred. The new account is created in a new resource group in the same region as your classic account.
- At this point your classic storage account still exists. If there are any problems reported, you can correct them or abort the process.
+ At this point, your classic storage account still exists. If there are any problems reported, you can correct them or abort the process.
1. **Check manually**. It's a good idea to make a manual check of the new storage account to make sure that the output is as you expect.
-1. **Commit or abort**. If you are satisfied that the migration has been successful, then you can commit the migration. Committing the migration permanently deletes the classic storage account.
+1. **Commit or abort**. If you're satisfied that the migration has been successful, then you can commit the migration. Committing the migration permanently deletes the classic storage account.
If there are any problems with the migration, then you can abort the migration at this point. If you choose to abort, the new resource group and new storage account are deleted. Your classic account remains available. You can address any problems and attempt the migration again.
There are four steps to the migration process, as shown in the following diagram
### Validate
-The Validation step is the first step in the migration process. The goal of this step is to analyze the state of the resources that you want to migrate from the classic deployment model. The Validation step evaluates whether the resources are capable of migration (success or failure). If the classic storage account is not capable of migration, Azure lists the reasons why.
+The Validation step is the first step in the migration process. The goal of this step is to analyze the state of the resources that you want to migrate from the classic deployment model. The Validation step evaluates whether the resources are capable of migration (success or failure). If the classic storage account isn't capable of migration, Azure lists the reasons why.
The Validation step analyzes the state of resources in the classic deployment model. It checks for failures and unsupported scenarios due to different configurations of the storage account in the classic deployment model.
-The Validation step does not check for virtual machine (VM) disks that may be associated with the storage account. You must check your storage accounts manually to determine whether they support VM disks. For more information, see the following articles:
+The Validation step doesn't check for VM disks that may be associated with the storage account. You must check your storage accounts manually to determine whether they support VM disks. For more information, see the following articles:
-- [How to migrate your classic storage accounts to Azure Resource Manager](classic-account-migrate.md)-- [Migrate to Resource Manager with PowerShell](../../virtual-machines/migration-classic-resource-manager-ps.md#step-52-migrate-a-storage-account)-- [Migrate VMs to Resource Manager using Azure CLI](../../virtual-machines/migration-classic-resource-manager-cli#step-5-migrate-a-storage-account.md
+- [Migrate classic storage accounts to Azure Resource Manager](classic-account-migrate.md)
+- [Migrate VMs to Resource Manager with PowerShell](../../virtual-machines/migration-classic-resource-manager-ps.md#step-52-migrate-a-storage-account)
+- [Migrate VMs to Resource Manager using Azure CLI](../../virtual-machines/migration-classic-resource-manager-cli.md#step-5-migrate-a-storage-account)
Keep in mind that it's not possible to check for every constraint that the Azure Resource Manager stack might impose on the storage account during migration. Some constraints are only checked when the resources undergo transformation in the next step of migration (the Prepare step).
The Prepare step is the second step in the migration process. The goal of this s
> [!IMPORTANT] > Your classic storage account is not modified during this step. It's a safe step to run if you're trying out migration.
-If the storage account is not capable of migration, Azure stops the migration process and lists the reason why the Prepare step failed.
+If the storage account isn't capable of migration, Azure stops the migration process and lists the reason why the Prepare step failed.
-If the storage account is capable of migration, Azure locks management plane operations for the storage account under migration. For example, you cannot regenerate the storage account keys while the Prepare phase is underway. Azure then creates a new resource group as the classic storage account. The name of the new resource group follows the pattern `<classic-account-name>-Migrated`.
+If the storage account is capable of migration, Azure locks management plane operations for the storage account under migration. For example, you can't regenerate the storage account keys while the Prepare phase is underway. Azure then creates a new resource group as the classic storage account. The name of the new resource group follows the pattern `<classic-account-name>-Migrated`.
> [!NOTE] > It is not possible to select the name of the resource group that is created for a migrated storage account. After migration is complete, however, you can use the move feature of Azure Resource Manager to move your migrated storage account to a different resource group. For more information, see [Move resources to a new subscription or resource group](../../azure-resource-manager/management/move-resource-group-and-subscription.md).
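As a sketch of the move mentioned in the note, assuming the Az PowerShell module and placeholder resource names:

```azurepowershell
# Move the migrated account out of the auto-generated "<classic-account-name>-Migrated"
# resource group into a resource group of your choosing.
$account = Get-AzResource -ResourceGroupName "mystorageaccount-Migrated" `
    -Name "mystorageaccount" -ResourceType "Microsoft.Storage/storageAccounts"
Move-AzResource -DestinationResourceGroupName "my-target-rg" -ResourceId $account.ResourceId
```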
-Finally, Azure migrates the storage account and its configuration to a new storage account in Azure Resource Manager in the same region as the classic storage account. At this point your classic storage account still exists. If there are any problems reported during the Prepare step, you can correct them or abort the process.
+Finally, Azure migrates the storage account and its configuration to a new storage account in Azure Resource Manager in the same region as the classic storage account. At this point, your classic storage account still exists. If there are any problems reported during the Prepare step, you can correct them or abort the process.
### Check manually After the Prepare step is complete, both accounts exist in your subscription, so that you can review and compare the classic storage account in the pre-migration state and in Azure Resource Manager. For example, you can examine the new account via the Azure portal to ensure that the storage account's configuration is as expected.
-There is no set window of time before which you need to commit or abort the migration. You can take as much time as you need for the Check phase. However, management plane operations are locked for the classic storage account until you either abort or commit.
+There's no set window of time before which you need to commit or abort the migration. You can take as much time as you need for the Check phase. However, management plane operations are locked for the classic storage account until you either abort or commit.
### Abort
-To revert your changes to the classic deployment model, you can choose to abort the migration. Aborting the migration deletes the new storage account and new resource group. Your classic storage account is not affected if you choose to abort the migration.
+To revert your changes to the classic deployment model, you can choose to abort the migration. Aborting the migration deletes the new storage account and new resource group. Your classic storage account isn't affected if you choose to abort the migration.
> [!CAUTION] > You cannot abort the migration after you have committed the migration. Make sure that you have checked your migrated storage account carefully for errors before you commit. ### Commit
-After you are satisfied that your classic storage account has been migrated successfully, you can commit the migration. Committing the migration deletes your classic storage account. Your data is now available only in the newly migrated account in the Resource Manager deployment model.
+After you're satisfied that your classic storage account has been migrated successfully, you can commit the migration. Committing the migration deletes your classic storage account. Your data is now available only in the newly migrated account in the Resource Manager deployment model.
> [!NOTE] > Committing the migration is an idempotent operation. If it fails, retry the operation. If it continues to fail, create a support ticket or ask a question on [Microsoft Q&A](/answers/index.html).
Any RBAC role assignments that are scoped to the classic storage account are mai
### Account keys
-The account keys are not changed or rotated during the migration. You do not need to regenerate your account keys after the migration is complete. You will not need to update connection strings in any applications that are using the account keys after the migration.
+The account keys aren't changed or rotated during the migration. You don't need to regenerate your account keys after the migration is complete. You won't need to update connection strings in any applications that are using the account keys after the migration.
### Portal support
-You can manage your migrated storage accounts in the [Azure portal](https://portal.azure.com). You will not be able to use the classic portal to manage your migrated storage accounts.
+You can manage your migrated storage accounts in the [Azure portal](https://portal.azure.com). You won't be able to use the classic portal to manage your migrated storage accounts.
## See also
storage Lock Account Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/lock-account-resource.md
Last updated 03/09/2021 + # Apply an Azure Resource Manager lock to a storage account
storage Scalability Targets Resource Provider https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/scalability-targets-resource-provider.md
Last updated 12/18/2019 + # Scalability and performance targets for the Azure Storage resource provider
storage Storage Account Get Info https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-account-get-info.md
Last updated 12/12/2022
-+ # Get storage account configuration information
storage Elastic San Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/elastic-san/elastic-san-introduction.md
Azure Elastic storage area network (SAN) is Microsoft's answer to the problem of workload optimization and integration between your large scale databases and performance-intensive mission-critical applications. Elastic SAN Preview is a fully integrated solution that simplifies deploying, scaling, managing, and configuring a SAN, while also offering built-in cloud capabilities like high availability.
-Elastic SAN is designed for large scale IO-intensive workloads and top tier databases such as SQL, MariaDB, and support hosting the workloads on virtual machines, or containers such as Azure Kubernetes Service.
+Elastic SAN is designed for large scale IO-intensive workloads and top tier databases such as SQL and MariaDB, and supports hosting the workloads on virtual machines, or containers such as Azure Kubernetes Service. Instead of having to deploy and manage individual storage options for each individual compute deployment, you can provision an Elastic SAN and use the SAN volumes as backend storage for all your workloads. Consolidating your storage like this can be more cost effective if you have a sizeable amount of large scale IO-intensive workloads and top tier databases.
## Benefits of Elastic SAN
storage Elastic San Planning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/elastic-san/elastic-san-planning.md
Data in an Azure Elastic SAN is encrypted and decrypted transparently using 256-
For more information about the cryptographic modules underlying SSE, see [Cryptography API: Next Generation](/windows/desktop/seccng/cng-portal).
-## Protocol compatibility
-
-### iSCSI support
+## iSCSI support
Elastic SAN supports the [internet Small Computer Systems Interface](https://en.wikipedia.org/wiki/ISCSI) (iSCSI) protocol. The following iSCSI commands are currently supported:
storage File Sync Cloud Tiering Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/file-sync/file-sync-cloud-tiering-overview.md
description: Understand cloud tiering, an optional Azure File Sync feature. Freq
Previously updated : 04/13/2021 Last updated : 04/13/2023
Cloud tiering, an optional feature of Azure File Sync, decreases the amount of l
When enabled, this feature stores only frequently accessed (hot) files on your local server. Infrequently accessed (cool) files are split into namespace (file and folder structure) and file content. The namespace is stored locally and the file content stored in an Azure file share in the cloud.
-When a user opens a tiered file, Azure File Sync seamlessly recalls the file data from the file share in Azure.
+When a user opens a tiered file, Azure File Sync seamlessly recalls the file data from the Azure file share.
## How cloud tiering works
For example, if your local disk capacity is 200 GiB and you want at least 40 GiB
#### Date policy
-With the **date policy**, cool files are tiered to the cloud if they haven't been accessed (that is, read or written to) for x number of days. For example, if you noticed that files that have gone more than 15 days without being accessed are typically archival files, you should set your date policy to 15 days.
+With the **date policy**, cool files are tiered to the cloud if they haven't been accessed (read or written to) for x number of days. For example, if you notice that files that have gone more than 15 days without being accessed are typically archival files, you should set your date policy to 15 days.
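As a sketch of applying the volume free space and date policies together, assuming the Az.StorageSync module and placeholder resource names (the 20 percent and 15 day values are only examples):

```azurepowershell
Set-AzStorageSyncServerEndpoint `
    -ResourceGroupName "<resource-group>" `
    -StorageSyncServiceName "<storage-sync-service>" `
    -SyncGroupName "<sync-group>" `
    -Name "<server-endpoint>" `
    -CloudTiering `
    -VolumeFreeSpacePercent 20 `
    -TierFilesOlderThanDays 15
```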
For more examples on how the date policy and volume free space policy work together, see [Choose Azure File Sync cloud tiering policies](file-sync-choose-cloud-tiering-policies.md). ### Windows Server data deduplication
-Data deduplication is supported on volumes that have cloud tiering enabled beginning with Windows Server 2016. For more details, please see [Planning for an Azure File Sync deployment](file-sync-planning.md#data-deduplication).
+Data deduplication is supported on volumes that have cloud tiering enabled beginning with Windows Server 2016. For details, see [Planning for an Azure File Sync deployment](file-sync-planning.md#data-deduplication).
### Cloud tiering heatmap
-Azure File Sync monitors file access (read and write operations) over time and, based on how frequent and recent access is, assigns a heat score to every file. It uses these scores to build a "heatmap" of your namespace on each server endpoint. This heatmap is a list of all syncing files in a location with cloud tiering enabled, ordered by their heat score. Frequently accessed files that were recently opened are considered hot, while files that were barely touched and haven't been accessed for some time are considered cool.
+Azure File Sync monitors file access (read and write operations) over time and assigns a heat score to every file based on how recently and frequently the file is accessed. It uses these scores to build a "heatmap" of your namespace on each server endpoint. This heatmap is a list of all syncing files in a location with cloud tiering enabled, ordered by their heat score. Frequently accessed files that were recently opened are considered hot, while files that were barely touched and haven't been accessed for some time are considered cool.
-To determine the relative position of an individual file in that heatmap, the system uses the maximum of its timestamps, in the following order: MAX(Last Access Time, Last Modified Time, Creation Time).
+To determine the relative position of an individual file in that heatmap, the system uses the maximum of its timestamps, in the following order: MAX (Last Access Time, Last Modified Time, Creation Time).
-Typically, last access time is tracked and available. However, when a new server endpoint is created, with cloud tiering enabled, not enough time has passed to observe file access. If there is no valid last access time, the last modified time is used instead, to evaluate the relative position in the heatmap.
+Typically, last access time is tracked and available. However, when a new server endpoint is created with cloud tiering enabled, not enough time has passed to observe file access. If there's no valid last access time, the last modified time is used instead, to evaluate the relative position in the heatmap.
-The date policy works the same way. Without a last access time, the date policy will act on the last modified time. If that is unavailable, it will fall back to the create time of a file. Over time, the system will observe more file access requests and automatically start to use the self-tracked last access time.
+The date policy works the same way. Without a last access time, the date policy will act on the last modified time. If that's unavailable, it will fall back to the create time of a file. Over time, the system will observe more file access requests and automatically start to use the self-tracked last access time.
> [!NOTE]
-> Cloud tiering does not depend on the NTFS feature for tracking last access time. This NTFS feature is off by default and due to performance considerations, we do not recommend that you manually enable this feature. Cloud tiering tracks last access time separately.
+> Cloud tiering does not depend on the NTFS feature for tracking last access time. This NTFS feature is off by default and due to performance considerations, we don't recommend that you manually enable this feature. Cloud tiering tracks last access time separately.
### Proactive recalling When a file is created or modified, you can proactively recall a file to servers that you specify. Proactive recall makes the new or modified file readily available for consumption in each specified server.
-For example, a globally distributed company has branch offices in the US and in India. In the morning (US time), information workers create a new folder and new files for a brand new project and work all day on it. Azure File Sync will sync folder and files to the Azure file share (cloud endpoint). Information workers in India will continue working on the project in their timezone. When they arrive in the morning, the local Azure File Sync enabled server in India needs to have these new files available locally, such that the India team can efficiently work off of a local cache. Enabling this mode prevents the initial file access to be slower because of on-demand recall and enables the server to proactively recall the files as soon as they were changed or created in the Azure file share.
+For example, a globally distributed company has branch offices in the US and India. In the morning in the US, information workers create a new folder and files for a brand new project, and work all day on it. Azure File Sync will sync folder and files to the Azure file share (cloud endpoint). Information workers in India will continue working on the project in their time zone. When they arrive in the morning, the local Azure File Sync enabled server in India needs to have these new files available locally so the India team can efficiently work off of a local cache. Enabling this mode tells the server to proactively recall the files as soon as they're changed or created in the Azure file share, improving file access times.
-If files recalled to the server are not needed locally, then the unnecessary recall can increase your egress traffic and costs. Therefore, only enable proactive recalling when you know that pre-populating a server's cache with recent changes from the cloud will have a positive effect on users or applications using the files on that server.
+If files recalled to the server aren't needed locally, then the unnecessary recall can increase your egress traffic and costs. Therefore, only enable proactive recalling when you know that pre-populating a server's cache with recent changes from the cloud will have a positive effect on users or applications using the files on that server.
-Enabling proactive recalling may also result in increased bandwidth usage on the server and may cause other relatively new content on the local server to be aggressively tiered due to the increase in files being recalled. In turn, tiering too soon may lead to more recalls if the files being tiered are considered hot by servers.
+Enabling proactive recalling might also result in increased bandwidth usage on the server and could cause other relatively new content on the local server to be aggressively tiered due to the increase in files being recalled. In turn, tiering too soon might lead to more recalls if the files being tiered are considered hot by servers.
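If you prefer to script this behavior instead of using the portal setting shown below, the following is a minimal sketch using the Az.StorageSync PowerShell module. The resource names and local path are placeholders, and the `-LocalCacheMode` value is an assumption based on the module's documented options; verify it against your installed module version.

```powershell
# Sketch: enable proactive recall on an existing server endpoint with Az.StorageSync.
# Assumes you're already signed in (Connect-AzAccount); resource names and path are placeholders.
$serverEndpoint = Get-AzStorageSyncServerEndpoint `
    -ResourceGroupName "myResourceGroup" `
    -StorageSyncServiceName "myStorageSyncService" `
    -SyncGroupName "mySyncGroup" |
    Where-Object { $_.ServerLocalPath -eq "D:\share" }

# DownloadNewAndModifiedFiles proactively recalls new and changed files from the Azure file share;
# UpdateLocallyCachedFiles is the default on-demand behavior.
Set-AzStorageSyncServerEndpoint -InputObject $serverEndpoint -LocalCacheMode DownloadNewAndModifiedFiles
```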
:::image type="content" source="media/storage-sync-files-deployment-guide/proactive-download.png" alt-text="An image showing the Azure file share download behavior for a server endpoint currently in effect and a button to open a menu that allows to change it.":::
Cloud tiering is the separation between namespace (the file and folder hierarchy
#### Tiered file
-For tiered files, the size on disk is zero since the file content itself isn't being stored locally. When a file is tiered, the Azure File Sync file system filter (StorageSync.sys) replaces the file locally with a pointer (reparse point). The reparse point represents a URL to the file in the Azure file share. A tiered file has both the "offline" attribute and the FILE_ATTRIBUTE_RECALL_ON_DATA_ACCESS attribute set in NTFS so that third-party applications can securely identify tiered files.
+For tiered files, the size on disk is zero because the file content itself isn't being stored locally. When a file is tiered, the Azure File Sync file system filter (StorageSync.sys) replaces the file locally with a pointer called a reparse point. The reparse point represents a URL to the file in the Azure file share. A tiered file has both the `offline` attribute and the `FILE_ATTRIBUTE_RECALL_ON_DATA_ACCESS` attribute set in NTFS so that third-party applications can securely identify tiered files.
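As a quick illustration of how these attributes can be used, here's a small sketch that lists files carrying the recall-on-data-access bit under a given path. The path is a placeholder; the attribute value is the standard NTFS constant.

```powershell
# Sketch: list tiered files under a path by checking the FILE_ATTRIBUTE_RECALL_ON_DATA_ACCESS bit (0x400000).
# Tiered files also carry the Offline attribute; Length is the logical size, while size on disk is zero.
$recallOnDataAccess = 0x400000
Get-ChildItem -Path "D:\share" -Recurse -File -Force |
    Where-Object { ($_.Attributes -band $recallOnDataAccess) -ne 0 } |
    Select-Object FullName, Length, Attributes
```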
![A screenshot of a file's properties when it is tiered - namespace only.](media/storage-sync-cloud-tiering-overview/cloud-tiering-overview-2.png)

#### Locally cached file
-On the other hand, for a file stored in an on-premises file server, the size on disk is about equal to the logical size of the file since the entire file (file attributes + file content) is stored locally.
+For files stored in an on-premises file server, the size on disk is about equal to the logical size of the file, because the entire file (file attributes + file content) is stored locally.
![A screenshot of a file's properties when it is not tiered - namespace + file content.](media/storage-sync-cloud-tiering-overview/cloud-tiering-overview-1.png)
-It's also possible for a file to be partially tiered (or partially recalled). In a partially tiered file, only part of the file is stored on disk. You may have partially recalled files on your volume if files are partially read by applications that support streaming access to files. Some examples are multimedia players and zip utilities. Azure File Sync is efficient and recalls only the requested information from the connected Azure file share.
+It's also possible for a file to be partially tiered or partially recalled. In a partially tiered file, only part of the file is stored on disk. You might have partially recalled files on your volume if files are partially read by applications that support streaming access to files. Some examples are multimedia players and zip utilities. Azure File Sync is efficient and recalls only the requested information from the connected Azure file share.
> [!NOTE]
> Size represents the logical size of the file. Size on disk represents the physical size of the file stream that's stored on the disk.

## Low disk space mode
-Disks with server endpoints can run out of space for several reasons, even with cloud tiering enabled. This could result in Azure File Sync not working as expected and even unusable. While it is not possible and not in control of Azure File Sync to prevent these occurrences completely, low disk space mode (new for Azure File Sync agent version 15.1) is designed to avoid a server endpoint reaching this situation.
+Disks with server endpoints can run out of space for several reasons, even with cloud tiering enabled. This could result in Azure File Sync not working as expected and even becoming unusable. While it isn't possible for Azure File Sync to prevent these occurrences completely, low disk space mode (new for Azure File Sync agent version 15.1) is designed to avoid a server endpoint reaching this situation.
-For server endpoints with cloud tiering enabled and volume free space policy set, if the free space on the volume reaches below the calculated threshold, then the volume is in low disk space mode.
+For server endpoints with cloud tiering enabled and volume free space policy set, if the free space on the volume drops below the calculated threshold, then the volume is in low disk space mode.
In low disk space mode, the Azure File Sync agent does two things differently:

-- Proactive Tiering: In this mode the File Sync agent tiers files proactively to the cloud . Sync agent checks for files to be tiered every minute instead of the normal frequency of every 1 hour. Volume free space policy tiering typically does not happen during initial upload sync until the full upload is complete, but in low disk space mode, tiering is enabled during the initial upload sync and files will be considered for tiering once the individual file has been uploaded to the Azure file share.
+- **Proactive Tiering**: In this mode, the File Sync agent tiers files proactively to the cloud. The sync agent checks for files to be tiered every minute instead of the normal frequency of every hour. Volume free space policy tiering typically doesn't happen during initial upload sync until the full upload is complete; however, in low disk space mode, tiering is enabled during the initial upload sync, and files will be considered for tiering once the individual file has been uploaded to the Azure file share.
-- Non-Persistent Recalls: When a user opens a tiered file, files recalled from the Azure File Share directly will not be persisted to the disk. Note that recalls initiated by the cmdlet Invoke-StorageSyncFileRecall are an exemption from this rule and will be persisted to disk.
+- **Non-Persistent Recalls**: When a user opens a tiered file, files recalled from the Azure file share directly won't be persisted to the disk. Recalls initiated by the `Invoke-StorageSyncFileRecall` cmdlet are an exception to this rule and will be persisted to disk.
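For reference, here's a hedged sketch of recalling files manually with that cmdlet so they persist on disk. It assumes the agent's default installation path and server cmdlets module; the server endpoint path is a placeholder.

```powershell
# Sketch: manually recall files so they persist on disk, even in low disk space mode.
# Assumes the default agent installation path; adjust if the agent is installed elsewhere.
Import-Module "C:\Program Files\Azure\StorageSyncAgent\StorageSync.Management.ServerCmdlets.dll"

# Recall everything under the server endpoint path (placeholder).
Invoke-StorageSyncFileRecall -Path "D:\share"
```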
-When the volume free space reaches above the threshold, Azure File Sync reverts to the normal state automatically. Note that low disk space mode only applies to servers with cloud tiering enabled and will always respect the volume free space policy.
+When the volume free space surpasses the threshold, Azure File Sync reverts to the normal state automatically. Low disk space mode only applies to servers with cloud tiering enabled and will always respect the volume free space policy.
-If a volume has two server endpoints, one with tiering-enabled and one without tiering, low disk space mode will only apply to the server endpoint where tiering is enabled.
+If a volume has two server endpoints, one with tiering enabled and one without tiering, then low disk space mode will only apply to the server endpoint where tiering is enabled.
### How is the threshold for low disk space mode calculated?
-The threshold is calculated by taking the minimum of the following three numbers:
+Azure File Sync calculates the threshold by taking the minimum of the following three numbers:
- 10% of volume size in GB
-- Volume Free Space Policy in GB
-- 20 GB of volume free space
+- Volume Free Space Policy in GB
+- 20 GB of volume free space
The following table includes some examples of how the threshold is calculated and when the volume will be in low disk space mode.
-| Volume Size | Volume Free Space Policy | Current Volume Free Space | Threshold \= Min (10%, Volume Free Space Policy, 20GB) | Is Low Disk Space Mode? | Reason |
+| Volume Size | Volume Free Space Policy | Current Volume Free Space | Threshold \= Min (10%, Volume Free Space Policy, 20 GB) | Is Low Disk Space Mode? | Reason |
| -- | -- | -- | -- | -- | -- |
-| 100GB | 7% (7GB) | 9% (9GB) | 7GB = Min (10GB, 7GB, 20GB) | No | Current Volume Free Space > Threshold |
-| 100GB | 7% (7GB) | 5% (5GB) | 7GB = Min (10GB, 7GB, 20GB) | Yes | Current Volume Free Space < Threshold |
-| 300GB | 8% (24GB) | 7% (21GB) | 20GB = Min (30GB, 24GB, 20GB) | No | Current Volume Free Space > Threshold |
-| 300GB | 8% (24GB) | 6% (18GB) | 20GB = Min (30GB, 24GB, 20GB) | Yes | Current Volume Free Space < Threshold |
+| 100 GB | 7% (7 GB) | 9% (9 GB) | 7 GB = Min (10 GB, 7 GB, 20 GB) | No | Current Volume Free Space > Threshold |
+| 100 GB | 7% (7 GB) | 5% (5 GB) | 7 GB = Min (10 GB, 7 GB, 20 GB) | Yes | Current Volume Free Space < Threshold |
+| 300 GB | 8% (24 GB) | 7% (21 GB) | 20 GB = Min (30 GB, 24 GB, 20 GB) | No | Current Volume Free Space > Threshold |
+| 300 GB | 8% (24 GB) | 6% (18 GB) | 20 GB = Min (30 GB, 24 GB, 20 GB) | Yes | Current Volume Free Space < Threshold |
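The same calculation can be sketched in PowerShell; the inputs below are placeholders that mirror the last row of the table.

```powershell
# Sketch: threshold = min(10% of volume size, volume free space policy, 20 GB).
$volumeSizeGB       = 300
$freeSpacePolicyGB  = 0.08 * $volumeSizeGB   # 8% policy = 24 GB
$currentFreeSpaceGB = 18

$thresholdGB = [Math]::Min([Math]::Min(0.10 * $volumeSizeGB, $freeSpacePolicyGB), 20)
$isLowDiskSpaceMode = $currentFreeSpaceGB -lt $thresholdGB

"Threshold: $thresholdGB GB; low disk space mode: $isLowDiskSpaceMode"
# With these inputs: Threshold: 20 GB; low disk space mode: True (matches the last table row).
```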
### How does low disk space mode work with volume free space policy?

Low disk space mode always respects the volume free space policy. The threshold calculation is designed to make sure the volume free space policy set by the user is respected.

### How to get out of low disk space mode?
-Low disk space mode is designed to revert to normal behavior when volume free space is above the threshold. You can help speed up the process by looking for any recently created files outside the server endpoint location and moving them to a different disk if possible.
+Low disk space mode is designed to revert to normal behavior when volume free space is above the threshold. You can help speed up the process by looking for any recently created files outside the server endpoint location and moving them to a different disk.
### How to check if a server is in low disk space mode?

Event ID 19000 is logged to the Telemetry event log every minute for each server endpoint. Use this event to determine if the server endpoint is in low disk space mode (IsLowDiskMode = true). The Telemetry event log is located in Event Viewer under Applications and Services\Microsoft\FileSync\Agent.
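If you'd rather script the check than open Event Viewer, the following is a hedged sketch. The channel name `Microsoft-FileSync-Agent/Telemetry` is an assumption based on the Event Viewer path above; confirm it on your server first, for example with `Get-WinEvent -ListLog "*FileSync*"`.

```powershell
# Sketch: read the most recent Event ID 19000 and inspect it for IsLowDiskMode.
# The log name below is assumed; list logs with Get-WinEvent -ListLog "*FileSync*" if it differs.
$latest = Get-WinEvent -FilterHashtable @{ LogName = 'Microsoft-FileSync-Agent/Telemetry'; Id = 19000 } -MaxEvents 1
$latest.Message   # Look for IsLowDiskMode = true/false for the server endpoint
```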
storage File Sync Troubleshoot Cloud Tiering https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/file-sync/file-sync-troubleshoot-cloud-tiering.md
description: Troubleshoot common issues with cloud tiering in an Azure File Sync
Previously updated : 7/28/2022 Last updated : 4/12/2023
There are two main classes of failures that can happen via either failure path:
  - *Inaccessible Azure file share*. This failure typically happens when you delete the Azure file share while it's still a cloud endpoint in a sync group.
  - *Inaccessible storage account*. This failure typically happens when you delete the storage account while it still has an Azure file share that is a cloud endpoint in a sync group.
- Server failures
- - *Azure File Sync file system filter (StorageSync.sys) is not loaded*. In order to respond to tiering/recall requests, the Azure File Sync file system filter must be loaded. The filter not being loaded can happen for several reasons, but the most common reason is that an administrator unloaded it manually. The Azure File Sync file system filter must be loaded at all times for Azure File Sync to properly function.
+ - *Azure File Sync file system filter (StorageSync.sys) isn't loaded*. In order to respond to tiering/recall requests, the Azure File Sync file system filter must be loaded. The filter not being loaded can happen for several reasons, but the most common reason is that an administrator unloaded it manually. The Azure File Sync file system filter must be loaded at all times for Azure File Sync to properly function.
- *Missing, corrupt, or otherwise broken reparse point*. A reparse point is a special data structure on a file that consists of two parts:
- 1. A reparse tag, which indicates to the operating system that the Azure File Sync file system filter (StorageSync.sys) may need to do some action on IO to the file.
+ 1. A reparse tag, which indicates to the operating system that the Azure File Sync file system filter (StorageSync.sys) might need to do some action on IO to the file.
2. Reparse data, which indicates to the file system filter the URI of the file on the associated cloud endpoint (the Azure file share). The most common way a reparse point could become corrupted is if an administrator attempts to modify either the tag or its data.
There are two main classes of failures that can happen via either failure path:
The following sections indicate how to troubleshoot cloud tiering issues and determine if an issue is a cloud storage issue or a server issue.

## How to monitor tiering activity on a server
-To monitor tiering activity on a server, use Event ID 9003, 9016 and 9029 in the Telemetry event log (located under Applications and Services\Microsoft\FileSync\Agent in Event Viewer).
+To monitor tiering activity on a server, use Event ID 9003, 9016, and 9029 in the Telemetry event log (located under `Applications and Services\Microsoft\FileSync\Agent` in Event Viewer).
- Event ID 9003 provides error distribution for a server endpoint. For example, Total Error Count, ErrorCode, etc. Note, one event is logged per error code.
- Event ID 9016 provides ghosting results for a volume. For example, Free space percent, Number of files ghosted in session, Number of files failed to ghost, etc.
- Event ID 9029 provides ghosting session information for a server endpoint. For example, Number of files attempted in the session, Number of files tiered in the session, Number of files already tiered, etc.

## How to monitor recall activity on a server
-To monitor recall activity on a server, use Event ID 9005, 9006, 9009 and 9059 in the Telemetry event log (located under Applications and Services\Microsoft\FileSync\Agent in Event Viewer).
+To monitor recall activity on a server, use Event ID 9005, 9006, 9009, and 9059 in the Telemetry event log (located under Applications and Services\Microsoft\FileSync\Agent in Event Viewer).
- Event ID 9005 provides recall reliability for a server endpoint. For example, Total unique files accessed, Total unique files with failed access, etc.
- Event ID 9006 provides recall error distribution for a server endpoint. For example, Total Failed Requests, ErrorCode, etc. Note, one event is logged per error code.
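To get a quick feel for recent tiering and recall activity without opening Event Viewer, you could summarize these event IDs with a sketch like the one below. The telemetry channel name is an assumption based on the Event Viewer path and should be confirmed on your server.

```powershell
# Sketch: count tiering (9003, 9016, 9029) and recall (9005, 9006, 9009, 9059) telemetry events
# from the last 24 hours, grouped by event ID. The log name is assumed; verify it with
# Get-WinEvent -ListLog "*FileSync*" if it differs on your server.
Get-WinEvent -FilterHashtable @{
    LogName   = 'Microsoft-FileSync-Agent/Telemetry'
    Id        = 9003, 9016, 9029, 9005, 9006, 9009, 9059
    StartTime = (Get-Date).AddHours(-24)
} | Group-Object Id | Select-Object Name, Count
```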
If files fail to tier to Azure Files:
|--|--|--|--|--|
| 0x80c86045 | -2134351803 | ECS_E_INITIAL_UPLOAD_PENDING | The file failed to tier because the initial upload is in progress. | No action required. The file will be tiered once the initial upload completes. |
| 0x80c86043 | -2134351805 | ECS_E_GHOSTING_FILE_IN_USE | The file failed to tier because it's in use. | No action required. The file will be tiered when it's no longer in use. |
-| 0x80c80241 | -2134375871 | ECS_E_GHOSTING_EXCLUDED_BY_SYNC | The file failed to tier because it's excluded by sync. | No action required. Files in the sync exclusion list cannot be tiered. |
-| 0x80c86042 | -2134351806 | ECS_E_GHOSTING_FILE_NOT_FOUND | The file failed to tier because it was not found on the server. | No action required. If the error persists, check if the file exists on the server. |
+| 0x80c80241 | -2134375871 | ECS_E_GHOSTING_EXCLUDED_BY_SYNC | The file failed to tier because it's excluded by sync. | No action required. Files in the sync exclusion list can't be tiered. |
+| 0x80c86042 | -2134351806 | ECS_E_GHOSTING_FILE_NOT_FOUND | The file failed to tier because it wasn't found on the server. | No action required. If the error persists, check if the file exists on the server. |
| 0x80c83053 | -2134364077 | ECS_E_CREATE_SV_FILE_DELETED | The file failed to tier because it was deleted in the Azure file share. | No action required. The file should be deleted on the server when the next download sync session runs. |
| 0x80c8600e | -2134351858 | ECS_E_AZURE_SERVER_BUSY | The file failed to tier due to a network issue. | No action required. If the error persists, check network connectivity to the Azure file share. |
| 0x80072ee7 | -2147012889 | WININET_E_NAME_NOT_RESOLVED | The file failed to tier due to a network issue. | No action required. If the error persists, check network connectivity to the Azure file share. |
If files fail to tier to Azure Files:
| 0x80c83007 | -2134364153 | ECS_E_STORAGE_ERROR | The file failed to tier due to an Azure storage issue. | If the error persists, open a support request. |
| 0x800703e3 | -2147023901 | ERROR_OPERATION_ABORTED | The file failed to tier because it was recalled at the same time. | No action required. The file will be tiered when the recall completes and the file is no longer in use. |
| 0x80c80264 | -2134375836 | ECS_E_GHOSTING_FILE_NOT_SYNCED | The file failed to tier because it has not synced to the Azure file share. | No action required. The file will tier once it has synced to the Azure file share. |
-| 0x80070001 | -2147942401 | ERROR_INVALID_FUNCTION | The file failed to tier because the cloud tiering filter driver (storagesync.sys) is not running. | To resolve this issue, open an elevated command prompt and run the following command: `fltmc load storagesync`<br>If the Azure File Sync filter driver fails to load when running the fltmc command, uninstall the Azure File Sync agent, restart the server and reinstall the Azure File Sync agent. |
+| 0x80070001 | -2147942401 | ERROR_INVALID_FUNCTION | The file failed to tier because the cloud tiering filter driver (storagesync.sys) isn't running. | To resolve this issue, open an elevated command prompt and run the following command: `fltmc load storagesync`<br>If the Azure File Sync filter driver fails to load when running the `fltmc` command, uninstall the Azure File Sync agent, restart the server, and reinstall the Azure File Sync agent. |
| 0x80070070 | -2147024784 | ERROR_DISK_FULL | The file failed to tier due to insufficient disk space on the volume where the server endpoint is located. | To resolve this issue, free at least 100 MiB of disk space on the volume where the server endpoint is located. |
| 0x80070490 | -2147023728 | ERROR_NOT_FOUND | The file failed to tier because it has not synced to the Azure file share. | No action required. The file will tier once it has synced to the Azure file share. |
-| 0x80c80262 | -2134375838 | ECS_E_GHOSTING_UNSUPPORTED_RP | The file failed to tier because it's an unsupported reparse point. | If the file is a Data Deduplication reparse point, follow the steps in the [planning guide](file-sync-planning.md#data-deduplication) to enable Data Deduplication support. Files with reparse points other than Data Deduplication are not supported and will not be tiered. |
+| 0x80c80262 | -2134375838 | ECS_E_GHOSTING_UNSUPPORTED_RP | The file failed to tier because it's an unsupported reparse point. | If the file is a Data Deduplication reparse point, follow the steps in the [planning guide](file-sync-planning.md#data-deduplication) to enable Data Deduplication support. Files with reparse points other than Data Deduplication aren't supported and won't be tiered. |
| 0x80c83052 | -2134364078 | ECS_E_CREATE_SV_STREAM_ID_MISMATCH | The file failed to tier because it has been modified. | No action required. The file will tier once the modified file has synced to the Azure file share. |
| 0x80c80269 | -2134375831 | ECS_E_GHOSTING_REPLICA_NOT_FOUND | The file failed to tier because it has not synced to the Azure file share. | No action required. The file will tier once it has synced to the Azure file share. |
| 0x80072ee2 | -2147012894 | WININET_E_TIMEOUT | The file failed to tier due to a network issue. | No action required. If the error persists, check network connectivity to the Azure file share. |
| 0x80c80017 | -2134376425 | ECS_E_SYNC_OPLOCK_BROKEN | The file failed to tier because it has been modified. | No action required. The file will tier once the modified file has synced to the Azure file share. |
| 0x800705aa | -2147023446 | ERROR_NO_SYSTEM_RESOURCES | The file failed to tier due to insufficient system resources. | If the error persists, investigate which application or kernel-mode driver is exhausting system resources. |
| 0x8e5e03fe | -1906441218 | JET_errDiskIO | The file failed to tier due to an I/O error when writing to the cloud tiering database. | If the error persists, run chkdsk on the volume and check the storage hardware. |
-| 0x8e5e0442 | -1906441150 | JET_errInstanceUnavailable | The file failed to tier because the cloud tiering database is not running. | To resolve this issue, restart the FileSyncSvc service or server. If the error persists, run chkdsk on the volume and check the storage hardware. |
-| 0x80C80285 | -2134375803 | ECS_E_GHOSTING_SKIPPED_BY_CUSTOM_EXCLUSION_LIST | The file cannot be tiered because the file type is excluded from tiering. | To tier files with this file type, modify the GhostingExclusionList registry setting which is located under HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Azure\StorageSync. |
+| 0x8e5e0442 | -1906441150 | JET_errInstanceUnavailable | The file failed to tier because the cloud tiering database isn't running. | To resolve this issue, restart the FileSyncSvc service or server. If the error persists, run chkdsk on the volume and check the storage hardware. |
+| 0x80C80285 | -2134375803 | ECS_E_GHOSTING_SKIPPED_BY_CUSTOM_EXCLUSION_LIST | The file can't be tiered because the file type is excluded from tiering. | To tier files with this file type, modify the GhostingExclusionList registry setting which is located under HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Azure\StorageSync. |
| 0x80C86050 | -2134351792 | ECS_E_REPLICA_NOT_READY_FOR_TIERING | The file failed to tier because the current sync mode is initial upload or reconciliation. | No action required. The file will be tiered once sync completes initial upload or reconciliation. |

## How to troubleshoot files that fail to be recalled
If files fail to be recalled:
| HRESULT | HRESULT (decimal) | Error string | Issue | Remediation |
|--|--|--|--|--|
-| 0x80070079 | -2147942521 | ERROR_SEM_TIMEOUT | The file failed to recall due to an I/O timeout. This issue can occur for several reasons: server resource constraints, poor network connectivity or an Azure storage issue (for example, throttling). | No action required. If the error persists for several hours, please open a support case. |
+| 0x80070079 | -2147942521 | ERROR_SEM_TIMEOUT | The file failed to recall due to an I/O timeout. This issue can occur for several reasons: server resource constraints, poor network connectivity, or an Azure storage issue (for example, throttling). | No action required. If the error persists for several hours, please open a support case. |
| 0x80070036 | -2147024842 | ERROR_NETWORK_BUSY | The file failed to recall due to a network issue. | If the error persists, check network connectivity to the Azure file share. |
-| 0x80c80037 | -2134376393 | ECS_E_SYNC_SHARE_NOT_FOUND | The file failed to recall because the server endpoint was deleted. | To resolve this issue, see [Tiered files are not accessible on the server after deleting a server endpoint](?tabs=portal1%252cazure-portal#tiered-files-are-not-accessible-on-the-server-after-deleting-a-server-endpoint). |
+| 0x80c80037 | -2134376393 | ECS_E_SYNC_SHARE_NOT_FOUND | The file failed to recall because the server endpoint was deleted. | To resolve this issue, see [Tiered files aren't accessible on the server after deleting a server endpoint](?tabs=portal1%252cazure-portal#tiered-files-are-not-accessible-on-the-server-after-deleting-a-server-endpoint). |
| 0x80070005 | -2147024891 | ERROR_ACCESS_DENIED | The file failed to recall due to an access denied error. This issue can occur if the firewall and virtual network settings on the storage account are enabled and the server does not have access to the storage account. | To resolve this issue, add the Server IP address or virtual network by following the steps documented in the [Configure firewall and virtual network settings](file-sync-deployment-guide.md?tabs=azure-portal#optional-configure-firewall-and-virtual-network-settings) section in the deployment guide. |
| 0x80c86002 | -2134351870 | ECS_E_AZURE_RESOURCE_NOT_FOUND | The file failed to recall because it's not accessible in the Azure file share. | To resolve this issue, verify the file exists in the Azure file share. If the file exists in the Azure file share, upgrade to the latest Azure File Sync [agent version](file-sync-release-notes.md#supported-versions). |
| 0x80c8305f | -2134364065 | ECS_E_EXTERNAL_STORAGE_ACCOUNT_AUTHORIZATION_FAILED | The file failed to recall due to authorization failure to the storage account. | To resolve this issue, verify [Azure File Sync has access to the storage account](file-sync-troubleshoot-sync-errors.md?tabs=portal1%252cazure-portal#troubleshoot-rbac). |
-| 0x80c86030 | -2134351824 | ECS_E_AZURE_FILE_SHARE_NOT_FOUND | The file failed to recall because the Azure file share is not accessible. | Verify the file share exists and is accessible. If the file share was deleted and recreated, perform the steps documented in the [Sync failed because the Azure file share was deleted and recreated](file-sync-troubleshoot-sync-errors.md?tabs=portal1%252cazure-portal#-2134375810) section to delete and recreate the sync group. |
+| 0x80c86030 | -2134351824 | ECS_E_AZURE_FILE_SHARE_NOT_FOUND | The file failed to recall because the Azure file share isn't accessible. | Verify the file share exists and is accessible. If the file share was deleted and recreated, perform the steps documented in the [Sync failed because the Azure file share was deleted and recreated](file-sync-troubleshoot-sync-errors.md?tabs=portal1%252cazure-portal#-2134375810) section to delete and recreate the sync group. |
| 0x800705aa | -2147023446 | ERROR_NO_SYSTEM_RESOURCES | The file failed to recall due to insufficient system resources. | If the error persists, investigate which application or kernel-mode driver is exhausting system resources. |
| 0x8007000e | -2147024882 | ERROR_OUTOFMEMORY | The file failed to recall due to insufficient memory. | If the error persists, investigate which application or kernel-mode driver is causing the low memory condition. |
-| 0x80070070 | -2147024784 | ERROR_DISK_FULL | The file failed to recall due to insufficient disk space. | To resolve this issue, free up space on the volume by moving files to a different volume, increase the size of the volume, or force files to tier by using the Invoke-StorageSyncCloudTiering cmdlet. |
+| 0x80070070 | -2147024784 | ERROR_DISK_FULL | The file failed to recall due to insufficient disk space. | To resolve this issue, free up space on the volume by moving files to a different volume, increase the size of the volume, or force files to tier by using the `Invoke-StorageSyncCloudTiering` cmdlet. |
| 0x80072f8f | -2147012721 | WININET_E_DECODING_FAILED | The file failed to recall because the server was unable to decode the response from the Azure File Sync service. | This error typically occurs if a network proxy is modifying the response from the Azure File Sync service. Please check your proxy configuration. |
| 0x80090352 | -2146892974 | SEC_E_ISSUING_CA_UNTRUSTED | The file failed to recall because your organization is using a TLS terminating proxy or a malicious entity is intercepting the traffic between your server and the Azure File Sync service. | If you are certain this is expected (because your organization is using a TLS terminating proxy), follow the steps documented for error [CERT_E_UNTRUSTEDROOT](file-sync-troubleshoot-sync-errors.md#-2146762487) to resolve this issue. |
| 0x80c86047 | -2134351801 | ECS_E_AZURE_SHARE_SNAPSHOT_NOT_FOUND | The file failed to recall because it's referencing a version of the file which no longer exists in the Azure file share. | This issue can occur if the tiered file was restored from a backup of the Windows Server. To resolve this issue, restore the file from a snapshot in the Azure file share. |

## Tiered files are not accessible on the server after deleting a server endpoint
-Tiered files on a server will become inaccessible if the files are not recalled prior to deleting a server endpoint.
+Tiered files on a server will become inaccessible if the files aren't recalled prior to deleting a server endpoint.
-Errors logged if tiered files are not accessible
+Errors logged if tiered files aren't accessible
- When syncing a file, error code -2147942467 (0x80070043 - ERROR_BAD_NET_NAME) is logged in the ItemResults event log
- When recalling a file, error code -2134376393 (0x80c80037 - ECS_E_SYNC_SHARE_NOT_FOUND) is logged in the RecallResults event log

Restoring access to your tiered files is possible if the following conditions are met:

- Server endpoint was deleted within past 30 days
-- Cloud endpoint was not deleted
-- File share was not deleted
-- Sync group was not deleted
+- Cloud endpoint wasn't deleted
+- File share wasn't deleted
+- Sync group wasn't deleted
If the above conditions are met, you can restore access to the files on the server by recreating the server endpoint at the same path on the server within the same sync group within 30 days.
-If the above conditions are not met, restoring access is not possible as these tiered files on the server are now orphaned. Follow the instructions below to remove the orphaned tiered files.
+If the above conditions aren't met, restoring access isn't possible as these tiered files on the server are now orphaned. Follow these instructions to remove the orphaned tiered files.
**Notes**

-- When tiered files are not accessible on the server, the full file should still be accessible if you access the Azure file share directly.
+- When tiered files aren't accessible on the server, the full file should still be accessible if you access the Azure file share directly.
- To prevent orphaned tiered files in the future, follow the steps documented in [Remove a server endpoint](file-sync-server-endpoint-delete.md) when deleting a server endpoint.

<a id="get-orphaned"></a>**How to get the list of orphaned tiered files**
Import-Module "C:\Program Files\Azure\StorageSyncAgent\StorageSync.Management.Se
$orphanFiles = Get-StorageSyncOrphanedTieredFiles -path <server endpoint path>
$orphanFiles.OrphanedTieredFiles > OrphanTieredFiles.txt
```
-2. Save the OrphanTieredFiles.txt output file in case files need to be restored from backup after they are deleted.
+2. Save the OrphanTieredFiles.txt output file in case files need to be restored from backup after they're deleted.
<a id="remove-orphaned"></a>**How to remove orphaned tiered files**
$orphanFiles.OrphanedTieredFiles > OrphanTieredFiles.txt
This option deletes the orphaned tiered files on the Windows Server but requires removing the server endpoint if it exists due to recreation after 30 days or is connected to a different sync group. File conflicts will occur if files are updated on the Windows Server or Azure file share before the server endpoint is recreated.

1. Back up the Azure file share and server endpoint location.
-2. Remove the server endpoint in the sync group (if exists) by following the steps documented in [Remove a server endpoint](file-sync-server-endpoint-delete.md).
+2. Remove the server endpoint in the sync group (if it exists) by following the steps documented in [Remove a server endpoint](file-sync-server-endpoint-delete.md).
> [!Warning]
-> If the server endpoint is not removed prior to using the Remove-StorageSyncOrphanedTieredFiles cmdlet, deleting the orphaned tiered file on the server will delete the full file in the Azure file share.
+> If the server endpoint isn't removed prior to using the `Remove-StorageSyncOrphanedTieredFiles` cmdlet, deleting the orphaned tiered file on the server will delete the full file in the Azure file share.
3. Run the following PowerShell commands to list orphaned tiered files:
Import-Module "C:\Program Files\Azure\StorageSyncAgent\StorageSync.Management.Se
$orphanFiles = Get-StorageSyncOrphanedTieredFiles -path <server endpoint path>
$orphanFiles.OrphanedTieredFiles > OrphanTieredFiles.txt
```
-4. Save the OrphanTieredFiles.txt output file in case files need to be restored from backup after they are deleted.
+4. Save the OrphanTieredFiles.txt output file in case files need to be restored from backup after they're deleted.
5. Run the following PowerShell commands to delete orphaned tiered files:

```powershell
$orphanFilesRemoved.OrphanedTieredFiles > DeletedOrphanFiles.txt
```

**Notes**

- Tiered files modified on the server that are not synced to the Azure file share will be deleted.
-- Tiered files that are accessible (not orphan) will not be deleted.
+- Tiered files that are accessible (not orphan) won't be deleted.
- Non-tiered files will remain on the server.

6. Optional: Recreate the server endpoint if deleted in step 3.
storage File Sync Troubleshoot Sync Errors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/file-sync/file-sync-troubleshoot-sync-errors.md
description: Troubleshoot common issues with monitoring sync health and resolvin
Previously updated : 6/2/2022 Last updated : 4/12/2023
This issue is expected if you create a cloud endpoint and use an Azure file shar
### <a id="broken-sync"></a>How do I monitor sync health?

# [Portal](#tab/portal1)
-Within each sync group, you can drill down into its individual server endpoints to see the status of the last completed sync sessions. A green Health column and a Files Not Syncing value of 0 indicate that sync is working as expected. If not, see below for a list of common sync errors and how to handle files that are not syncing.
+Within each sync group, you can drill down into its individual server endpoints to see the status of the last completed sync sessions. A green Health column and a Files Not Syncing value of 0 indicate that sync is working as expected. If not, see below for a list of common sync errors and how to handle files that aren't syncing.
![A screenshot of the Azure portal](media/storage-sync-files-troubleshoot/portal-sync-health.png)

# [Server](#tab/server)
-Go to the server's telemetry logs, which can be found in the Event Viewer at `Applications and Services Logs\Microsoft\FileSync\Agent\Telemetry`. Event 9102 corresponds to a completed sync session; for the latest status of sync, look for the most recent event with ID 9102. SyncDirection tells you if this session was an upload or download. If the `HResult` is 0, then the sync session was successful. A non-zero `HResult` means that there was an error during sync; see below for a list of common errors. If the PerItemErrorCount is greater than 0, then some files or folders did not sync properly. It is possible to have an `HResult` of 0 but a PerItemErrorCount that is greater than 0.
+Go to the server's telemetry logs, which can be found in the Event Viewer at `Applications and Services Logs\Microsoft\FileSync\Agent\Telemetry`. Event 9102 corresponds to a completed sync session; for the latest status of sync, look for the most recent event with ID 9102. `SyncDirection` tells you if this session was an upload or download. If the `HResult` is 0, then the sync session was successful. A non-zero `HResult` means that there was an error during sync; see below for a list of common errors. If the `PerItemErrorCount` is greater than 0, then some files or folders didn't sync properly. It's possible to have an `HResult` of 0 but a `PerItemErrorCount` that is greater than 0.
-Below is an example of a successful upload. For the sake of brevity, only some of the values contained in each 9102 event are listed below.
+Below is an example of a successful upload. For the sake of brevity, only some of the values contained in each 9102 event are listed.
```
Replica Sync session completed.
PerItemErrorCount: 0,
TransferredFiles: 0, TransferredBytes: 0, FailedToTransferFiles: 0, FailedToTransferBytes: 0.
```
-Sometimes sync sessions fail overall or have a non-zero PerItemErrorCount but still make forward progress, with some files syncing successfully. Progress can be determined by looking into the *Applied* fields (AppliedFileCount, AppliedDirCount, AppliedTombstoneCount, and AppliedSizeBytes). These fields describe how much of the session is succeeding. If you see multiple sync sessions in a row that are failing but have an increasing *Applied* count, then you should give sync time to try again before opening a support ticket.
+Sometimes sync sessions fail overall or have a non-zero `PerItemErrorCount` but still make forward progress, with some files syncing successfully. Progress can be determined by looking into the *Applied* fields (`AppliedFileCount`, `AppliedDirCount`, `AppliedTombstoneCount`, and `AppliedSizeBytes`). These fields describe how much of the session is succeeding. If you see multiple sync sessions in a row that are failing but have an increasing *Applied* count, then you should give sync time to try again before opening a support ticket.
### How do I monitor the progress of a current sync session?

# [Portal](#tab/portal1)
-Within your sync group, go to the server endpoint in question and look at the Sync Activity section to see the count of files uploaded or downloaded in the current sync session. Keep in mind that this status will be delayed by about 5 minutes, and if your sync session is small enough to be completed within this period, it may not be reported in the portal.
+Within your sync group, go to the server endpoint in question and look at the Sync Activity section to see the count of files uploaded or downloaded in the current sync session. Keep in mind that this status will be delayed by about 5 minutes. If your sync session is small enough to be completed within this period, it might not be reported in the portal.
# [Server](#tab/server)
-Look at the most recent 9302 event in the telemetry log on the server (in the Event Viewer, go to Applications and Services Logs\Microsoft\FileSync\Agent\Telemetry). This event indicates the state of the current sync session. TotalItemCount denotes how many files are to be synced, AppliedItemCount the number of files that have been synced so far, and PerItemErrorCount the number of files that are failing to sync (see below for how to deal with this).
+Look at the most recent 9302 event in the telemetry log on the server (in the Event Viewer, go to Applications and Services Logs\Microsoft\FileSync\Agent\Telemetry). This event indicates the state of the current sync session. `TotalItemCount` denotes how many files are to be synced, `AppliedItemCount` the number of files that have been synced so far, and `PerItemErrorCount` the number of files that are failing to sync (see below for how to deal with this).
```
Replica Sync Progress.
For each server in a given sync group, make sure:
Look at the completed sync sessions, which are marked by 9102 events in the telemetry event log for each server (in the Event Viewer, go to `Applications and Services Logs\Microsoft\FileSync\Agent\Telemetry`).

1. On any given server, you want to make sure the latest upload and download sessions completed successfully. To do this, check that the `HResult` and `PerItemErrorCount` are 0 for both upload and download (the `SyncDirection` field indicates if a given session is an upload or download session). Note that if you do not see a recently completed sync session, it is likely a sync session is currently in progress, which is to be expected if you just added or modified a large amount of data.
-2. When a server is fully up to date with the cloud and has no changes to sync in either direction, you will see empty sync sessions. These are indicated by upload and download events in which all the Sync* fields (SyncFileCount, SyncDirCount, SyncTombstoneCount, and SyncSizeBytes) are zero, meaning there was nothing to sync. Note that these empty sync sessions may not occur on high-churn servers as there is always something new to sync. If there is no sync activity, they should occur every 30 minutes.
+2. When a server is fully up to date with the cloud and has no changes to sync in either direction, you will see empty sync sessions. These are indicated by upload and download events in which all the Sync* fields (`SyncFileCount`, `SyncDirCount`, `SyncTombstoneCount`, and `SyncSizeBytes`) are zero, meaning there was nothing to sync. Note that these empty sync sessions might not occur on high-churn servers as there is always something new to sync. If there is no sync activity, they should occur every 30 minutes.
3. If all servers are up to date with the cloud, meaning their recent upload and download sessions are empty sync sessions, you can say with reasonable certainty that the system as a whole is in sync.
-If you made changes directly in your Azure file share, Azure File Sync will not detect these changes until change enumeration runs, which happens once every 24 hours. It is possible that a server will say it is up to date with the cloud when it is in fact missing recent changes made directly in the Azure file share.
+If you made changes directly in your Azure file share, Azure File Sync will not detect these changes until change enumeration runs, which happens once every 24 hours. It's possible that a server will say it is up to date with the cloud when it is in fact missing recent changes made directly in the Azure file share.
### How do I see if there are specific files or folders that are not syncing?
-If your PerItemErrorCount on the server or Files Not Syncing count in the portal are greater than 0 for any given sync session, that means some items are failing to sync. Files and folders can have characteristics that prevent them from syncing. These characteristics can be persistent and require explicit action to resume sync, for example removing unsupported characters from the file or folder name. They can also be transient, meaning the file or folder will automatically resume sync; for example, files with open handles will automatically resume sync when the file is closed. When the Azure File Sync engine detects such a problem, an error log is produced that can be parsed to list the items currently not syncing properly.
+If your `PerItemErrorCount` on the server or Files Not Syncing count in the portal are greater than 0 for any given sync session, that means some items are failing to sync. Files and folders can have characteristics that prevent them from syncing. These characteristics can be persistent and require explicit action to resume sync, for example removing unsupported characters from the file or folder name. They can also be transient, meaning the file or folder will automatically resume sync; for example, files with open handles will automatically resume sync when the file is closed. When the Azure File Sync engine detects such a problem, an error log is produced that can be parsed to list the items currently not syncing properly.
-To see these errors, run the **FileSyncErrorsReport.ps1** PowerShell script (located in the agent installation directory of the Azure File Sync agent) to identify files that failed to sync because of open handles, unsupported characters, or other issues. The ItemPath field tells you the location of the file in relation to the root sync directory. See the list of common sync errors below for remediation steps.
+To see these errors, run the **FileSyncErrorsReport.ps1** PowerShell script (located in the agent installation directory of the Azure File Sync agent) to identify files that failed to sync because of open handles, unsupported characters, or other issues. The `ItemPath` field tells you the location of the file in relation to the root sync directory. See the list of common sync errors for remediation steps.
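For example, from an elevated PowerShell session on the server, assuming the agent's default installation path:

```powershell
# Sketch: run the per-item error report that ships with the Azure File Sync agent.
# -ReportAllErrors widens the report to all sync sessions instead of only the last completed one.
& "C:\Program Files\Azure\StorageSyncAgent\FileSyncErrorsReport.ps1" -ReportAllErrors
```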
> [!Note]
-> If the FileSyncErrorsReport.ps1 script returns "There were no file errors found" or does not list per-item errors for the sync group, the cause is either:
+> If the FileSyncErrorsReport.ps1 script returns "There were no file errors found" or doesn't list per-item errors for the sync group, the cause is either:
>
->- Cause 1: The last completed sync session did not have per-item errors. The portal should be updated soon to show 0 Files Not Syncing. By default, the FileSyncErrorsReport.ps1 script will only show per-item errors for the last completed sync session. To view per-item errors for all sync sessions, use the -ReportAllErrors parameter.
-> - Check the most recent [Event ID 9102](?tabs=server%252cazure-portal#broken-sync) in the Telemetry event log to confirm the PerItemErrorCount is 0.
+>- Cause 1: The last completed sync session didn't have per-item errors. The portal should be updated soon to show 0 Files Not Syncing. By default, the FileSyncErrorsReport.ps1 script will only show per-item errors for the last completed sync session. To view per-item errors for all sync sessions, use the `-ReportAllErrors` parameter.
+> - Check the most recent [Event ID 9102](?tabs=server%252cazure-portal#broken-sync) in the Telemetry event log to confirm the `PerItemErrorCount` is 0.
>
->- Cause 2: The ItemResults event log on the server wrapped due to too many per-item errors and the event log no longer contains errors for this sync group.
-> - To prevent this issue, increase the ItemResults event log size. The ItemResults event log can be found under "Applications and Services Logs\Microsoft\FileSync\Agent" in Event Viewer.
+>- Cause 2: The `ItemResults` event log on the server wrapped due to too many per-item errors and the event log no longer contains errors for this sync group.
+> - To prevent this issue, increase the `ItemResults` event log size. The `ItemResults` event log can be found under "Applications and Services Logs\Microsoft\FileSync\Agent" in Event Viewer.
## Sync errors
To see these errors, run the **FileSyncErrorsReport.ps1** PowerShell script (loc
| HRESULT | HRESULT (decimal) | Error string | Issue | Remediation |
|--|--|--|--|--|
-| 0x80070043 | -2147942467 | ERROR_BAD_NET_NAME | The tiered file on the server is not accessible. This issue occurs if the tiered file was not recalled prior to deleting a server endpoint. | To resolve this issue, see [Tiered files are not accessible on the server after deleting a server endpoint](file-sync-troubleshoot-cloud-tiering.md#tiered-files-are-not-accessible-on-the-server-after-deleting-a-server-endpoint). |
-| 0x80c80207 | -2134375929 | ECS_E_SYNC_CONSTRAINT_CONFLICT | The file or directory change cannot be synced yet because a dependent folder is not yet synced. This item will sync after the dependent changes are synced. | No action required. If the error persists for several days, use the FileSyncErrorsReport.ps1 PowerShell script to determine why the dependent folder is not yet synced. |
-| 0x80C8028A | -2134375798 | ECS_E_SYNC_CONSTRAINT_CONFLICT_ON_FAILED_DEPENDEE | The file or directory change cannot be synced yet because a dependent folder is not yet synced. This item will sync after the dependent changes are synced. | No action required. If the error persists for several days, use the FileSyncErrorsReport.ps1 PowerShell script to determine why the dependent folder is not yet synced. |
-| 0x80c80284 | -2134375804 | ECS_E_SYNC_CONSTRAINT_CONFLICT_SESSION_FAILED | The file or directory change cannot be synced yet because a dependent folder is not yet synced and the sync session failed. This item will sync after the dependent changes are synced. | No action required. If the error persists, investigate the sync session failure. |
+| 0x80070043 | -2147942467 | ERROR_BAD_NET_NAME | The tiered file on the server isn't accessible. This issue occurs if the tiered file was not recalled prior to deleting a server endpoint. | To resolve this issue, see [Tiered files are not accessible on the server after deleting a server endpoint](file-sync-troubleshoot-cloud-tiering.md#tiered-files-are-not-accessible-on-the-server-after-deleting-a-server-endpoint). |
+| 0x80c80207 | -2134375929 | ECS_E_SYNC_CONSTRAINT_CONFLICT | The file or directory change can't be synced yet because a dependent folder isn't yet synced. This item will sync after the dependent changes are synced. | No action required. If the error persists for several days, use the FileSyncErrorsReport.ps1 PowerShell script to determine why the dependent folder isn't yet synced. |
+| 0x80C8028A | -2134375798 | ECS_E_SYNC_CONSTRAINT_CONFLICT_ON_FAILED_DEPENDEE | The file or directory change can't be synced yet because a dependent folder isn't yet synced. This item will sync after the dependent changes are synced. | No action required. If the error persists for several days, use the FileSyncErrorsReport.ps1 PowerShell script to determine why the dependent folder isn't yet synced. |
+| 0x80c80284 | -2134375804 | ECS_E_SYNC_CONSTRAINT_CONFLICT_SESSION_FAILED | The file or directory change can't be synced yet because a dependent folder isn't yet synced and the sync session failed. This item will sync after the dependent changes are synced. | No action required. If the error persists, investigate the sync session failure. |
| 0x8007007b | -2147024773 | ERROR_INVALID_NAME | The file or directory name is invalid. | Rename the file or directory in question. See [Handling unsupported characters](?tabs=portal1%252cazure-portal#handling-unsupported-characters) for more information. |
| 0x80c80255 | -2134375851 | ECS_E_XSMB_REST_INCOMPATIBILITY | The file or directory name is invalid. | Rename the file or directory in question. See [Handling unsupported characters](?tabs=portal1%252cazure-portal#handling-unsupported-characters) for more information. |
-| 0x80c80018 | -2134376424 | ECS_E_SYNC_FILE_IN_USE | The file cannot be synced because it's in use. The file will be synced when it's no longer in use. | No action required. Azure File Sync creates a temporary VSS snapshot once a day on the server to sync files that have open handles. |
-| 0x80c8031d | -2134375651 | ECS_E_CONCURRENCY_CHECK_FAILED | The file has changed, but the change has not yet been detected by sync. Sync will recover after this change is detected. | No action required. |
-| 0x80070002 | -2147024894 | ERROR_FILE_NOT_FOUND | The file was deleted and sync is not aware of the change. | No action required. Sync will stop logging this error once change detection detects the file was deleted. |
-| 0x80070003 | -2147942403 | ERROR_PATH_NOT_FOUND | Deletion of a file or directory cannot be synced because the item was already deleted in the destination and sync is not aware of the change. | No action required. Sync will stop logging this error once change detection runs on the destination and sync detects the item was deleted. |
+| 0x80c80018 | -2134376424 | ECS_E_SYNC_FILE_IN_USE | The file can't be synced because it's in use. The file will be synced when it's no longer in use. | No action required. Azure File Sync creates a temporary VSS snapshot once a day on the server to sync files that have open handles. |
+| 0x80c8031d | -2134375651 | ECS_E_CONCURRENCY_CHECK_FAILED | The file has changed, but the change hasn't yet been detected by sync. Sync will recover after this change is detected. | No action required. |
+| 0x80070002 | -2147024894 | ERROR_FILE_NOT_FOUND | The file was deleted and sync isn't aware of the change. | No action required. Sync will stop logging this error once change detection detects the file was deleted. |
+| 0x80070003 | -2147942403 | ERROR_PATH_NOT_FOUND | Deletion of a file or directory can't be synced because the item was already deleted in the destination and sync isn't aware of the change. | No action required. Sync will stop logging this error once change detection runs on the destination and sync detects the item was deleted. |
| 0x80c80205 | -2134375931 | ECS_E_SYNC_ITEM_SKIP | The file or directory was skipped but will be synced during the next sync session. If this error is reported when downloading the item, the file or directory name is more than likely invalid. | No action required if this error is reported when uploading the file. If the error is reported when downloading the file, rename the file or directory in question. See [Handling unsupported characters](?tabs=portal1%252cazure-portal#handling-unsupported-characters) for more information. |
-| 0x800700B7 | -2147024713 | ERROR_ALREADY_EXISTS | Creation of a file or directory cannot be synced because the item already exists in the destination and sync is not aware of the change. | No action required. Sync will stop logging this error once change detection runs on the destination and sync is aware of this new item. |
-| 0x80c8603e | -2134351810 | ECS_E_AZURE_STORAGE_SHARE_SIZE_LIMIT_REACHED | The file cannot be synced because the Azure file share limit is reached. | To resolve this issue, see [You reached the Azure file share storage limit](?tabs=portal1%252cazure-portal#-2134351810) section in the troubleshooting guide. |
-| 0x80c83008 | -2134364152 | ECS_E_CANNOT_CREATE_AZURE_STAGED_FILE | The file cannot be synced because the Azure file share limit is reached. | To resolve this issue, see [You reached the Azure file share storage limit](?tabs=portal1%252cazure-portal#-2134351810) section in the troubleshooting guide. |
+| 0x800700B7 | -2147024713 | ERROR_ALREADY_EXISTS | Creation of a file or directory can't be synced because the item already exists in the destination and sync isn't aware of the change. | No action required. Sync will stop logging this error once change detection runs on the destination and sync is aware of this new item. |
+| 0x80c8603e | -2134351810 | ECS_E_AZURE_STORAGE_SHARE_SIZE_LIMIT_REACHED | The file can't be synced because the Azure file share limit is reached. | To resolve this issue, see [You reached the Azure file share storage limit](?tabs=portal1%252cazure-portal#-2134351810) section in the troubleshooting guide. |
+| 0x80c83008 | -2134364152 | ECS_E_CANNOT_CREATE_AZURE_STAGED_FILE | The file can't be synced because the Azure file share limit is reached. | To resolve this issue, see [You reached the Azure file share storage limit](?tabs=portal1%252cazure-portal#-2134351810) section in the troubleshooting guide. |
| 0x80c8027C | -2134375812 | ECS_E_ACCESS_DENIED_EFS | The file is encrypted by an unsupported solution (like NTFS EFS). | Decrypt the file and use a supported encryption solution. For a list of support solutions, see the [Encryption](file-sync-planning.md#encryption) section of the planning guide. |
| 0x80c80283 | -2134375805 | ECS_E_ACCESS_DENIED_DFSRRO | The file is located on a DFS-R read-only replication folder. | File is located on a DFS-R read-only replication folder. Azure File Sync doesn't support server endpoints on DFS-R read-only replication folders. See [planning guide](file-sync-planning.md#distributed-file-system-dfs) for more information. |
| 0x80070005 | -2147024891 | ERROR_ACCESS_DENIED | The file has a delete pending state. | No action required. File will be deleted once all open file handles are closed. |
-| 0x80c86044 | -2134351804 | ECS_E_AZURE_AUTHORIZATION_FAILED | The file cannot be synced because the firewall and virtual network settings on the storage account are enabled, and the server doesn't have access to the storage account. | Add the Server IP address or virtual network by following the steps documented in the [Configure firewall and virtual network settings](file-sync-deployment-guide.md?tabs=azure-portal#optional-configure-firewall-and-virtual-network-settings) section in the deployment guide. |
-| 0x80c80243 | -2134375869 | ECS_E_SECURITY_DESCRIPTOR_SIZE_TOO_LARGE | The file cannot be synced because the security descriptor size exceeds the 64 KiB limit. | To resolve this issue, remove access control entries (ACE) on the file to reduce the security descriptor size. |
-| 0x8000ffff | -2147418113 | E_UNEXPECTED | The file cannot be synced due to an unexpected error. | If the error persists for several days, please open a support case. |
-| 0x80070020 | -2147024864 | ERROR_SHARING_VIOLATION | The file cannot be synced because it's in use. The file will be synced when it's no longer in use. | No action required. |
+| 0x80c86044 | -2134351804 | ECS_E_AZURE_AUTHORIZATION_FAILED | The file can't be synced because the firewall and virtual network settings on the storage account are enabled, and the server doesn't have access to the storage account. | Add the Server IP address or virtual network by following the steps documented in the [Configure firewall and virtual network settings](file-sync-deployment-guide.md?tabs=azure-portal#optional-configure-firewall-and-virtual-network-settings) section in the deployment guide. |
+| 0x80c80243 | -2134375869 | ECS_E_SECURITY_DESCRIPTOR_SIZE_TOO_LARGE | The file can't be synced because the security descriptor size exceeds the 64 KiB limit. | To resolve this issue, remove access control entries (ACE) on the file to reduce the security descriptor size. |
+| 0x8000ffff | -2147418113 | E_UNEXPECTED | The file can't be synced due to an unexpected error. | If the error persists for several days, please open a support case. |
+| 0x80070020 | -2147024864 | ERROR_SHARING_VIOLATION | The file can't be synced because it's in use. The file will be synced when it's no longer in use. | No action required. |
| 0x80c80017 | -2134376425 | ECS_E_SYNC_OPLOCK_BROKEN | The file was changed during sync, so it needs to be synced again. | No action required. |
-| 0x80070017 | -2147024873 | ERROR_CRC | The file cannot be synced due to CRC error. This error can occur if a tiered file was not recalled prior to deleting a server endpoint or if the file is corrupt. | To resolve this issue, see [Tiered files are not accessible on the server after deleting a server endpoint](file-sync-troubleshoot-cloud-tiering.md#tiered-files-are-not-accessible-on-the-server-after-deleting-a-server-endpoint) to remove tiered files that are orphaned. If the error continues to occur after removing orphaned tiered files, run [chkdsk](/windows-server/administration/windows-commands/chkdsk) on the volume. |
-| 0x80c80200 | -2134375936 | ECS_E_SYNC_CONFLICT_NAME_EXISTS | The file cannot be synced because the maximum number of conflict files has been reached. Azure File Sync supports 100 conflict files per file. To learn more about file conflicts, see Azure File Sync [FAQ](../files/storage-files-faq.md?toc=/azure/storage/filesync/toc.json#afs-conflict-resolution). | To resolve this issue, reduce the number of conflict files. The file will sync once the number of conflict files is less than 100. |
-| 0x80c8027d | -2134375811 | ECS_E_DIRECTORY_RENAME_FAILED | Rename of a directory cannot be synced because files or folders within the directory have open handles. | No action required. The rename of the directory will be synced once all open file handles within the directory are closed. |
-| 0x800700de | -2147024674 | ERROR_BAD_FILE_TYPE | The tiered file on the server is not accessible because it's referencing a version of the file which no longer exists in the Azure file share. | This issue can occur if the tiered file was restored from a backup of the Windows Server. To resolve this issue, restore the file from a snapshot in the Azure file share. |
+| 0x80070017 | -2147024873 | ERROR_CRC | The file can't be synced due to CRC error. This error can occur if a tiered file was not recalled prior to deleting a server endpoint or if the file is corrupt. | To resolve this issue, see [Tiered files are not accessible on the server after deleting a server endpoint](file-sync-troubleshoot-cloud-tiering.md#tiered-files-are-not-accessible-on-the-server-after-deleting-a-server-endpoint) to remove tiered files that are orphaned. If the error continues to occur after removing orphaned tiered files, run [chkdsk](/windows-server/administration/windows-commands/chkdsk) on the volume. |
+| 0x80c80200 | -2134375936 | ECS_E_SYNC_CONFLICT_NAME_EXISTS | The file can't be synced because the maximum number of conflict files has been reached. Azure File Sync supports 100 conflict files per file. To learn more about file conflicts, see Azure File Sync [FAQ](../files/storage-files-faq.md?toc=/azure/storage/filesync/toc.json#afs-conflict-resolution). | To resolve this issue, reduce the number of conflict files. The file will sync once the number of conflict files is less than 100. |
+| 0x80c8027d | -2134375811 | ECS_E_DIRECTORY_RENAME_FAILED | Rename of a directory can't be synced because files or folders within the directory have open handles. | No action required. The rename of the directory will be synced once all open file handles within the directory are closed. |
+| 0x800700de | -2147024674 | ERROR_BAD_FILE_TYPE | The tiered file on the server isn't accessible because it's referencing a version of the file which no longer exists in the Azure file share. | This issue can occur if the tiered file was restored from a backup of the Windows Server. To resolve this issue, restore the file from a snapshot in the Azure file share. |
### Handling unsupported characters If the **FileSyncErrorsReport.ps1** PowerShell script shows per-item sync errors due to unsupported characters (error code 0x8007007b or 0x80c80255), you should remove or rename the characters at fault from the respective file names. PowerShell will likely print these characters as question marks or empty rectangles since most of these characters have no standard visual encoding.
The table below contains all of the unicode characters Azure File Sync does not
| **Error string** | ERROR_CANCELLED | | **Remediation required** | No |
-Sync sessions may fail for various reasons including the server being restarted or updated, VSS snapshots, etc. Although this error looks like it requires follow-up, it is safe to ignore this error unless it persists over a period of several hours.
+Sync sessions might fail for various reasons including the server being restarted or updated, VSS snapshots, etc. Although this error looks like it requires follow-up, it's safe to ignore this error unless it persists over a period of several hours.
<a id="-2147012889"></a>**A connection with the service could not be established.**
Sync sessions may fail for various reasons including the server being restarted
[!INCLUDE [storage-sync-files-bad-connection](../../../includes/storage-sync-files-bad-connection.md)] > [!Note]
-> Once network connectivity to the Azure File Sync service is restored, sync may not resume immediately. By default, Azure File Sync will initiate a sync session every 30 minutes if no changes are detected within the server endpoint location. To force a sync session, restart the Storage Sync Agent (FileSyncSvc) service or make a change to a file or directory within the server endpoint location.
+> Once network connectivity to the Azure File Sync service is restored, sync might not resume immediately. By default, Azure File Sync will initiate a sync session every 30 minutes if no changes are detected within the server endpoint location. To force a sync session, restart the Storage Sync Agent (FileSyncSvc) service or make a change to a file or directory within the server endpoint location.
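For example, here's a minimal sketch of forcing a sync session by restarting the agent service instead of touching a file in the server endpoint (run from an elevated PowerShell session on the registered server):
```powershell
# Restart the Storage Sync Agent service to trigger a new sync session.
Restart-Service -Name FileSyncSvc
```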
<a id="-2134376372"></a>**The user request was throttled by the service.**
This error typically occurs when a backup application creates a VSS snapshot and
| **Error string** | ECS_E_EXTERNAL_STORAGE_ACCOUNT_AUTHORIZATION_FAILED | | **Remediation required** | Yes |
-This error occurs because the Azure File Sync agent cannot access the Azure file share, which may be because the Azure file share or the storage account hosting it no longer exists. You can troubleshoot this error by working through the following steps:
+This error occurs because the Azure File Sync agent can't access the Azure file share, which might be because the Azure file share or the storage account hosting it no longer exists. You can troubleshoot this error by working through the following steps:
1. [Verify the storage account exists.](#troubleshoot-storage-account) 2. [Ensure the Azure file share exists.](#troubleshoot-azure-file-share) 3. [Ensure Azure File Sync has access to the storage account.](#troubleshoot-rbac) 4. [Verify the firewall and virtual network settings on the storage account are configured properly (if enabled)](file-sync-deployment-guide.md?tabs=azure-portal#optional-configure-firewall-and-virtual-network-settings)
-<a id="-2134351804"></a>**Sync failed because the request is not authorized to perform this operation.**
+<a id="-2134351804"></a>**Sync failed because the request isn't authorized to perform this operation.**
| Error | Code | |-|-|
This error occurs because the Azure File Sync agent cannot access the Azure file
| **Error string** | ECS_E_AZURE_AUTHORIZATION_FAILED | | **Remediation required** | Yes |
-This error occurs because the Azure File Sync agent is not authorized to access the Azure file share. You can troubleshoot this error by working through the following steps:
+This error occurs because the Azure File Sync agent isn't authorized to access the Azure file share. You can troubleshoot this error by working through the following steps:
1. [Verify the storage account exists.](#troubleshoot-storage-account) 2. [Ensure the Azure file share exists.](#troubleshoot-azure-file-share)
This error occurs because the Azure File Sync agent is not authorized to access
3. [Verify the firewall and virtual network settings on the storage account are configured properly (if enabled)](file-sync-deployment-guide.md?tabs=azure-portal#optional-configure-firewall-and-virtual-network-settings) > [!Note]
-> Once network connectivity to the Azure File Sync service is restored, sync may not resume immediately. By default, Azure File Sync will initiate a sync session every 30 minutes if no changes are detected within the server endpoint location. To force a sync session, restart the Storage Sync Agent (FileSyncSvc) service or make a change to a file or directory within the server endpoint location.
+> Once network connectivity to the Azure File Sync service is restored, sync might not resume immediately. By default, Azure File Sync will initiate a sync session every 30 minutes if no changes are detected within the server endpoint location. To force a sync session, restart the Storage Sync Agent (FileSyncSvc) service or make a change to a file or directory within the server endpoint location.
<a id="-2134364022"></a><a id="storage-unknown-error"></a>**An unknown error occurred while accessing the storage account.**
This error occurs because the storage account has a read-only [resource lock](..
This error occurs when there is a problem with the internal database used by Azure File Sync. When this issue occurs, create a support request and we will contact you to help you resolve this issue.
-<a id="-2134364053"></a>**The Azure File Sync agent version installed on the server is not supported.**
+<a id="-2134364053"></a>**The Azure File Sync agent version installed on the server isn't supported.**
| Error | Code | |-|-|
This error occurs when there is a problem with the internal database used by Azu
| **Error string** | ECS_E_AGENT_VERSION_BLOCKED | | **Remediation required** | Yes |
-This error occurs if the Azure File Sync agent version installed on the server is not supported. To resolve this issue, [upgrade](file-sync-release-notes.md#azure-file-sync-agent-update-policy) to a [supported agent version](file-sync-release-notes.md#supported-versions).
+This error occurs if the Azure File Sync agent version installed on the server isn't supported. To resolve this issue, [upgrade](file-sync-release-notes.md#azure-file-sync-agent-update-policy) to a [supported agent version](file-sync-release-notes.md#supported-versions).
<a id="-2134351810"></a>**You reached the Azure file share storage limit.**
Sync sessions fail with either of these errors when the Azure file share storage
![A screenshot of the Azure file share properties.](media/storage-sync-files-troubleshoot/file-share-limit-reached-1.png)
-If the share is full and a quota is not set, one possible way of fixing this issue is to make each subfolder of the current server endpoint into its own server endpoint in their own separate sync groups. This way each subfolder will sync to individual Azure file shares.
+If the share is full and a quota isn't set, one possible way of fixing this issue is to make each subfolder of the current server endpoint into its own server endpoint in its own separate sync group. This way, each subfolder will sync to an individual Azure file share.
<a id="-2134351824"></a>**The Azure file share cannot be found.**
If the share is full and a quota is not set, one possible way of fixing this iss
| **Error string** | ECS_E_AZURE_FILE_SHARE_NOT_FOUND | | **Remediation required** | Yes |
-This error occurs when the Azure file share is not accessible. To troubleshoot:
+This error occurs when the Azure file share isn't accessible. To troubleshoot:
1. [Verify the storage account exists.](#troubleshoot-storage-account) 2. [Ensure the Azure file share exists.](#troubleshoot-azure-file-share)
If this error persists for longer than a few hours, create a support request and
| **Error string** | CERT_E_UNTRUSTEDROOT | | **Remediation required** | Yes |
-This error can happen if your organization is using a TLS terminating proxy or if a malicious entity is intercepting the traffic between your server and the Azure File Sync service. If you are certain that this is expected (because your organization is using a TLS terminating proxy), you skip certificate verification with a registry override.
+This error can happen if your organization is using a TLS terminating proxy or if a malicious entity is intercepting the traffic between your server and the Azure File Sync service. If you're certain that this is expected (because your organization is using a TLS terminating proxy), you can skip certificate verification with a registry override.
1. Create the SkipVerifyingPinnedRootCertificate registry value.
By setting this registry value, the Azure File Sync agent will accept any locall
[!INCLUDE [storage-sync-files-bad-connection](../../../includes/storage-sync-files-bad-connection.md)] > [!Note]
-> Once network connectivity to the Azure File Sync service is restored, sync may not resume immediately. By default, Azure File Sync will initiate a sync session every 30 minutes if no changes are detected within the server endpoint location. To force a sync session, restart the Storage Sync Agent (FileSyncSvc) service or make a change to a file or directory within the server endpoint location.
+> Once network connectivity to the Azure File Sync service is restored, sync might not resume immediately. By default, Azure File Sync will initiate a sync session every 30 minutes if no changes are detected within the server endpoint location. To force a sync session, restart the Storage Sync Agent (FileSyncSvc) service, or make a change to a file or directory within the server endpoint location.
<a id="-2147012721"></a>**Sync failed because the server was unable to decode the response from the Azure File Sync service**
Reset-AzStorageSyncServerCertificate -ResourceGroupName <string> -StorageSyncSer
| **Error string** | ECS_E_AUTH_SRV_CERT_NOT_FOUND | | **Remediation required** | Yes |
-This error occurs because the certificate used for authentication is not found.
+This error occurs because the certificate used for authentication isn't found.
To resolve this issue, run the following PowerShell command on the server:
This error occurs because the server endpoint deletion failed and the endpoint i
Sync sessions fail with one of these errors because either the volume has insufficient disk space or disk quota limit is reached. This error commonly occurs because files outside the server endpoint are using up space on the volume. Free up space on the volume by adding additional server endpoints, moving files to a different volume, or increasing the size of the volume the server endpoint is on. If a disk quota is configured on the volume using [File Server Resource Manager](/windows-server/storage/fsrm/fsrm-overview) or [NTFS quota](/windows-server/administration/windows-commands/fsutil-quota), increase the quota limit.
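As a rough sketch, an NTFS quota limit can be raised with `fsutil` from an elevated session; the volume, threshold, limit, and account below are placeholders you'd replace with your own values:
```powershell
# Raise the NTFS quota for a specific user on volume D: (threshold and limit are in bytes).
fsutil quota modify D: 900000000000 1000000000000 CONTOSO\syncuser
```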
-<a id="-2134364145"></a><a id="replica-not-ready"></a>**The service is not yet ready to sync with this server endpoint.**
+<a id="-2134364145"></a><a id="replica-not-ready"></a>**The service isn't yet ready to sync with this server endpoint.**
| Error | Code | |-|-|
Sync sessions fail with one of these errors because either the volume has insuff
| **Error string** | ECS_E_REPLICA_NOT_READY | | **Remediation required** | No |
-This error occurs because the cloud endpoint was created with content already existing on the Azure file share. Azure File Sync must scan the Azure file share for all content before allowing the server endpoint to proceed with its initial synchronization.
+This error occurs because the cloud endpoint was created with content already existing on the Azure file share. Azure File Sync must scan the Azure file share for all content before allowing the server endpoint to proceed with its initial synchronization. Once change detection completes on the Azure file share, sync will commence. Change detection can take longer than 24 hours to complete, and is proportional to the number of files and directories on your Azure file share. If cloud tiering is configured, files will be tiered after sync completes.
<a id="-2134375877"></a><a id="-2134375908"></a><a id="-2134375853"></a>**Sync failed due to problems with many individual files.**
Sync sessions fail with one of these errors when there are many files that are f
| **Error string** | ECS_E_SYNC_INVALID_PATH | | **Remediation required** | Yes |
-Ensure the path exists, is on a local NTFS volume, and is not a reparse point or existing server endpoint.
+Ensure the path exists, is on a local NTFS volume, and isn't a reparse point or existing server endpoint.
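If you're unsure whether a path is a reparse point, one quick check (the path below is a placeholder) is:
```powershell
# Prints reparse point data if D:\Data is a reparse point; reports an error if it isn't.
fsutil reparsepoint query D:\Data
```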
-<a id="-2134375817"></a>**Sync failed because the filter driver version is not compatible with the agent version**
+<a id="-2134375817"></a>**Sync failed because the filter driver version isn't compatible with the agent version**
| Error | Code | |-|-|
Ensure the path exists, is on a local NTFS volume, and is not a reparse point or
| **Error string** | ECS_E_INCOMPATIBLE_FILTER_VERSION | | **Remediation required** | Yes |
-This error occurs because the Cloud Tiering filter driver (StorageSync.sys) version loaded is not compatible with the Storage Sync Agent (FileSyncSvc) service. If the Azure File Sync agent was upgraded, restart the server to complete the installation. If the error continues to occur, uninstall the agent, restart the server and reinstall the Azure File Sync agent.
+This error occurs because the Cloud Tiering filter driver (StorageSync.sys) version loaded isn't compatible with the Storage Sync Agent (FileSyncSvc) service. If the Azure File Sync agent was upgraded, restart the server to complete the installation. If the error continues to occur, uninstall the agent, restart the server and reinstall the Azure File Sync agent.
<a id="-2134376373"></a>**The service is currently unavailable.**
This error occurs because the Cloud Tiering filter driver (StorageSync.sys) vers
This error occurs because the Azure File Sync service is unavailable. This error will auto-resolve when the Azure File Sync service is available again. > [!Note]
-> Once network connectivity to the Azure File Sync service is restored, sync may not resume immediately. By default, Azure File Sync will initiate a sync session every 30 minutes if no changes are detected within the server endpoint location. To force a sync session, restart the Storage Sync Agent (FileSyncSvc) service or make a change to a file or directory within the server endpoint location.
+> Once network connectivity to the Azure File Sync service is restored, sync might not resume immediately. By default, Azure File Sync will initiate a sync session every 30 minutes if no changes are detected within the server endpoint location. To force a sync session, restart the Storage Sync Agent (FileSyncSvc) service or make a change to a file or directory within the server endpoint location.
<a id="-2146233088"></a>**Sync failed due to an exception.**
This error occurs because sync failed due to an exception. If the error persists
| **Error string** | ECS_E_STORAGE_ACCOUNT_FAILED_OVER | | **Remediation required** | Yes |
-This error occurs because the storage account has failed over to another region. Azure File Sync does not support the storage account failover feature. Storage accounts containing Azure file shares being used as cloud endpoints in Azure File Sync should not be failed over. Doing so will cause sync to stop working and may also cause unexpected data loss in the case of newly tiered files. To resolve this issue, move the storage account to the primary region.
+This error occurs because the storage account has failed over to another region. Azure File Sync doesn't support the storage account failover feature. Storage accounts containing Azure file shares being used as cloud endpoints in Azure File Sync shouldn't be failed over. Doing so will cause sync to stop working and might also cause unexpected data loss in the case of newly tiered files. To resolve this issue, move the storage account to the primary region.
<a id="-2134375922"></a>**Sync failed due to a transient problem with the sync database.**
Verify you have the latest Azure File Sync agent version installed and give the
| **Error string** | ECS_E_MGMT_STORAGEACLSBYPASSNOTSET | | **Remediation required** | Yes |
-This error occurs if the firewall and virtual network settings are enabled on the storage account and the "Allow trusted Microsoft services to access this storage account" exception is not checked. To resolve this issue, follow the steps documented in the [Configure firewall and virtual network settings](file-sync-deployment-guide.md?tabs=azure-portal#optional-configure-firewall-and-virtual-network-settings) section in the deployment guide.
+This error occurs if the firewall and virtual network settings are enabled on the storage account and the "Allow trusted Microsoft services to access this storage account" exception isn't checked. To resolve this issue, follow the steps documented in the [Configure firewall and virtual network settings](file-sync-deployment-guide.md?tabs=azure-portal#optional-configure-firewall-and-virtual-network-settings) section in the deployment guide.
<a id="-2147024891"></a>**Sync failed with access denied due to security settings on the storage account or NTFS permissions on the server.**
This error occurs if the firewall and virtual network settings are enabled on th
| **Error string** | ERROR_ACCESS_DENIED | | **Remediation required** | Yes |
-This error can occur if Azure File Sync cannot access the storage account due to security settings or if the NT AUTHORITY\SYSTEM account does not have permissions to the System Volume Information folder on the volume where the server endpoint is located. Note, if individual files are failing to sync with ERROR_ACCESS_DENIED, perform the steps documented in the [Troubleshooting per file/directory sync errors](?tabs=portal1%252cazure-portal#troubleshooting-per-filedirectory-sync-errors) section.
+This error can occur if Azure File Sync can't access the storage account due to security settings, or if the NT AUTHORITY\SYSTEM account doesn't have permissions to the System Volume Information folder on the volume where the server endpoint is located. If individual files are failing to sync with ERROR_ACCESS_DENIED, perform the steps documented in the [Troubleshooting per file/directory sync errors](?tabs=portal1%252cazure-portal#troubleshooting-per-filedirectory-sync-errors) section.
1. Verify the **SMB security settings** on the storage account are allowing **SMB 3.1.1** protocol version, **NTLM v2** authentication and **AES-128-GCM** encryption. To check the SMB security settings on the storage account, see [SMB security settings](../files/files-smb-protocol.md#smb-security-settings). 2. [Verify the firewall and virtual network settings on the storage account are configured properly (if enabled)](file-sync-deployment-guide.md?tabs=azure-portal#optional-configure-firewall-and-virtual-network-settings)
This error can occur if Azure File Sync cannot access the storage account due to
| **Error string** | ECS_E_SYNC_REPLICA_ROOT_CHANGED | | **Remediation required** | Yes |
-This error occurs because Azure File Sync does not support deleting and recreating an Azure file share in the same sync group.
+This error occurs because Azure File Sync doesn't support deleting and recreating an Azure file share in the same sync group.
To resolve this issue, delete and recreate the sync group by performing the following steps:
To resolve this issue, delete and recreate the sync group by performing the foll
| **Error string** | ECS_E_SYNC_REPLICA_BACK_IN_TIME | | **Remediation required** | No |
-No action is required. This error occurs because sync detected the replica has been restored to an older state. Sync will now enter a reconciliation mode, where it recreates the sync relationship by merging the contents of the Azure file share and the data on the server endpoint. When reconciliation mode is triggered, the process can be very time consuming depending upon the namespace size. Regular synchronization does not happen until the reconciliation finishes, and files that are different (last modified time or size) between the Azure file share and server endpoint will result in file conflicts.
+No action is required. This error occurs because sync detected that the replica has been restored to an older state. Sync will now enter a reconciliation mode, where it recreates the sync relationship by merging the contents of the Azure file share and the data on the server endpoint. When reconciliation mode is triggered, the process can be very time-consuming, depending on the namespace size. Regular synchronization doesn't happen until the reconciliation finishes, and files that are different (last modified time or size) between the Azure file share and server endpoint will result in file conflicts.
<a id="-2145844941"></a>**Sync failed because the HTTP request was redirected**
No action is required. This error occurs because sync detected the replica has b
| **Error string** | HTTP_E_STATUS_REDIRECT_KEEP_VERB | | **Remediation required** | Yes |
-This error occurs because Azure File Sync does not support HTTP redirection (3xx status code). To resolve this issue, disable HTTP redirect on your proxy server or network device.
+This error occurs because Azure File Sync doesn't support HTTP redirection (3xx status code). To resolve this issue, disable HTTP redirect on your proxy server or network device.
<a id="-2134364027"></a>**A timeout occurred during offline data transfer, but it is still in progress.**
storage Storage Files Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-faq.md
* <a id="afs-conflict-resolution"></a> **If the same file is changed on two servers at approximately the same time, what happens?**
- Azure File Sync uses a simple conflict-resolution strategy: we keep both changes to files that are changed in two endpoints at the same time. The most recently written change keeps the original file name. The older file (determined by LastWriteTime) has the endpoint name and the conflict number appended to the filename. For server endpoints, the endpoint name is the name of the server. For cloud endpoints, the endpoint name is **Cloud**. The name follows this taxonomy:
+ Azure File Sync uses a simple conflict-resolution strategy: we keep both changes to files that are changed in two endpoints at the same time. The most recently written change keeps the original file name. The older file (determined by LastWriteTime) has the endpoint name and the conflict number appended to the file name. For server endpoints, the endpoint name is the name of the server. For cloud endpoints, the endpoint name is **Cloud**. The name follows this taxonomy:
\<FileNameWithoutExtension\>-\<endpointName\>\[-#\].\<ext\>
* <a id="afs-tiered-files-tiering-disabled"></a> **I have cloud tiering disabled, why are there tiered files in the server endpoint location?**
- There are two reasons why tiered files may exist in the server endpoint location:
+ There are two reasons why tiered files might exist in the server endpoint location:
- - When adding a new server endpoint to an existing sync group, if you choose either the recall namespace first option or recall namespace only option for initial download mode, files will show up as tiered until they're downloaded locally. To avoid this, select the avoid tiered files option for initial download mode. To manually recall files, use the [Invoke-StorageSyncFileRecall](../file-sync/file-sync-how-to-manage-tiered-files.md#how-to-recall-a-tiered-file-to-disk) cmdlet.
+ - When adding a new server endpoint to an existing sync group, if you choose either the recall namespace first option or recall namespace only option for initial download mode, files will show up as tiered until they're downloaded locally. To avoid this, select the **avoid tiered files** option for initial download mode. To manually recall files, use the [`Invoke-StorageSyncFileRecall`](../file-sync/file-sync-how-to-manage-tiered-files.md#how-to-recall-a-tiered-file-to-disk) cmdlet (a minimal recall sketch follows this list).
- If cloud tiering was enabled on the server endpoint and then disabled, files will remain tiered until they're accessed.
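  A minimal recall sketch (the server endpoint path is a placeholder, and the module path assumes the typical agent installation folder):
  ```powershell
  # Load the Azure File Sync server cmdlets from the agent installation folder.
  Import-Module "C:\Program Files\Azure\StorageSyncAgent\StorageSync.Management.ServerCmdlets.dll"
  # Recall all tiered files under the server endpoint path back to local disk.
  Invoke-StorageSyncFileRecall -Path "D:\ServerEndpoint"
  ```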
* <a id="afs-tiered-files-out-of-endpoint"></a> **Why do tiered files exist outside of the server endpoint namespace?**
- Prior to Azure File Sync agent version 3, Azure File Sync blocked the move of tiered files outside the server endpoint but on the same volume as the server endpoint. Copy operations, moves of non-tiered files, and moves of tiered to other volumes were unaffected. The reason for this behavior was the implicit assumption that File Explorer and other Windows APIs have that move operations on the same volume are (nearly) instantaneous rename operations. This means moves will make File Explorer or other move methods (such as command line or PowerShell) appear unresponsive while Azure File Sync recalls the data from the cloud. Starting with [Azure File Sync agent version 3.0.12.0](../file-sync/file-sync-release-notes.md#supported-versions), Azure File Sync will allow you to move a tiered file outside of the server endpoint. We avoid the negative effects previously mentioned by allowing the tiered file to exist as a tiered file outside of the server endpoint and then recalling the file in the background. This means that moves on the same volume are instantaneous, and we do all the work to recall the file to disk after the move is complete.
+ Prior to Azure File Sync agent version 3, Azure File Sync blocked the move of tiered files outside the server endpoint but on the same volume as the server endpoint. Copy operations, moves of non-tiered files, and moves of tiered files to other volumes were unaffected. The reason for this behavior was the implicit assumption that File Explorer and other Windows APIs have that move operations on the same volume are (nearly) instantaneous rename operations. This means moves will make File Explorer or other move methods (such as command line or PowerShell) appear unresponsive while Azure File Sync recalls the data from the cloud. Starting with [Azure File Sync agent version 3.0.12.0](../file-sync/file-sync-release-notes.md#supported-versions), Azure File Sync will allow you to move a tiered file outside of the server endpoint. We avoid the negative effects previously mentioned by allowing the tiered file to exist as a tiered file outside of the server endpoint and then recalling the file in the background. This means that moves on the same volume are instantaneous, and we do all the work to recall the file to disk after the move is complete.
* <a id="afs-do-not-delete-server-endpoint"></a> **I'm having an issue with Azure File Sync on my server (sync, cloud tiering, etc.). Should I remove and recreate my server endpoint?**
* <a id="afs-ntfs-acls"></a> **Does Azure File Sync preserve directory/file level NTFS ACLs along with data stored in Azure Files?**
- As of February 24, 2020, new and existing ACLs tiered by Azure file sync will be persisted in NTFS format, and ACL modifications made directly to the Azure file share will sync to all servers in the sync group. Any changes on ACLs made to Azure Files will sync down via Azure file sync. When copying data to Azure Files, make sure you use a copy tool that supports the necessary "fidelity" to copy attributes, timestamps and ACLs into an Azure file share - either via SMB or REST. When using Azure copy tools, such as AzCopy, it's important to use the latest version. Check the [file copy tools table](storage-files-migration-overview.md#file-copy-tools) to get an overview of Azure copy tools to ensure you can copy all of the important metadata of a file.
+ As of February 24, 2020, new and existing ACLs tiered by Azure File Sync will be persisted in NTFS format, and ACL modifications made directly to the Azure file share will sync to all servers in the sync group. Any changes on ACLs made to Azure file shares will sync down via Azure File Sync. When copying data to Azure Files, make sure you use a copy tool that supports the necessary "fidelity" to copy attributes, timestamps, and ACLs into an Azure file share - either via SMB or REST. When using Azure copy tools such as AzCopy, it's important to use the latest version. Check the [file copy tools table](storage-files-migration-overview.md#file-copy-tools) to get an overview of Azure copy tools to ensure you can copy all of the important metadata of a file.
- If you've enabled Azure Backup on your file sync managed file shares, file ACLs can continue to be restored as part of the backup restore workflow. This works either for the entire share or individual files/directories.
+ If you've enabled Azure Backup on your Azure File Sync managed file shares, file ACLs can continue to be restored as part of the backup restore workflow. This works either for the entire share or individual files/directories.
- If you're using snapshots as part of the self-managed backup solution for file shares managed by file sync, your ACLs may not be restored properly to NTFS ACLs if the snapshots were taken before February 24, 2020. If this occurs, consider contacting Azure Support.
+ If you're using snapshots as part of the self-managed backup solution for file shares managed by Azure File Sync, your ACLs might not be restored properly to NTFS ACLs if the snapshots were taken before February 24, 2020. If this occurs, consider contacting Azure Support.
* <a id="afs-lastwritetime"></a> **Does Azure File Sync sync the LastWriteTime for directories?**
**How can I audit file access and changes in Azure Files?** There are two options that provide auditing functionality for Azure Files:
- - If users are accessing the Azure file share directly, [Azure Storage logs](../blobs/monitor-blob-storage.md?tabs=azure-powershell#analyzing-logs) can be used to track file changes and user access. These logs can be used for troubleshooting purposes and the requests are logged on a best-effort basis.
+ - If users are accessing the Azure file share directly, you can use [Azure Storage logs](../blobs/monitor-blob-storage.md?tabs=azure-powershell#analyzing-logs) to track file changes and user access for troubleshooting purposes. Requests are logged on a best-effort basis.
- If users are accessing the Azure file share via a Windows Server that has the Azure File Sync agent installed, use an [audit policy](/windows/security/threat-protection/auditing/apply-a-basic-audit-policy-on-a-file-or-folder) or third-party product to track file changes and user access on the Windows Server. * <a id="access-based-enumeration"></a>
* <a id="ad-file-mount-cname"></a> **Can I use the canonical name (CNAME) to mount an Azure file share while using identity-based authentication (AD DS or Azure AD DS)?**
- No, this scenario isn't supported. As an alternative to CNAME, you can use DFS Namespaces with SMB Azure file shares. To learn more, see [How to use DFS Namespaces with Azure Files](files-manage-namespaces.md).
+ No, this scenario isn't currently supported in single-forest AD environments. As an alternative to CNAME, you can use DFS Namespaces with SMB Azure file shares. To learn more, see [How to use DFS Namespaces with Azure Files](files-manage-namespaces.md).
* <a id="ad-vm-subscription"></a> **Can I access Azure file shares with Azure AD credentials from a VM under a different subscription?**
* <a id="ad-support-subscription"></a> **Can I enable either Azure AD DS or on-premises AD DS authentication for Azure file shares using an Azure AD tenant that's different from the Azure file share's primary tenant?**
- No. Azure Files only supports Azure AD DS or on-premises AD DS integration with an Azure AD tenant that resides in the same subscription as the file share. A subscription can only be associated with one Azure AD tenant. When using on-premises AD DS for authentication, [the AD DS credential must be synced to the Azure AD](../../active-directory/hybrid/how-to-connect-install-roadmap.md) that the storage account is associated with.
+ No. Azure Files only supports Azure AD DS or on-premises AD DS integration with an Azure AD tenant that resides in the same subscription as the file share. A subscription can only be associated with one Azure AD tenant. When using on-premises AD DS for authentication, [the AD DS credential should be synced to the Azure AD](../../active-directory/hybrid/how-to-connect-install-roadmap.md) that the storage account is associated with.
* <a id="ad-multiple-forest"></a> **Does on-premises AD DS authentication for Azure file shares support integration with an AD DS environment using multiple forests?**
* <a id="ad-aad-smb-files"></a> **Is there any difference in creating a computer account or service logon account to represent my storage account in AD?**
- Creating either a [computer account](/windows/security/identity-protection/access-control/active-directory-accounts#manage-default-local-accounts-in-active-directory) (default) or a [service logon account](/windows/win32/ad/about-service-logon-accounts) has no difference on how the authentication would work with Azure Files. You can make your own choice on how to represent a storage account as an identity in your AD environment. The default DomainAccountType set in `Join-AzStorageAccountForAuth` cmdlet is computer account. However, the password expiration age configured in your AD environment can be different for computer or service logon account and you need to take that into consideration for [Update the password of your storage account identity in AD](./storage-files-identity-ad-ds-update-password.md).
+ Creating either a [computer account](/windows/security/identity-protection/access-control/active-directory-accounts#manage-default-local-accounts-in-active-directory) (default) or a [service logon account](/windows/win32/ad/about-service-logon-accounts) has no difference on how authentication works with Azure Files. You can make your own choice on how to represent a storage account as an identity in your AD environment. The default DomainAccountType set in the `Join-AzStorageAccountForAuth` cmdlet is computer account. However, the password expiration age configured in your AD environment can be different for computer accounts and service logon accounts, and you need to take that into consideration when you [update the password of your storage account identity in AD](./storage-files-identity-ad-ds-update-password.md).
* <a id="ad-support-rest-apis"></a> **How to remove cached credentials with storage account key and delete existing SMB connections before initializing new connection with Azure AD or AD credentials?**
- You can follow the two step process below to remove the saved credential associated with the storage account key and remove the SMB connection:
+ Follow the two-step process below to remove the saved credential associated with the storage account key and remove the SMB connection:
- 1. Run the cmdlet below in Windows Cmd.exe to remove the credential. If you cannot find one, it means that you have not persisted the credential and can skip this step.
+ 1. Run the following command from a Windows command prompt to remove the credential. If you can't find one, it means that you haven't persisted the credential and can skip this step.
cmdkey /delete:Domain:target=storage-account-name.file.core.windows.net
- 2. Delete the existing connection to the file share. You can specify the mount path as either the mounted drive letter or the storage-account-name.file.core.windows.net path.
+ 2. Delete the existing connection to the file share. You can specify the mount path as either the mounted drive letter or the `storage-account-name.file.core.windows.net` path.
net use <drive-letter/share-path> /delete
## Interoperability with other services * <a id="cluster-witness"></a> **Can I use my Azure file share as a *File Share Witness* for my Windows Server Failover Cluster?**
- Currently, this configuration is not supported for an Azure file share. For more information about how to set this up for Azure Blob storage, see [Deploy a Cloud Witness for a Failover Cluster](/windows-server/failover-clustering/deploy-cloud-witness).
+ This configuration isn't currently supported for Azure Files. To learn how to set this up using Azure Blob storage, see [Deploy a Cloud Witness for a Failover Cluster](/windows-server/failover-clustering/deploy-cloud-witness).
## See also * [Troubleshoot Azure Files](files-troubleshoot.md)
storage Storage Files Identity Ad Ds Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-identity-ad-ds-enable.md
New-AzStorageAccountKey -ResourceGroupName $ResourceGroupName -Name $StorageAcco
Get-AzStorageAccountKey -ResourceGroupName $ResourceGroupName -Name $StorageAccountName -ListKerbKey | where-object{$_.Keyname -contains "kerb1"} ```
-The cmdlets above should return the key value. Once you have the kerb1 key, create either a service account or computer account in AD under your OU, and use the key as the password for the AD identity.
+The cmdlets should return the key value. Once you have the kerb1 key, create either a [computer account](/powershell/module/activedirectory/new-adcomputer) or [service account](/powershell/module/activedirectory/new-adserviceaccount) in AD under your OU, and use the key as the password for the AD identity.
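As a hedged sketch (the account name and OU path are placeholders), the computer account could be created with the Active Directory PowerShell module like this:
```powershell
# Create a computer account under your OU to represent the storage account in AD.
# Its password is set to the kerb1 key value in step 2 below.
New-ADComputer -Name "mystorageaccount" -Path "OU=FileShares,DC=contoso,DC=com" -Enabled $true
```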
1. Set the SPN to **cifs/your-storage-account-name-here.file.core.windows.net** either in the AD GUI or by running the `Setspn` command from the Windows command line as administrator (remember to replace the example text with your storage account name and `<ADAccountName>` with your AD account name):
The cmdlets above should return the key value. Once you have the kerb1 key, crea
Setspn -S cifs/your-storage-account-name-here.file.core.windows.net <ADAccountName> ```
-2. Use PowerShell to set the AD account password to the value of the kerb1 key (you must have AD PowerShell cmdlets installed and execute the cmdlet in PowerShell 5.1 with elevated privileges):
+2. Set the AD account password to the value of the kerb1 key (you must have AD PowerShell cmdlets installed and execute the cmdlet in PowerShell 5.1 with elevated privileges):
```powershell Set-ADAccountPassword -Identity servername$ -Reset -NewPassword (ConvertTo-SecureString -AsPlainText "kerb1_key_value_here" -Force)
storage Storage Files Identity Ad Ds Mount File Share https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-identity-ad-ds-mount-file-share.md
Previously updated : 04/03/2023 Last updated : 04/07/2023 recommendations: false
If you run into issues, refer to [Unable to mount Azure file shares with AD cred
## Mount the file share from a non-domain-joined VM
-Non-domain-joined VMs can access Azure file shares if they have line-of-sight to the domain controllers. The user accessing the file share must have an identity and credentials in the AD domain.
+Non-domain-joined VMs or VMs that are joined to a different AD domain than the storage account can access Azure file shares if they have line-of-sight to the domain controllers and provide explicit credentials. The user accessing the file share must have an identity and credentials in the AD domain that the storage account is joined to.
To mount a file share from a non-domain-joined VM, use the notation **username@domainFQDN**, where **domainFQDN** is the fully qualified domain name. This will allow the client to contact the domain controller to request and receive Kerberos tickets. You can get the value of **domainFQDN** by running `(Get-ADDomain).Dnsroot` in Active Directory PowerShell.
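As an illustration (the drive letter, storage account, share, and user principal are placeholders), the mount command might look like:
```powershell
# Mount the share with explicit AD credentials; you're prompted for the user's AD password.
net use Z: \\mystorageaccount.file.core.windows.net\myshare /user:user1@contoso.com
```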
storage Storage Snapshots Files https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-snapshots-files.md
Before you deploy the share snapshot scheduler, carefully consider your share sn
Share snapshots provide only file-level protection. Share snapshots don't prevent fat-finger deletions on a file share or storage account. To help protect a storage account from accidental deletions, you can either [enable soft delete](storage-files-prevent-file-share-deletion.md), or lock the storage account and/or the resource group.
+## Delete multiple snapshots
+
+Use the following PowerShell script to delete multiple file share snapshots. Be sure to replace **storageaccount_name**, **resource-GROUP**, and **sharename** with your own values, and add the timestamps of the snapshots you want to delete to the `$items` list.
+
+```powershell
+$storageAccount = "storageaccount_name"
+$RG = "resource-GROUP"
+$sharename = "sharename"
+$sa = Get-AzStorageAccount -Name $storageAccount -ResourceGroupName $RG
+# Snapshot timestamps (UTC) of the snapshots you want to delete.
+$items = "","",""
+ForEach ($item in $items)
+{
+    $snapshotTime = "$item"
+    $snap = Get-AzStorageShare -Name $sharename -SnapshotTime "$snapshotTime" -Context $sa.Context
+    # Break any lease held on the snapshot (for example, by a backup solution) before deleting it.
+    $lease = [Azure.Storage.Files.Shares.Specialized.ShareLeaseClient]::new($snap.ShareClient)
+    $lease.Break()
+    # Delete the snapshot.
+    $snap | Remove-AzStorageShare -Force
+}
+```
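If you need to find the snapshot timestamps to put in `$items`, one possible approach is to list the share's snapshots through the management plane. This is a sketch and assumes your Az.Storage version supports the `-IncludeSnapshot` switch on `Get-AzRmStorageShare`:
```powershell
# List snapshots of the share and print their snapshot timestamps.
Get-AzRmStorageShare -ResourceGroupName $RG -StorageAccountName $storageAccount -IncludeSnapshot |
    Where-Object { $_.SnapshotTime -and $_.Name -eq $sharename } |
    Select-Object Name, SnapshotTime
```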
## Next steps - Working with share snapshots in: - [Azure file share backup](../../backup/azure-file-share-backup-overview.md) - [Azure PowerShell](/powershell/module/az.storage/new-azrmstorageshare) - [Azure CLI](/cli/azure/storage/share#az-storage-share-snapshot) - [Windows](storage-how-to-use-files-windows.md#accessing-share-snapshots-from-windows)
- - [Share snapshot FAQ](storage-files-faq.md#share-snapshots)
+ - [Share snapshot FAQ](storage-files-faq.md#share-snapshots)
stream-analytics Geospatial Scenarios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/geospatial-scenarios.md
SELECT count(*) as NumberOfRequests, RegionsRefDataInput.RegionName
FROM UserRequestStreamDataInput JOIN RegionsRefDataInput ON st_within(UserRequestStreamDataInput.FromLocation, RegionsRefDataInput.Geofence) = 1
-GROUP BY RegionsRefDataInput.RegionName, hoppingwindow(minute, 1, 15)
+GROUP BY RegionsRefDataInput.RegionName, hoppingwindow(minute, 15, 1)
``` This query outputs a count of requests every minute for the last 15 minutes by each region within the city. This information can be displayed easily by Power BI dashboard, or can be broadcasted to all drivers as SMS text messages through integration with services like Azure functions.
The image below illustrates the output of the query to Power BI dashboard.
## Next steps * [Introduction to Stream Analytics geospatial functions](stream-analytics-geospatial-functions.md)
-* [GeoSpatial Functions (Azure Stream Analytics)](/stream-analytics-query/geospatial-functions)
+* [GeoSpatial Functions (Azure Stream Analytics)](/stream-analytics-query/geospatial-functions)
stream-analytics Private Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/private-endpoints.md
Previously updated : 05/20/2021 Last updated : 04/13/2023 # Create and delete managed private endpoints in an Azure Stream Analytics cluster
Once you approve the connection, any job running in your Stream Analytics cluste
* Azure IoT Hubs * Azure Service Bus * Azure Synapse Analytics - Dedicated SQL pool
+* Azure Data Explorer (Kusto)
## Create managed private endpoint in Stream Analytics cluster
synapse-analytics Migrate To Synapse Analytics Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/migration-guides/migrate-to-synapse-analytics-guide.md
Previously updated : 05/24/2022 Last updated : 04/12/2023 # Migrate a data warehouse to a dedicated SQL pool in Azure Synapse Analytics
-The following sections provide an overview of what's involved with migrating an existing data warehouse solution to an Azure Synapse Analytics dedicated SQL pool.
+The following sections provide an overview of what's involved with migrating an existing data warehouse solution to an Azure Synapse Analytics dedicated SQL pool (formerly SQL data warehouse).
## Overview
Performing a successful migration requires you to migrate your table schemas, co
## More resources
-The Customer Advisory Team has some great Azure Synapse Analytics (formerly Azure SQL Data Warehouse) guidance published as blog posts. For more information on migration, see [Migrating data to Azure SQL Data Warehouse in practice](/archive/blogs/sqlcat/migrating-data-to-azure-sql-data-warehouse-in-practice).
- For more information specifically about migrations from Netezza or Teradata to Azure Synapse Analytics, start at the first step of a seven-article sequence on migrations: - [Netezza to Azure Synapse Analytics migrations](netezz)
For more assistance with completing this migration scenario, see the following r
| Title/link | Description | | | | | [Data Workload Assessment Model and Tool](https://www.microsoft.com/download/details.aspx?id=103130) | This tool provides suggested "best fit" target platforms, cloud readiness, and application or database remediation level for a given workload. It offers simple, one-click calculation and report generation that helps to accelerate large estate assessments by providing an automated and uniform target platform decision process. |
-| [Handling data encoding issues while loading data to Azure Synapse Analytics](https://azure.microsoft.com/blog/handling-data-encoding-issues-while-loading-data-to-sql-data-warehouse/) | This blog post provides insight on some of the data encoding issues you might encounter while using PolyBase to load data to SQL Data Warehouse. This article also provides some options that you can use to overcome such issues and load the data successfully. |
-| [Getting table sizes in Azure Synapse Analytics dedicated SQL pool](https://github.com/Microsoft/DataMigrationTeam/blob/master/Whitepapers/Getting%20table%20sizes%20in%20SQL%20DW.pdf) | One of the key tasks that an architect must perform is to get metrics about a new environment post-migration. Examples include collecting load times from on-premises to the cloud and collecting PolyBase load times. One of the most important tasks is to determine the storage size in SQL Data Warehouse compared to the customer's current platform. |
-
+| [Handling data encoding issues while loading data to Azure Synapse Analytics](https://azure.microsoft.com/blog/handling-data-encoding-issues-while-loading-data-to-sql-data-warehouse/) | This blog post provides insight on some of the data encoding issues you might encounter while using PolyBase to load data to dedicated SQL pools (formerly SQL data warehouse). This article also provides some options that you can use to overcome such issues and load the data successfully. |
+| [Getting table sizes in Azure Synapse Analytics dedicated SQL pool](https://github.com/Microsoft/DataMigrationTeam/blob/master/Whitepapers/Getting%20table%20sizes%20in%20SQL%20DW.pdf) | One of the key tasks that an architect must perform is to get metrics about a new environment post-migration. Examples include collecting load times from on-premises to the cloud and collecting PolyBase load times. One of the most important tasks is to determine the storage size in dedicated SQL pools (formerly SQL data warehouse) compared to the customer's current platform. |
The Data SQL Engineering team developed these resources. This team's core charter is to unblock and accelerate complex modernization for data platform migration projects to Microsoft's Azure data platform.
The Data SQL Engineering team developed these resources. This team's core charte
Watch how [Walgreens migrated its retail inventory system](https://www.youtube.com/watch?v=86dhd8N1lH4) with about 100 TB of data from Netezza to Azure Synapse Analytics in record time. > [!TIP]
-> For more information on Synapse migrations, see [Azure Synapse Analytics migration guides](index.yml).
+> For more information on Synapse migrations, see [Azure Synapse Analytics migration guides](index.yml).
synapse-analytics Business Intelligence https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/partner/business-intelligence.md
To create your data warehouse solution, you can choose from different kinds of i
| Partner | Description | Website/Product link | | - | -- | -- | | ![AtScale](./media/business-intelligence/atscale-logo.png) |**AtScale**<br>AtScale provides a single, secured, and governed workspace for distributed data. AtScale's Cloud OLAP, Autonomous Data Engineering&trade;, and Universal Semantic Layer&trade; powers business intelligence results for faster, more accurate business decisions. |[Product page](https://www.atscale.com/partners/microsoft/)<br> |
-| ![Birst](./media/business-intelligence/birst_logo.png) |**Birst**<br>Birst connects the entire organization through a network of interwoven virtualized BI instances on-top of a shared common analytical fabric|[Product page](https://www.birst.com/)<br> |
+| ![Birst](./media/business-intelligence/birst_logo.png) |**Birst**<br>Birst connects the entire organization through a network of interwoven virtualized BI instances on-top of a shared common analytical fabric|[Product page](https://www.infor.com/solutions/advanced-analytics/business-intelligence/birst)<br> |
| ![Count](./media/business-intelligence/count-logo.png) |**Count**<br> Count is the next generation SQL editor, giving you the fastest way to explore and share your data with your team. At Count's core is a data notebook built for SQL, allowing you to structure your code, iterate quickly and stay in flow. Visualize your results instantly or customize them to build beautifully detailed charts in just a few clicks. Instantly share anything from one-off queries to full interactive data stories built off any of your Azure Synapse data sources. |[Product page](https://count.co/)<br>| | ![Dremio](./media/business-intelligence/dremio-logo.png) |**Dremio**<br> Analysts and data scientists can discover, explore and curate data using Dremio's intuitive UI, while IT maintains governance and security. Dremio makes it easy to join ADLS with Blob Storage, Azure SQL Database, Azure Synapse SQL, HDInsight, and more. With Dremio, Power BI analysts can search for new datasets stored on ADLS, immediately access that data in Power BI with no preparation by IT, create visualizations, and iteratively refine reports in real-time. And analysts can create new reports that combine data between ADLS and other databases. |[Product page](https://www.dremio.com/azure/)<br>[Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/dremiocorporation.dremio_ce)<br> | | ![Dundas](./media/business-intelligence/dundas_software_logo.png) |**Dundas BI**<br>Dundas Data Visualization is a leading, global provider of Business Intelligence and Data Visualization software. Dundas dashboards, reporting, and visual data analytics provide seamless integration into business applications, enabling better decisions and faster insights.|[Product page](https://www.dundas.com/dundas-bi)<br> |
synapse-analytics Intellij Tool Synapse https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/intellij-tool-synapse.md
You can follow the instructions below to set up your local run and local debug f
- Environment variables and WinUtils.exe Location are only for windows users. - Environment variables: The system environment variable can be auto detected if you have set it before and no need to manually add.
- - [WinUtils.exe Location](http://public-repo-1.hortonworks.com/hdp-win-alpha/winutils.exe): You can specify the WinUtils location by selecting the folder icon on the right.
+ - [WinUtils.exe Location](https://github.com/steveloughran/winutils/releases/download/tag_2017-08-29-hadoop-2.8.1-native/hadoop-2.8.1.zip): You can specify the WinUtils location by selecting the folder icon on the right.
2. Then select the local play button.
You may want to see the script result by sending some code to the local console
## Next steps - [Azure Synapse Analytics](../overview-what-is.md)-- [Create a new Apache Spark pool for an Azure Synapse Analytics workspace](../../synapse-analytics/quickstart-create-apache-spark-pool-studio.md)
+- [Create a new Apache Spark pool for an Azure Synapse Analytics workspace](../../synapse-analytics/quickstart-create-apache-spark-pool-studio.md)
synapse-analytics Microsoft Spark Utilities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/microsoft-spark-utilities.md
After the run finished, you will see a snapshot link named '**View notebook run:
### Exit a notebook Exits a notebook with a value. You can run nesting function calls in a notebook interactively or in a pipeline. -- When you call an `exit()` function a notebook interactively, Azure Synapse will throw an exception, skip running subsequence cells, and keep Spark session alive.
+- When you call an `exit()` function from a notebook interactively, Azure Synapse will throw an exception, skip running subsequent cells, and keep the Spark session alive.
- When you orchestrate a notebook that calls an `exit()` function in a Synapse pipeline, Azure Synapse will return an exit value, complete the pipeline run, and stop the Spark session.
update-center Scheduled Patching https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/scheduled-patching.md
To schedule recurring updates on a single VM, follow these steps:
> Currently, VMs and maintenance configuration in the same subscription are supported. 1. In the **Basics** page, select **Subscription**, **Resource Group** and all options in **Instance details**.
+ - Select the **Maintenance scope** as *Guest (Azure VM, Arc-enabled VMs/servers)*.
- Select **Add a schedule** and in **Add/Modify schedule**, specify the schedule details such as: - Start on
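For readers who prefer scripting the schedule instead of the portal, here's a rough Azure CLI sketch. It assumes the `az maintenance configuration create` command is available (it may require the maintenance CLI extension); the resource names, region, and schedule values are placeholders, and additional patch or reboot settings may be required for your scenario:

```bash
# Create a maintenance configuration with the Guest (InGuestPatch) scope and a monthly window.
az maintenance configuration create \
  --resource-group myMaintenanceRG \
  --resource-name myPatchSchedule \
  --location eastus \
  --maintenance-scope InGuestPatch \
  --maintenance-window-start-date-time "2023-05-01 03:00" \
  --maintenance-window-duration "03:55" \
  --maintenance-window-recur-every "Month Second Tuesday" \
  --maintenance-window-time-zone "UTC"
```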
virtual-desktop Configure Host Pool Personal Desktop Assignment Type https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/configure-host-pool-personal-desktop-assignment-type.md
Users must be assigned to a personal desktop to start their session. There are t
Automatic assignment is the default assignment type for new personal desktop host pools created in your Azure Virtual Desktop environment. Automatically assigning users doesn't require a specific session host.
-To automatically assign users, first assign them to the personal desktop host pool so that they can see the desktop in their feed. When an assigned user launches the desktop in their feed, their user session will be load-balanced to an available session host if they haven't already connected to the host pool.
+To automatically assign users, first assign them to the personal desktop host pool so that they can see the desktop in their feed. When an assigned user launches the desktop in their feed, their user session will be load-balanced to an available session host if they haven't already connected to the host pool. You can still [assign a user directly to a session host](#configure-direct-assignment) before they connect, even if the assignment type is set to automatic.
To configure a host pool to automatically assign users to VMs, run the following PowerShell cmdlet:
virtual-desktop Enable Gpu Acceleration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/enable-gpu-acceleration.md
Follow the instructions in this article to create a GPU optimized Azure virtual
## Select an appropriate GPU-optimized Azure virtual machine size
-Select one of Azure's [NV-series](../virtual-machines/nv-series.md), [NVv3-series](../virtual-machines/nvv3-series.md), [NVv4-series](../virtual-machines/nvv4-series.md) or [NCasT4_v3-series](../virtual-machines/nct4-v3-series.md) VM sizes to use as a session host. These are tailored for app and desktop virtualization and enable most apps and the Windows user interface to be GPU accelerated. The right choice for your host pool depends on a number of factors, including your particular app workloads, desired quality of user experience, and cost. In general, larger and more capable GPUs offer a better user experience at a given user density, while smaller and fractional-GPU sizes allow more fine-grained control over cost and quality. Consider NV series VM retirement when selecting VM, details on [NV retirement](../virtual-machines/nv-series-retirement.md)
+Select one of Azure's [NV-series](../virtual-machines/nv-series.md), [NVv3-series](../virtual-machines/nvv3-series.md), [NVv4-series](../virtual-machines/nvv4-series.md), [NVadsA10 v5-series](../virtual-machines/nva10v5-series.md), or [NCasT4_v3-series](../virtual-machines/nct4-v3-series.md) VM sizes to use as a session host. These are tailored for app and desktop virtualization and enable most apps and the Windows user interface to be GPU accelerated. The right choice for your host pool depends on a number of factors, including your particular app workloads, desired quality of user experience, and cost. In general, larger and more capable GPUs offer a better user experience at a given user density, while smaller and fractional-GPU sizes allow more fine-grained control over cost and quality. Note that NV-series VMs are planned to be retired. For more information, see [NV retirement](../virtual-machines/nv-series-retirement.md).
>[!NOTE] >Azure's NC, NCv2, NCv3, ND, and NDv2 series VMs are generally not appropriate for Azure Virtual Desktop session hosts. These VMs are tailored for specialized, high-performance compute or machine learning tools, such as those built with NVIDIA CUDA. They do not support GPU acceleration for most apps or the Windows user interface.
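As a quick way to sanity-check which GPU-optimized sizes are offered in your target region, here's a hedged Azure CLI sketch; the region and the name filter are placeholders you should adjust:

```bash
# List NV- and NC-family VM sizes available in a region, with core and memory counts.
az vm list-sizes --location eastus \
  --query "[?starts_with(name, 'Standard_NV') || starts_with(name, 'Standard_NC')].{Name:name, vCPUs:numberOfCores, MemoryMB:memoryInMb}" \
  --output table
```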
virtual-desktop Troubleshoot Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/troubleshoot-agent.md
To resolve this issue, check that you can reach the two endpoints referred to as
On your session host VM, go to **Event Viewer** > **Windows Logs** > **Application**. If you see an event with ID 3703 with **RD Gateway Url: is not accessible** in the description, the agent is unable to reach the gateway URLs. To successfully connect to your session host, you must allow network traffic to the URLs from the [Required URL List](safe-url-list.md). Also, make sure your firewall or proxy settings don't block these URLs. Unblocking these URLs is required to use Azure Virtual Desktop.
-To resolve this issue, verify that your firewall and/or DNS settings are not blocking these URLs:
-1. [Use Azure Firewall to protect Azure Virtual Desktop deployments.](../firewall/protect-azure-virtual-desktop.md).
-1. Configure your [Azure Firewall DNS settings](../firewall/dns-settings.md).
+To resolve this issue, verify access to the required URLs by running the [Required URL Check tool](required-url-check-tool.md). If you're using Azure Firewall, see [Use Azure Firewall to protect Azure Virtual Desktop deployments](../firewall/protect-azure-virtual-desktop.md) and [Azure Firewall DNS settings](../firewall/dns-settings.md) for more information on how to configure it for Azure Virtual Desktop.
## Error: 3019
virtual-desktop Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/whats-new.md
Windows 10 and 11 22H2 Enterprise and Enterprise multi-session images are now vi
### Uniform Resource Identifier Schemes in public preview
-Uniform Resource Identifier (URI) schemes with the Remote Desktop client for Azure Virtual Desktop is now in public preview. This new feature lets you subscribe to a workspace or connect to a particular desktop or Remote App using URI schemes. URI schemes also provide fast and efficient end-user connection to Azure Virtual Desktop resources. For more information, see [our blog post](https://techcommunity.microsoft.com/t5/azure-virtual-desktop-blog/announcing-the-public-preview-of-uniform-resource-identifier/ba-p/3763075).
+Uniform Resource Identifier (URI) schemes with the Remote Desktop client for Azure Virtual Desktop are now in public preview. This new feature lets you subscribe to a workspace or connect to a particular desktop or Remote App using URI schemes. URI schemes also provide fast and efficient end-user connection to Azure Virtual Desktop resources. For more information, see [our blog post](https://techcommunity.microsoft.com/t5/azure-virtual-desktop-blog/announcing-the-public-preview-of-uniform-resource-identifier/ba-p/3763075) and [URI schemes with the Remote Desktop client for Azure Virtual Desktop (preview)](uri-scheme.md).
### Azure Virtual Desktop Insights at Scale now generally available
-Azure Virtual Desktop Insights at Scale is now generally available. This feature gives you the ability to review performance and diagnostic information in multiple host pools at the same time in a single view. If you're an existing Azure Virtual Desktop Insights user, you get this feature without having to do any extra configuration or setup. For more information, see [our blog post](https://techcommunity.microsoft.com/t5/azure-virtual-desktop-blog/announcing-the-general-availability-of-azure-virtual-desktop/ba-p/3738624).
+Azure Virtual Desktop Insights at Scale is now generally available. This feature gives you the ability to review performance and diagnostic information in multiple host pools at the same time in a single view. If you're an existing Azure Virtual Desktop Insights user, you get this feature without having to do any extra configuration or setup. For more information, see [our blog post](https://techcommunity.microsoft.com/t5/azure-virtual-desktop-blog/announcing-the-general-availability-of-azure-virtual-desktop/ba-p/3738624) and [Use Azure Virtual Desktop Insights to monitor your deployment](insights.md).
## February 2023
virtual-machine-scale-sets Virtual Machine Scale Sets Health Extension https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/virtual-machine-scale-sets-health-extension.md
Title: Use Application Health extension with Azure Virtual Machine Scale Sets (preview)
+ Title: Use Application Health extension with Azure Virtual Machine Scale Sets
description: Learn how to use the Application Health extension to monitor the health of your applications deployed on Virtual Machine Scale Sets. Previously updated : 01/17/2023 Last updated : 04/12/2023 # Using Application Health extension with Virtual Machine Scale Sets
-> [!IMPORTANT]
-> **Rich Health States** is currently in public preview. **Binary Health States** is generally available.
-> This preview version is provided without a service-level agreement, and we don't recommend it for production workloads. Certain features might not be supported or might have constrained capabilities.
-> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
- Monitoring your application health is an important signal for managing and upgrading your deployment. Azure Virtual Machine Scale Sets provide support for [Rolling Upgrades](virtual-machine-scale-sets-upgrade-scale-set.md#how-to-bring-vms-up-to-date-with-the-latest-scale-set-model) including [Automatic OS-Image Upgrades](virtual-machine-scale-sets-automatic-upgrade.md) and [Automatic VM Guest Patching](../virtual-machines/automatic-vm-guest-patching.md), which rely on health monitoring of the individual instances to upgrade your deployment. You can also use Application Health Extension to monitor the application health of each instance in your scale set and perform instance repairs using [Automatic Instance Repairs](virtual-machine-scale-sets-automatic-instance-repairs.md). This article describes how you can use the two types of Application Health extension, **Binary Health States** or **Rich Health States**, to monitor the health of your applications deployed on Virtual Machine Scale Sets.
The extension reports health from within a VM and can be used in situations wher
## Binary versus Rich Health States
-> [!IMPORTANT]
-> **Rich Health States** is currently in public preview.
- Application Health Extensions has two options available: **Binary Health States** and **Rich Health States**. The following table highlights some key differences between the two options. See the end of this section for general recommendations. | Features | Binary Health States | Rich Health States |
Update-AzVmss -ResourceGroupName $vmScaleSetResourceGroup `
Update-AzVmssInstance -ResourceGroupName $vmScaleSetResourceGroup ` -VMScaleSetName $vmScaleSetName ` -InstanceId '*'+ ``` # [Azure CLI 2.0](#tab/azure-cli)
The extension.json file content.
```json { "protocol": "<protocol>",
- "port": "<port>",
+ "port": <port>,
"requestPath": "</requestPath>" } ```
Add-AzVmssExtension -VirtualMachineScaleSet $vmScaleSet `
Update-AzVmss -ResourceGroupName $vmScaleSetResourceGroup ` -Name $vmScaleSetName ` -VirtualMachineScaleSet $vmScaleSet-
+
# Upgrade instances to install the extension Update-AzVmssInstance -ResourceGroupName $vmScaleSetResourceGroup ` -VMScaleSetName $vmScaleSetName ` -InstanceId '*'+ ``` # [Azure CLI 2.0](#tab/azure-cli)
virtual-machines Compiling Scaling Applications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/compiling-scaling-applications.md
description: Learn how to scale HPC applications on Azure VMs.
Previously updated : 03/28/2023 Last updated : 04/11/2023
Optimal scale-up and scale-out performance of HPC applications on Azure requires performance tuning and optimization experiments for the specific workload. This section and the VM series-specific pages offer general guidance for scaling your applications. ## Application setup+ The [azurehpc repo](https://github.com/Azure/azurehpc) contains many examples of:+ - Setting up and running [applications](https://github.com/Azure/azurehpc/tree/master/apps) optimally. - Configuration of [file systems, and clusters](https://github.com/Azure/azurehpc/tree/master/examples). - [Tutorials](https://github.com/Azure/azurehpc/tree/master/tutorials) on how to get started easily with some common application workflows.
-## Optimally scaling MPI
+## Optimally scaling MPI
The following suggestions apply for optimal application scaling efficiency, performance, and consistency: -- For smaller scale jobs (< 256K connections) use:
- ```bash UCX_TLS=rc,sm ```
-- For larger scale jobs (> 256K connections) use:
- ```bash UCX_TLS=dc,sm ```
-- To calculate the number of connections for your MPI job, use:
- ```bash Max Connections = (processes per node) x (number of nodes per job) x (number of nodes per job) ```
-
+- For smaller scale jobs (< 256K connections) use: `UCX_TLS=rc,sm`
+
+- For larger scale jobs (> 256K connections) use: `UCX_TLS=dc,sm`
+
+- To calculate the number of connections for your MPI job, use: `Max Connections = (processes per node) x (number of nodes per job) x (number of nodes per job)`
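Putting these suggestions together, here's a minimal sketch for a hypothetical job; the rank counts and application name are placeholders:

```bash
# Hypothetical job: 120 MPI ranks per node across 16 nodes.
PPN=120
NODES=16
echo "Max connections: $(( PPN * NODES * NODES ))"   # 30720, well under 256K

# Below ~256K connections, prefer the rc,sm transports; switch to dc,sm above that.
export UCX_TLS=rc,sm
mpirun -np $(( PPN * NODES )) ./my_mpi_app
```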
+ ## Adaptive Routing+ Adaptive Routing (AR) allows Azure Virtual Machines (VMs) running EDR and HDR InfiniBand to automatically detect and avoid network congestion by dynamically selecting optimal network paths. As a result, AR offers improved latency and bandwidth on the InfiniBand network, which in turn drives higher performance and scaling efficiency. For more information, see [TechCommunity article](https://techcommunity.microsoft.com/t5/azure-compute/adaptive-routing-on-azure-hpc/ba-p/1205217). ## Process pinning
Adaptive Routing (AR) allows Azure Virtual Machines (VMs) running EDR and HDR In
- For pure MPI applications, experiment with between one to four MPI ranks per CCX for optimal performance on HB and HBv2 VM sizes. - Some applications with extreme sensitivity to memory bandwidth may benefit from using a reduced number of cores per CCX. For these applications, using three or two cores per CCX may reduce memory bandwidth contention and yield higher real-world performance or more consistent scalability. In particular, MPI 'Allreduce' may benefit from this approach. - For larger scale runs, it's recommended to use UD or hybrid RC+UD transports. Many MPI libraries/runtime libraries use these transports internally (such as UCX or MVAPICH2). Check your transport configurations for large-scale runs.
-
+ ## Compiling applications <br> <details>
Clang supports the `-march=znver1` flag to enable best code generation and tuni
### FLANG
-The FLANG compiler is a recent addition to the AOCC suite (added April 2018) and is currently in prerelease for developers to download and test. Based on Fortran 2008, AMD extends the GitHub version of FLANG (https://github.com/flang-compiler/flang). The FLANG compiler supports all Clang compiler options and other number of FLANG-specific compiler options.
+The FLANG compiler is a recent addition to the AOCC suite (added April 2018) and is currently in prerelease for developers to download and test. Based on Fortran 2008, AMD extends the GitHub version of [FLANG](https://github.com/flang-compiler/flang). The FLANG compiler supports all Clang compiler options and a number of FLANG-specific compiler options.
### DragonEgg
DragonEgg is a gcc plugin that replaces GCC's optimizers and code generators f
GFortran is the actual frontend for Fortran programs responsible for preprocessing, parsing, and semantic analysis generating the GCC GIMPLE intermediate representation (IR). DragonEgg is a GNU plugin, plugging into GFortran compilation flow. It implements the GNU plugin API. With the plugin architecture, DragonEgg becomes the compiler driver, driving the different phases of compilation. After following the download and installation instructions, Dragon Egg can be invoked using: ```bash
-$ gfortran [gFortran flags]
+gfortran [gFortran flags]
-fplugin=/path/AOCC-1.2-Compiler/AOCC-1.2- FortranPlugin/dragonegg.so [plugin optimization flags] -c xyz.f90 $ clang -O3 -lgfortran -o xyz xyz.o $./xyz ```+ ### PGI Compiler
-PGI Community Edition 17 is confirmed to work with AMD EPYC. A PGI-compiled version of STREAM does deliver full memory bandwidth of the platform. The newer Community Edition 18.10 (Nov 2018) should likewise work well. Use this CLI command to compile with the Intel Compiler:
+PGI Community Edition 17 is confirmed to work with AMD EPYC. A PGI-compiled version of STREAM does deliver full memory bandwidth of the platform. The newer Community Edition 18.10 (Nov 2018) should likewise work well. Use this CLI command to compile with the PGI compiler:
```bash pgcc $(OPTIMIZATIONS_PGI) $(STACK) -DSTREAM_ARRAY_SIZE=800000000 stream.c -o stream.pgi ``` ### Intel Compiler+ Intel Compiler 18 is confirmed to work with AMD EPYC. Use this CLI command to compile with the Intel Compiler. ```bash icc -o stream.intel stream.c -DSTATIC -DSTREAM_ARRAY_SIZE=800000000 -mcmodel=large -shared-intel -Ofast -qopenmp ```
-### GCC Compiler
+### GCC Compiler
+ For HPC workloads, AMD recommends GCC compiler 7.3 or newer. Older versions, such as 4.8.5 included with RHEL/CentOS 7.4, aren't recommended. GCC 7.3, and newer, delivers higher performance on HPL, HPCG, and DGEMM tests. ```bash gcc $(OPTIMIZATIONS) $(OMP) $(STACK) $(STREAM_PARAMETERS) stream.c -o stream.gcc ```+ </details> ## Next steps
gcc $(OPTIMIZATIONS) $(OMP) $(STACK) $(STREAM_PARAMETERS) stream.c -o stream.gcc
- Review the [HBv3-series overview](hbv3-series-overview.md) and [HC-series overview](hc-series-overview.md). - Read about the latest announcements, HPC workload examples, and performance results at the [Azure Compute Tech Community Blogs](https://techcommunity.microsoft.com/t5/azure-compute/bg-p/AzureCompute). - Learn more about [HPC](/azure/architecture/topics/high-performance-computing/) on Azure.-
virtual-machines Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/configure.md
description: Learn about configuring and optimizing the InfiniBand enabled H-ser
Previously updated : 03/28/2023 Last updated : 04/11/2023
This article shares some guidance on configuring and optimizing the InfiniBand-enabled [HB-series](sizes-hpc.md) and [N-series](sizes-gpu.md) VMs for HPC. ## VM images+ On InfiniBand (IB) enabled VMs, the appropriate drivers are required to enable RDMA.+ - The [CentOS-HPC VM images](#centos-hpc-vm-images) in the Marketplace come preconfigured with the appropriate IB drivers.
- - The CentOS-HPC version 7.9 VM image additionally comes preconfigured with the NVIDIA GPU drivers.
+ - The CentOS-HPC version 7.9 VM image additionally comes preconfigured with the NVIDIA GPU drivers.
- The [Ubuntu-HPC VM images](#ubuntu-hpc-vm-images) in the Marketplace come preconfigured with the appropriate IB drivers and GPU drivers. These VM images are based on the base CentOS and Ubuntu marketplace VM images. Scripts used in the creation of these VM images from their base CentOS Marketplace image are on the [azhpc-images repo](https://github.com/Azure/azhpc-images/tree/master/centos). On GPU enabled [N-series](sizes-gpu.md) VMs, the appropriate GPU drivers are additionally required. This can be available by the following methods:+ - Use the [Ubuntu-HPC VM images](#ubuntu-hpc-vm-images) and [CentOS-HPC VM image](#centos-hpc-vm-images) version 7.9 that come preconfigured with the NVIDIA GPU drivers and GPU compute software stack (CUDA, NCCL). - Add the GPU drivers through the [VM extensions](./extensions/hpccompute-gpu-linux.md). - Install the GPU drivers [manually](./linux/n-series-driver-setup.md).
It's also recommended to create [custom VM images](./linux/tutorial-custom-image
### VM sizes supported by the HPC VM images #### InfiniBand OFED support+ The latest Azure HPC marketplace images come with Mellanox OFED 5.1 and above, which do not support ConnectX3-Pro InfiniBand cards. ConnectX-3 Pro InfiniBand cards require MOFED 4.9 LTS version. These VM images only support ConnextX-5 and newer InfiniBand cards. The following VM size support matrix for the InfiniBand OFED in these HPC VM images:+ - [HB-series](sizes-hpc.md): HB, HC, HBv2, HBv3, HBv4 - [N-series](sizes-gpu.md): NDv2, NDv4 #### GPU driver support+ Currently only the [Ubuntu-HPC VM images](#ubuntu-hpc-vm-images) and [CentOS-HPC VM images](#centos-hpc-vm-images) version 7.9 come preconfigured with the NVIDIA GPU drivers and GPU compute software stack (CUDA, NCCL). The VM size support matrix for the GPU drivers in supported HPC VM images is as follows:+ - [N-series](sizes-gpu.md): NDv2, NDv4 VM sizes are supported with the NVIDIA GPU drivers and GPU compute software stack (CUDA, NCCL). - The other 'NC' and 'ND' VM sizes in the [N-series](sizes-gpu.md) are supported with the NVIDIA GPU drivers. All of the VM sizes in the N-series support [Gen 2 VMs](generation-2.md), though some older ones also support Gen 1 VMs. Gen 2 support is also indicated with a "01" at the end of the image URN or version.
-### CentOS-HPC VM images
+### SR-IOV enabled VMs
+
+#### CentOS-HPC VM images
-#### SR-IOV enabled VMs
For SR-IOV enabled [RDMA capable VMs](sizes-hpc.md#rdma-capable-instances), [Ubuntu-HPC VM images](#ubuntu-hpc-vm-images) and CentOS-HPC VM images version 7.6 and later are suitable. These VM images come preconfigured with the Mellanox OFED drivers for RDMA and commonly used MPI libraries and scientific computing packages. Refer to the [VM size support matrix](#vm-sizes-supported-by-the-hpc-vm-images).+ - The available or latest versions of the VM images can be listed with the following information using [CLI](/cli/azure/vm/image#az-vm-image-list) or [Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/openlogic.centos-hpc?tab=Overview).
- ```bash
+
+ ```output
"publisher": "OpenLogic", "offer": "CentOS-HPC", ```+ - Scripts used in the creation of the [Ubuntu-HPC VM images](#ubuntu-hpc-vm-images) and CentOS-HPC version 7.6 and later VM images from a base CentOS Marketplace image are on the [azhpc-images repo](https://github.com/Azure/azhpc-images/tree/master/centos). - Additionally, details on what's included in the [Ubuntu-HPC VM images](#ubuntu-hpc-vm-images) and CentOS-HPC version 7.6 and later VM images, and how to deploy them are in a [TechCommunity article](https://techcommunity.microsoft.com/t5/azure-compute/azure-hpc-vm-images/ba-p/977094).
-> [!NOTE]
+> [!NOTE]
> Among the CentOS-HPC VM images, currently only the version 7.9 VM image additionally comes preconfigured with the NVIDIA GPU drivers and GPU compute software stack (CUDA, NCCL).
-> [!NOTE]
+> [!NOTE]
> SR-IOV enabled N-series VM sizes with FDR InfiniBand (e.g. NCv3 and older) will be able to use the following CentOS-HPC VM image or older versions from the Marketplace:+ >- OpenLogic:CentOS-HPC:7.6:7.6.2020062900 >- OpenLogic:CentOS-HPC:7_6gen2:7.6.2020062901 >- OpenLogic:CentOS-HPC:7.7:7.7.2020062600
For SR-IOV enabled [RDMA capable VMs](sizes-hpc.md#rdma-capable-instances), [Ubu
>- OpenLogic:CentOS-HPC:8_1:8.1.2020062400 >- OpenLogic:CentOS-HPC:8_1-gen2:8.1.2020062401
-### Ubuntu-HPC VM images
+#### Ubuntu-HPC VM images
+ For SR-IOV enabled [RDMA capable VMs](sizes-hpc.md#rdma-capable-instances), Ubuntu-HPC VM images versions 18.04 and 20.04 are suitable. These VM images come preconfigured with the Mellanox OFED drivers for RDMA, NVIDIA GPU drivers, GPU compute software stack (CUDA, NCCL), and commonly used MPI libraries and scientific computing packages. Refer to the [VM size support matrix](#vm-sizes-supported-by-the-hpc-vm-images).+ - The available or latest versions of the VM images can be listed with the following information using [CLI](/cli/azure/vm/image#az-vm-image-list) or [Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/microsoft-dsvm.ubuntu-hpc?tab=overview).
- ```bash
+
+ ```output
"publisher": "Microsoft-DSVM", "offer": "Ubuntu-HPC", ```+ - Scripts used in the creation of the Ubuntu-HPC VM images from a base Ubuntu Marketplace image are on the [azhpc-images repo](https://github.com/Azure/azhpc-images/tree/master/ubuntu). - Additionally, details on what's included in the Ubuntu-HPC VM images, and how to deploy them are in a [TechCommunity article](https://techcommunity.microsoft.com/t5/azure-compute/azure-hpc-vm-images/ba-p/977094). ### RHEL/CentOS VM images+ The base RHEL or CentOS-based non-HPC VM images on the Marketplace can be configured for use on the SR-IOV enabled [RDMA capable VMs](sizes-hpc.md#rdma-capable-instances). Learn more about [enabling InfiniBand](./extensions/enable-infiniband.md) and [setting up MPI](setup-mpi.md) on the VMs.+ - Scripts used in the creation of the CentOS-HPC version 7.6 and later VM images from a base CentOS Marketplace image from the [azhpc-images repo](https://github.com/Azure/azhpc-images/tree/master/centos) can also be used.
-
+ ### Ubuntu VM images+ The base Ubuntu Server 16.04 LTS, 18.04 LTS, and 20.04 LTS VM images in the Marketplace are supported for both SR-IOV and non-SR-IOV [RDMA capable VMs](sizes-hpc.md#rdma-capable-instances). Learn more about [enabling InfiniBand](./extensions/enable-infiniband.md) and [setting up MPI](setup-mpi.md) on the VMs.+ - Instructions for enabling InfiniBand on the Ubuntu VM images are in a [TechCommunity article](https://techcommunity.microsoft.com/t5/azure-compute/configuring-infiniband-for-ubuntu-hpc-and-gpu-vms/ba-p/1221351). - Scripts used in the creation of the Ubuntu 18.04 and 20.04 LTS based HPC VM images from a base Ubuntu Marketplace image are on the [azhpc-images repo](https://github.com/Azure/azhpc-images/tree/master/ubuntu).
The base Ubuntu Server 16.04 LTS, 18.04 LTS, and 20.04 LTS VM images in the Mark
> Mellanox OFED 5.1 and above don't support ConnectX3-Pro InfiniBand cards on SR-IOV enabled N-series VM sizes with FDR InfiniBand (e.g. NCv3). Please use LTS Mellanox OFED version 4.9-0.1.7.0 or older on the N-series VM's with ConnectX3-Pro cards. For more information, see [Linux InfiniBand Drivers](https://www.mellanox.com/products/infiniband-drivers/linux/mlnx_ofed). ### SUSE Linux Enterprise Server VM images+ SLES 12 SP3 for HPC, SLES 12 SP3 for HPC (Premium), SLES 12 SP1 for HPC, SLES 12 SP1 for HPC (Premium), SLES 12 SP4 and SLES 15 VM images in the Marketplace are supported. These VM images come preloaded with the Network Direct drivers for RDMA (on the non-SR-IOV VM sizes) and Intel MPI version 5.1. Learn more about [setting up MPI](setup-mpi.md) on the VMs. ## Optimize VMs
If necessary for functionality or performance, [Linux Integration Services (LIS)
wget https://aka.ms/lis tar xzf lis pushd LISISO
-./upgrade.sh
+sudo ./upgrade.sh
``` ### Reclaim memory
pushd LISISO
Improve performance by automatically reclaiming memory to avoid remote memory access. ```bash
-echo 1 >/proc/sys/vm/zone_reclaim_mode
+echo 1 | sudo tee /proc/sys/vm/zone_reclaim_mode
``` Keep reclaim memory mode persistent after VM reboots: ```bash
-echo "vm.zone_reclaim_mode = 1" >> /etc/sysctl.conf sysctl -p
+sudo echo "vm.zone_reclaim_mode = 1" >> /etc/sysctl.conf sysctl -p
``` ### Disable firewall and SELinux ```bash
-systemctl stop iptables.service
-systemctl disable iptables.service
-systemctl mask firewalld
-systemctl stop firewalld.service
-systemctl disable firewalld.service
-iptables -nL
-sed -i -e's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
+sudo systemctl stop iptables.service
+sudo systemctl disable iptables.service
+sudo systemctl mask firewalld
+sudo systemctl stop firewalld.service
+sudo systemctl disable firewalld.service
+sudo iptables -nL
+sudo sed -i -e's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
``` ### Disable cpupower ```bash
-service cpupower status
-if enabled, disable it:
-service cpupower stop
+sudo service cpupower status
+```
+
+If enabled, disable it:
+
+```bash
+sudo service cpupower stop
sudo systemctl disable cpupower ``` ### Configure WALinuxAgent ```bash
-sed -i -e 's/# OS.EnableRDMA=y/OS.EnableRDMA=y/g' /etc/waagent.conf
+sudo sed -i -e 's/# OS.EnableRDMA=y/OS.EnableRDMA=y/g' /etc/waagent.conf
```
-Optionally, the WALinuxAgent may be disabled before running a job then enabled post-job for maximum VM resource availability to the HPC workload.
+Optionally, the WALinuxAgent may be disabled before running a job and then re-enabled after the job completes, to maximize VM resource availability for the HPC workload.
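A rough sketch of that pattern on a systemd-based image follows; note that the service name varies by distribution (typically `walinuxagent` on Ubuntu and `waagent` on RHEL/CentOS):

```bash
# Stop the agent before the job to free its resources, then restart it afterwards.
sudo systemctl stop walinuxagent
# ... run the HPC job ...
sudo systemctl start walinuxagent
```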
## Next steps
virtual-machines Disks Shared Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/disks-shared-enable.md
description: Configure an Azure managed disk with shared disks so that you can s
Previously updated : 01/25/2023 Last updated : 04/11/2023
virtual-machines Disks Shared https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/disks-shared.md
description: Learn about sharing Azure managed disks across multiple Linux VMs.
Previously updated : 02/22/2023 Last updated : 04/11/2023
With premium SSD, the disk IOPS and throughput is fixed, for example, IOPS of a
### Ultra Disk and Premium SSD v2 performance throttles
-Both Ultra Disks and Premium SSD v2 managed disks have the unique capability of allowing you to set your performance by exposing modifiable attributes and allowing you to modify them. By default, there are only two modifiable attributes but, shared Ultra Disks and shared Premium SSD v2 managed disks have two more attributes.
+Both Ultra Disks and Premium SSD v2 managed disks let you set your own performance targets by exposing modifiable attributes that you can adjust. By default, there are only two modifiable attributes, but shared Ultra Disks and shared Premium SSD v2 managed disks have two more. Ultra Disks and Premium SSD v2 split these attributes across each attached VM. For some examples of how this distribution of capacity, IOPS, and throughput works, see the [Examples](#examples) section.
|Attribute |Description |
virtual-machines Agent Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/agent-linux.md
Title: Azure Linux VM Agent overview
-description: Learn how to install and configure Azure Linux Agent (waagent) to manage your virtual machine's interaction with the Azure Fabric controller.
+description: Learn how to install and configure the Azure Linux VM Agent (waagent) to manage your virtual machine's interaction with the Azure fabric controller.
Last updated 03/28/2023
-# Understanding and using the Azure Linux Agent
+# Azure Linux VM Agent overview
-The Microsoft Azure Linux Agent (waagent) manages Linux and FreeBSD provisioning, and virtual machine (VM) interaction with the Azure Fabric controller. In addition to the Linux agent providing provisioning functionality, Azure also provides the option of using `cloud-init` for some Linux operating systems.
+The Microsoft Azure Linux VM Agent (waagent) manages Linux and FreeBSD provisioning, along with virtual machine (VM) interaction with the Azure fabric controller. In addition to the Linux agent providing provisioning functionality, Azure provides the option of using cloud-init for some Linux operating systems.
-
-The Linux agent provides the following functionality for Linux and FreeBSD Azure Virtual Machines deployments. For more information, see [Microsoft Azure Linux Agent](https://github.com/Azure/WALinuxAgent/blob/master/README.md).
+The Linux agent provides the following functionality for Linux and FreeBSD Azure Virtual Machines deployments. For more information, see the [Azure Linux VM Agent readme on GitHub](https://github.com/Azure/WALinuxAgent/blob/master/README.md).
### Image provisioning
The Linux agent provides the following functionality for Linux and FreeBSD Azure
- Deploys SSH public keys and key pairs - Sets the host name - Publishes the host name to the platform DNS-- Reports SSH host key fingerprint to the platform-- Manages resource disk
+- Reports the SSH host key fingerprint to the platform
+- Manages the resource disk
- Formats and mounts the resource disk - Configures swap space
The Linux agent provides the following functionality for Linux and FreeBSD Azure
### Kernel -- Configures virtual NUMA (disable for kernel <`2.6.37`)
+- Configures virtual NUMA (disabled for kernels earlier than 2.6.37)
- Consumes Hyper-V entropy for */dev/random* - Configures SCSI timeouts for the root device, which can be remote ### Diagnostics -- Console redirection to the serial port
+- Provides console redirection to the serial port
### System Center Virtual Machine Manager deployments -- Detects and bootstraps the Virtual Machine Manager agent for Linux when running in a System Center Virtual Machine Manager 2012 R2 environment
+- Detects and bootstraps the Virtual Machine Manager agent for Linux when it's running in a System Center Virtual Machine Manager 2012 R2 environment
### VM Extension -- Injects component authored by Microsoft and partners into Linux VMs to enable software and configuration automation-- VM Extension reference implementation on [https://github.com/Azure/azure-linux-extensions](https://github.com/Azure/azure-linux-extensions)
+- Injects components authored by Microsoft and partners into Linux VMs to enable software and configuration automation
+
+You can find a VM Extension reference implementation on [GitHub](https://github.com/Azure/azure-linux-extensions).
## Communication
-The information flow from the platform to the agent occurs by using two channels:
+Information flow from the platform to the agent occurs through two channels:
-- A boot-time attached DVD for VM deployments. This DVD includes an Open Virtualization Format (OVF)-compliant configuration file that includes all provisioning information other than the SSH key pairs.-- A TCP endpoint exposing a REST API used to obtain deployment and topology configuration.
+- A boot-time attached DVD for VM deployments. This DVD includes an Open Virtualization Format (OVF)-compliant configuration file that contains all provisioning information other than the SSH key pairs.
+- A TCP endpoint that exposes a REST API that's used to get deployment and topology configuration.
## Requirements
-The following systems have been tested and are known to work with the Azure Linux Agent:
+Testing has confirmed that the following systems work with the Azure Linux VM Agent.
> [!NOTE]
-> This list might differ from the [Endorsed Linux distributions on Azure](../linux/endorsed-distros.md).
+> This list might differ from the [endorsed Linux distributions on Azure](../linux/endorsed-distros.md).
| Distribution | x64 | ARM64 | |:--|:--:|:--:|
The following systems have been tested and are known to work with the Azure Linu
| CentOS | 7.x+, 8.x+ | 7.x+ | | Debian | 10+ | 11.x+ | | Flatcar Linux | 3374.2.x+ | 3374.2.x+ |
-| openSUSE | 12.3+ | **Not Supported** |
-| Oracle Linux | 6.4+, 7.x+, 8.x+ | **Not Supported** |
+| openSUSE | 12.3+ | *Not supported* |
+| Oracle Linux | 6.4+, 7.x+, 8.x+ | *Not supported* |
| Red Hat Enterprise Linux | 6.7+, 7.x+, 8.x+ | 8.6+, 9.0+ | | Rocky Linux | 9.x+ | 9.x+ | | SLES | 12.x+, 15.x+ | 15.x SP4+ | | Ubuntu | 18.04+, 20.04+, 22.04+ | 20.04+, 22.04+ | > [!IMPORTANT]
-> RHEL/Oracle Linux 6.10 is the only RHEL/OL 6 version with ELS support available. [The extended maintenance ends on June 30, 2024](https://access.redhat.com/support/policy/updates/errata).
+> RHEL/Oracle Linux 6.10 is the only RHEL/OL 6 version with Extended Lifecycle Support available. [The extended maintenance ends on June 30, 2024](https://access.redhat.com/support/policy/updates/errata).
-Other Supported Systems:
+Other supported systems:
-- FreeBSD 10+ (Azure Linux Agent v2.0.10+)
+- FreeBSD 10+ (Azure Linux VM Agent v2.0.10+)
-The Linux agent depends on some system packages in order to function properly:
+The Linux agent depends on these system packages to function properly:
- Python 2.6+ - OpenSSL 1.0+
The Linux agent depends on some system packages in order to function properly:
- Password tools: chpasswd, sudo - Text processing tools: sed, grep - Network tools: ip-route-- Kernel support for mounting UDF file systems.
+- Kernel support for mounting UDF file systems
-Ensure your VM has access to IP address 168.63.129.16. For more information, see [What is IP address 168.63.129.16](../../virtual-network/what-is-ip-address-168-63-129-16.md).
+Ensure that your VM has access to IP address 168.63.129.16. For more information, see [What is IP address 168.63.129.16?](../../virtual-network/what-is-ip-address-168-63-129-16.md).
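A quick connectivity check from inside the VM, assuming the commonly used WireServer versions probe:

```bash
# Should return an XML list of supported wire protocol versions if 168.63.129.16 is reachable.
curl -s 'http://168.63.129.16/?comp=versions'
```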
## Installation
-The preferred method of installing and upgrading the Azure Linux Agent uses an RPM or a DEB package from your distribution's package repository. All the [endorsed distribution providers](../linux/endorsed-distros.md) integrate the Azure Linux agent package into their images and repositories.
+The preferred method of installing and upgrading the Azure Linux VM Agent uses an RPM or a DEB package from your distribution's package repository. All the [endorsed distribution providers](../linux/endorsed-distros.md) integrate the Azure Linux VM Agent package into their images and repositories.
-For advanced installation options, such as installing from source or to custom locations or prefixes, see [Microsoft Azure Linux Agent](https://github.com/Azure/WALinuxAgent).
+For advanced installation options, such as installing from source or to custom locations or prefixes, see [Microsoft Azure Linux VM Agent](https://github.com/Azure/WALinuxAgent).
## Command-line options ### Flags -- `verbose`: Increase verbosity of specified command-- `force`: Skip interactive confirmation for some commands
+- `verbose`: Increases verbosity of the specified command.
+- `force`: Skips interactive confirmation for some commands.
### Commands -- `help`: Lists the supported commands and flags-- `deprovision`: Attempt to clean the system and make it suitable for reprovisioning. The operation deletes:
- - All SSH host keys, if `Provisioning.RegenerateSshHostKeyPair` is `y` in the configuration file
- - Nameserver configuration in */etc/resolv.conf*
- - Root password from */etc/shadow*, if `Provisioning.DeleteRootPassword` is `y` in the configuration file
- - Cached DHCP client leases
- - Resets host name to `localhost.localdomain`
+- `help`: Lists the supported commands and flags.
+- `deprovision`: Attempts to clean the system and make it suitable for reprovisioning. The operation deletes:
+ - All SSH host keys, if `Provisioning.RegenerateSshHostKeyPair` is `y` in the configuration file.
+ - `Nameserver` configuration in */etc/resolv.conf*.
+ - The root password from */etc/shadow*, if `Provisioning.DeleteRootPassword` is `y` in the configuration file.
+ - Cached DHCP client leases.
+
+ The client resets the host name to `localhost.localdomain`.
> [!WARNING] > Deprovisioning doesn't guarantee that the image is cleared of all sensitive information and suitable for redistribution. -- `deprovision+user`: Performs everything in `deprovision` (previous) and also deletes the last provisioned user account, obtained from */var/lib/waagent*, and associated data. Use this parameter when you deprovision an image that was previously provisioned on Azure so that it can be captured and reused.
+- `deprovision+user`: Performs everything in `deprovision` and deletes the last provisioned user account (obtained from */var/lib/waagent*) and associated data. Use this parameter when you deprovision an image that was previously provisioned on Azure so that it can be captured and reused. An example invocation is shown after this list.
- `version`: Displays the version of waagent. - `serialconsole`: Configures GRUB to mark ttyS0, the first serial port, as the boot console. This option ensures that kernel boot logs are sent to the serial port and made available for debugging.-- `daemon`: Run waagent as a daemon to manage interaction with the platform. This argument is specified to waagent in the waagent *init* script.-- `start`: Run waagent as a background process.
+- `daemon`: Runs waagent as a daemon to manage interaction with the platform. This argument is specified to waagent in the waagent *init* script.
+- `start`: Runs waagent as a background process.
## Configuration
-The */etc/waagent.conf* configuration file controls the actions of waagent. This example is a sample configuration file:
+The */etc/waagent.conf* configuration file controls the actions of waagent. Here's an example of a configuration file:
```config Provisioning.Enabled=y
HttpProxy.Port=None
AutoUpdate.Enabled=y ```
-Configuration options are of three types: `Boolean`, `String`, or `Integer`. The `Boolean` configuration options can be specified as `y` or `n`. The special keyword `None` might be used for some string type configuration entries.
+Configuration options are of three types: `Boolean`, `String`, or `Integer`. You can specify the `Boolean` configuration options as `y` or `n`. The special keyword `None` might be used for some string type configuration entries.
### Provisioning.Enabled
Type: Boolean
Default: n ```
-If `y`, the agent erases the root password in the */etc/shadow* file during the provisioning process.
+If the value is `y`, the agent erases the root password in the */etc/shadow* file during the provisioning process.
### Provisioning.RegenerateSshHostKeyPair
Type: Boolean
Default: y ```
-If `y`, the agent deletes all SSH host key pairs from */etc/ssh/* during the provisioning process, including ECDSA, DSA, and RSA. The agent generates a single fresh key pair.
+If the value is `y`, the agent deletes all SSH host key pairs from */etc/ssh/* during the provisioning process, including ECDSA, DSA, and RSA. The agent generates a single fresh key pair.
-Configure the encryption type for the fresh key pair by using the `Provisioning.SshHostKeyPairType` entry. Some distributions re-create SSH key pairs for any missing encryption types when the SSH daemon is restarted, for example, upon a reboot.
+Configure the encryption type for the fresh key pair by using the `Provisioning.SshHostKeyPairType` entry. Some distributions re-create SSH key pairs for any missing encryption types when the SSH daemon is restarted--for example, after a reboot.
### Provisioning.SshHostKeyPairType
Type: String
Default: rsa ```
-This option can be set to an encryption algorithm type that the SSH daemon supports on the VM. The typically supported values are `rsa`, `dsa`, and `ecdsa`. *putty.exe* on Windows doesn't support `ecdsa`. If you intend to use *putty.exe* on Windows to connect to a Linux deployment, use `rsa` or `dsa`.
+You can set this option to an encryption algorithm type that the SSH daemon supports on the VM. The typically supported values are `rsa`, `dsa`, and `ecdsa`. The *putty.exe* file on Windows doesn't support `ecdsa`. If you intend to use *putty.exe* on Windows to connect to a Linux deployment, use `rsa` or `dsa`.
### Provisioning.MonitorHostName
Type: Boolean
Default: y ```
-If `y`, waagent monitors the Linux VM for a host name change, as returned by the `hostname` command, and automatically updates the networking configuration in the image to reflect the change. In order to push the name change to the DNS servers, networking restarts on the VM. This restart results in brief loss of internet connectivity.
+If the value is `y`, waagent monitors the Linux VM for a host name change, as returned by the `hostname` command. Waagent then automatically updates the networking configuration in the image to reflect the change. To push the name change to the DNS servers, networking restarts on the VM. This restart results in brief loss of internet connectivity.
### Provisioning.DecodeCustomData
Type: Boolean
Default: n ```
-If `y`, waagent decodes `CustomData` from Base64.
+If the value is `y`, waagent decodes `CustomData` from Base64.
### Provisioning.ExecuteCustomData
Type: Boolean
Default: n ```
-If `y`, waagent runs `CustomData` after provisioning.
+If the value is `y`, waagent runs `CustomData` after provisioning.
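For context, custom data is supplied at deployment time. Here's a hedged Azure CLI sketch with placeholder resource names, a sample Ubuntu 22.04 image URN, and a hypothetical local file:

```bash
# Pass a local file as custom data when creating the VM; the agent (or cloud-init) consumes it.
az vm create \
  --resource-group myResourceGroup \
  --name myVM \
  --image Canonical:0001-com-ubuntu-server-jammy:22_04-lts-gen2:latest \
  --custom-data ./custom-data.txt \
  --generate-ssh-keys
```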
### Provisioning.AllowResetSysUser
Type: Boolean
Default: n ```
-This option allows the password for the system user to be reset. The default is disabled.
+This option allows the password for the system user to be reset. It's disabled by default.
### Provisioning.PasswordCryptId
Type: String
Default: 6 ```
-This option specifies the algorithm used by crypt when generating password hash. Valid values are:
+This option specifies the algorithm that `crypt` uses when it's generating a password hash. Valid values are:
-- 1: MD5 -- 2a: - Blowfish -- 5: SHA-256 -- 6: SHA-512
+- `1`: MD5
+- `2a`: Blowfish
+- `5`: SHA-256
+- `6`: SHA-512
### Provisioning.PasswordCryptSaltLength
Type: String
Default: 10 ```
-This option specifies the length of random salt used when generating password hash.
+This option specifies the length of random salt used in generating a password hash.
### ResourceDisk.Format
Type: Boolean
Default: y ```
-If `y`, waagent formats and mounts the resource disk provided by the platform, unless the file system type requested by the user in `ResourceDisk.Filesystem` is `ntfs`. The agent makes a single Linux partition (ID 83) available on the disk. This partition isn't formatted if it can be successfully mounted.
+If the value is `y`, waagent formats and mounts the resource disk that the platform provides, unless the file system type that the user requested in `ResourceDisk.Filesystem` is `ntfs`. The agent makes a single Linux partition (ID 83) available on the disk. This partition isn't formatted if it can be successfully mounted.
### ResourceDisk.Filesystem
Type: String
Default: /mnt/resource ```
-This option specifies the path at which the resource disk is mounted. The resource disk is a *temporary* disk, and might be emptied when the VM is deprovisioned.
+This option specifies the path at which the resource disk is mounted. The resource disk is a *temporary* disk and might be emptied when the VM is deprovisioned.
### ResourceDisk.MountOptions
Type: String
Default: None ```
-Specifies disk mount options to be passed to the `mount -o` command. This value is a comma-separated list of values, for example, `nodev,nosuid`. For more information, see the mount(8) manual page.
+This option specifies disk mount options to be passed to the `mount -o` command. The value is a comma-separated list of values, for example, `nodev,nosuid`. For more information, see the `mount(8)` manual page.
### ResourceDisk.EnableSwap
Type: Boolean
Default: n ```
-If set, the agent creates a swap file, */swapfile*, on the resource disk and adds it to the system swap space.
+If you set this option, the agent creates a swap file (*/swapfile*) on the resource disk and adds it to the system swap space.
### ResourceDisk.SwapSizeMB
Type: Integer
Default: 0 ```
-Specifies the size of the swap file in megabytes.
+This option specifies the size of the swap file in megabytes.
### Logs.Verbose
Type: Boolean
Default: n ```
-If set, log verbosity is boosted. Waagent logs to */var/log/waagent.log* and uses the system `logrotate` functionality to rotate logs.
+If you set this option, log verbosity is boosted. Waagent logs to */var/log/waagent.log* and uses the system `logrotate` functionality to rotate logs.
### OS.EnableRDMA
Type: Boolean
Default: n ```
-If set, the agent attempts to install and then load an RDMA kernel driver that matches the version of the firmware on the underlying hardware.
+If you set this option, the agent attempts to install and then load an RDMA kernel driver that matches the version of the firmware on the underlying hardware.
### OS.RootDeviceScsiTimeout
Type: Integer
Default: 300 ```
-This setting configures the SCSI timeout in seconds on the OS disk and data drives. If not set, the system defaults are used.
+This option configures the SCSI timeout in seconds on the OS disk and data drives. If it's not set, the system defaults are used.
### OS.OpensslPath
Type: String
Default: None ```
-This setting can be used to specify an alternate path for the *openssl* binary to use for cryptographic operations.
+You can use this option to specify an alternate path for the *openssl* binary to use for cryptographic operations.
### HttpProxy.Host, HttpProxy.Port
Type: String
Default: None ```
-If set, the agent uses this proxy server to access the internet.
+If you set this option, the agent uses this proxy server to access the internet.
### AutoUpdate.Enabled
Default: y
Enable or disable autoupdate for goal state processing. The default value is `y`.
-## Linux guest agent automatic logs collection
+## Automatic log collection in the Azure Linux Guest Agent
-As of version 2.7+, The Azure Linux guest agent has a feature to automatically collect some logs and upload them. This feature currently requires `systemd`, and uses a new `systemd` slice called `azure-walinuxagent-logcollector.slice` to manage resources while it performs the collection.
+As of version 2.7+, the Azure Linux Guest Agent has a feature to automatically collect some logs and upload them. This feature currently requires `systemd`. It uses a new `systemd` slice called `azure-walinuxagent-logcollector.slice` to manage resources while it performs the collection.
-The purpose is to facilitate offline analysis. The agent produces a *.zip* file of some diagnostics logs before uploading them to the VM's host. Engineering teams and support professionals can retrieve the file to investigate issues for the VM owner. More technical information on the files collected by the guest agent can be found in the *azurelinuxagent/common/logcollector_manifests.py* file in the [agent's GitHub repository](https://github.com/Azure/WALinuxAgent).
+The purpose is to facilitate offline analysis. The agent produces a *.zip* file of some diagnostics logs before uploading them to the VM's host. Engineering teams and support professionals can retrieve the file to investigate issues for the VM owner. For technical information on the files that the Azure Linux Guest Agent collects, see the *azurelinuxagent/common/logcollector_manifests.py* file in the [agent's GitHub repository](https://github.com/Azure/WALinuxAgent).
-This option can be disabled by editing */etc/waagent.conf*. Update `Logs.Collect` to `n`.
+You can disable this option by editing */etc/waagent.conf*. Update `Logs.Collect` to `n`.
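For example, a minimal in-place edit, assuming the default `Logs.Collect=y` entry is present in the file:

```bash
# Turn off automatic log collection in the agent configuration.
sudo sed -i 's/^Logs.Collect=y/Logs.Collect=n/' /etc/waagent.conf
```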
## Ubuntu Cloud Images
-Ubuntu Cloud Images use [cloud-init](https://launchpad.net/ubuntu/+source/cloud-init) to do many configuration tasks that the Azure Linux Agent would otherwise manage. The following differences apply:
+Ubuntu Cloud Images use [cloud-init](https://launchpad.net/ubuntu/+source/cloud-init) to do many configuration tasks that the Azure Linux VM Agent would otherwise manage. The following differences apply:
- `Provisioning.Enabled` defaults to `n` on Ubuntu Cloud Images that use cloud-init to perform provisioning tasks. - The following configuration parameters have no effect on Ubuntu Cloud Images that use cloud-init to manage the resource disk and swap space:
Ubuntu Cloud Images use [cloud-init](https://launchpad.net/ubuntu/+source/cloud-
- `ResourceDisk.EnableSwap` - `ResourceDisk.SwapSizeMB` -- For more information, see the following resources to configure the resource disk mount point and swap space on Ubuntu Cloud Images during provisioning:
+To configure the resource disk mount point and swap space on Ubuntu Cloud Images during provisioning, see the following resources:
- - [Ubuntu Wiki: AzureSwapPartitions](https://go.microsoft.com/fwlink/?LinkID=532955&clcid=0x409)
- - [Deploy applications to a Windows virtual machine in Azure with the Custom Script Extension](../windows/tutorial-automate-vm-deployment.md)
+- [Ubuntu wiki: AzureSwapPartitions](https://go.microsoft.com/fwlink/?LinkID=532955&clcid=0x409)
+- [Deploy applications to a Windows virtual machine in Azure with the Custom Script Extension](../windows/tutorial-automate-vm-deployment.md)
virtual-machines Agent Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/agent-windows.md
Title: Azure Virtual Machine Agent Overview
-description: Azure Virtual Machine Agent Overview
+ Title: Azure Windows VM Agent overview
+description: Learn how to install and detect the Azure Windows VM Agent to manage your virtual machine's interaction with the Azure fabric controller.
Last updated 02/27/2023
-# Azure Virtual Machine Agent overview
-The Microsoft Azure Virtual Machine Agent (VM Agent) is a secure, lightweight process that manages virtual machine (VM) interaction with the Azure Fabric Controller. The VM Agent has a primary role in enabling and executing Azure virtual machine extensions. VM Extensions enable post-deployment configuration of VM, such as installing and configuring software. VM extensions also enable recovery features such as resetting the administrative password of a VM. Without the Azure VM Agent, VM extensions cannot be run.
+# Azure Windows VM Agent overview
-This article details installation and detection of the Azure Virtual Machine Agent.
+The Microsoft Azure Windows VM Agent is a secure, lightweight process that manages virtual machine (VM) interaction with the Azure fabric controller. The Azure Windows VM Agent has a primary role in enabling and executing Azure virtual machine extensions. VM extensions enable post-deployment configuration of VMs, such as installing and configuring software. VM extensions also enable recovery features such as resetting the administrative password of a VM. Without the Azure Windows VM Agent, you can't run VM extensions.
-## Prerequisites
+This article describes how to install and detect the Azure Windows VM Agent.
-### **Windows OSΓÇÖ Supported**
-| **Windows OS** | **x64** |
-|:-|:-:|
-| Windows 10 | Supported |
-| Windows 11 | Supported |
-| Windows Server 2008 SP2 | Supported |
-| Windows Server 2008 R2 | Supported |
-| Windows Server 2012 | Supported |
-| Windows Server 2012 R2 | Supported |
-| Windows Server 2016 | Supported |
-| Windows Server 2016 Core | Supported |
-| Windows Server 2019 | Supported |
-| Windows Server 2019 Core | Supported |
-| Windows Server 2022 | Supported |
-| Windows Server 2022 Core | Supported |
+## Prerequisites
+The Azure Windows VM Agent supports the x64 architecture for these Windows operating systems:
+
+- Windows 10
+- Windows 11
+- Windows Server 2008 SP2
+- Windows Server 2008 R2
+- Windows Server 2012
+- Windows Server 2012 R2
+- Windows Server 2016
+- Windows Server 2016 Core
+- Windows Server 2019
+- Windows Server 2019 Core
+- Windows Server 2022
+- Windows Server 2022 Core
> [!IMPORTANT]
-> - The Windows VM Agent needs at least Windows Server 2008 SP2 (64-bit) to run, with the .NET Framework 4.0. See [Minimum version support for virtual machine agents in Azure](https://support.microsoft.com/help/4049215/extensions-and-virtual-machine-agent-minimum-version-support).
+> - The Azure Windows VM Agent needs at least Windows Server 2008 SP2 (64-bit) to run, with the .NET Framework 4.0. See [Minimum version support for virtual machine agents in Azure](https://support.microsoft.com/help/4049215/extensions-and-virtual-machine-agent-minimum-version-support).
>
-> - Ensure your VM has access to IP address 168.63.129.16. For more information, see [What is IP address 168.63.129.16](../../virtual-network/what-is-ip-address-168-63-129-16.md).
+> - Ensure that your VM has access to IP address 168.63.129.16. For more information, see [What is IP address 168.63.129.16?](../../virtual-network/what-is-ip-address-168-63-129-16.md).
>
-> - Ensure that DHCP is enabled inside the guest VM. This is required to get the host or fabric address from DHCP for the IaaS VM Agent and extensions to work. If you need a static private IP, you should configure it through the Azure portal or PowerShell, and make sure the DHCP option inside the VM is enabled. [Learn more](../../virtual-network/ip-services/virtual-networks-static-private-ip-arm-ps.md) about setting up a static IP address with PowerShell.
+> - Ensure that DHCP is enabled inside the guest VM. This is required to get the host or fabric address from DHCP for the Azure Windows VM Agent and extensions to work. If you need a static private IP address, you should configure it through the Azure portal or PowerShell, and make sure the DHCP option inside the VM is enabled. [Learn more](../../virtual-network/ip-services/virtual-networks-static-private-ip-arm-ps.md) about setting up a static IP address by using PowerShell.
>
-> - Running the VM Agent in a "Nested Virtualization" VM might lead to unpredictable behavior, hence it's not supported in that Dev/Test scenario.
+> - Running the Azure Windows VM Agent in a nested virtualization VM might lead to unpredictable behavior, so it's not supported in that dev/test scenario.
-## Install the VM Agent
+## Install the Azure Windows VM Agent
### Azure Marketplace image
-The Azure VM Agent is installed by default on any Windows VM deployed from an Azure Marketplace image. When you deploy an Azure Marketplace image from the portal, PowerShell, Command Line Interface, or an Azure Resource Manager template, the Azure VM Agent is also installed.
+The Azure Windows VM Agent is installed by default on any Windows VM deployed from an Azure Marketplace image. When you deploy an Azure Marketplace image from the Azure portal, PowerShell, the Azure CLI, or an Azure Resource Manager template, the Azure Windows VM Agent is also installed.
-The Windows Guest Agent Package is broken into two parts:
+The Azure Windows VM Agent package has two parts:
-- Provisioning Agent (PA)
-- Windows Guest Agent (WinGA)
+- Azure Windows Provisioning Agent (PA)
+- Azure Windows Guest Agent (WinGA)
-To boot a VM you must have the PA installed on the VM, however the WinGA does not need to be installed. At VM deploy time, you can select not to install the WinGA. The following example shows how to select the *provisionVmAgent* option with an Azure Resource Manager template:
+To boot a VM, you must have the PA installed on the VM. However, the WinGA doesn't need to be installed. At VM deployment time, you can select not to install the WinGA. The following example shows how to select the `provisionVmAgent` option with an Azure Resource Manager template:
```json {
To boot a VM you must have the PA installed on the VM, however the WinGA does no
} ```
-If you do not have the Agents installed, you cannot use some Azure services, such as Azure Backup or Azure Security. These services require an extension to be installed. If you have deployed a VM without the WinGA, you can install the latest version of the agent later.
+If you don't have the agents installed, you can't use some Azure services, such as Azure Backup or Azure Security. These services require an extension to be installed. If you deploy a VM without the WinGA, you can install the latest version of the agent later.
### Manual installation
-The Windows VM agent can be manually installed with a Windows installer package. Manual installation may be necessary when you create a custom VM image that is deployed to Azure. To manually install the Windows VM Agent, [download the VM Agent installer](https://github.com/Azure/WindowsVMAgent) and select the latest release. You can also search a specific version in the [GitHub Windows IaaS VM Agent releases](https://github.com/Azure/WindowsVMAgent/releases). The VM Agent is supported on Windows Server 2008 (64 bit) and later.
+
+You can manually install the Azure Windows VM Agent by using a Windows Installer package. Manual installation might be necessary when you create a custom VM image that's deployed to Azure.
+
+To manually install the Azure Windows VM Agent, [download the installer](https://github.com/Azure/WindowsVMAgent) and select the latest release. You can also search for a specific version in the [GitHub page for Azure Windows VM Agent releases](https://github.com/Azure/WindowsVMAgent/releases). The Azure Windows VM Agent is supported on Windows Server 2008 (64-bit) and later.
> [!NOTE]
-> It is important to update the AllowExtensionOperations option after manually installing the VMAgent on a VM that was deployed from image without ProvisionVMAgent enable.
+> It's important to update the `AllowExtensionOperations` option after you manually install the Azure Windows VM Agent on a VM that was deployed from an image without `ProvisionVMAgent` enabled.
```powershell
$vm.OSProfile.AllowExtensionOperations = $true
$vm | Update-AzVM
```
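The snippet above assumes that `$vm` already holds the VM object. A minimal way to retrieve it first (the resource group and VM names here are placeholders, not values from this article) is:

```powershell
# Placeholder names; replace with your own resource group and VM. Requires the Az.Compute module.
$vm = Get-AzVM -ResourceGroupName "myResourceGroup" -Name "myVM"
```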
-## Detect the VM Agent
+## Detect the Azure Windows VM Agent
### PowerShell
-The Azure Resource Manager PowerShell module can be used to retrieve information about Azure VMs. To see information about a VM, such as the provisioning state for the Azure VM Agent, use [Get-AzVM](/powershell/module/az.compute/get-azvm):
+You can use the Azure Resource Manager PowerShell module to get information about Azure VMs. To see information about a VM, such as the provisioning state for the Azure Windows VM Agent, use [Get-AzVM](/powershell/module/az.compute/get-azvm):
```powershell
Get-AzVM
```
-The following condensed example output shows the *ProvisionVMAgent* property nested inside `OSProfile`. This property can be used to determine if the VM agent has been deployed to the VM:
+The following condensed example output shows the `ProvisionVMAgent` property nested inside `OSProfile`. You can use this property to determine if the VM agent has been deployed to the VM.
```powershell OSProfile :
OSProfile :
EnableAutomaticUpdates : True ```
-The following script can be used to return a concise list of VM names (running Windows OS) and the state of the VM Agent:
+Use the following script to return a concise list of VM names (running Windows OS) and the state of the Azure Windows VM Agent:
```powershell $vms = Get-AzVM
foreach ($vm in $vms) {
} ```
-The following script can be used to return a concise list of VM names (running Linux OS) and the state of the VM Agent:
+Use the following script to return a concise list of VM names (running Linux OS) and the state of the Azure Windows VM Agent:
```powershell $vms = Get-AzVM
foreach ($vm in $vms) {
} ```
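As a rough sketch of what such a listing script can look like for Windows VMs (this isn't the exact script from the article; it assumes the standard `Az.Compute` object model), you can filter on the `WindowsConfiguration` block:

```powershell
# Sketch: list Windows VMs and whether the VM agent was provisioned at deployment time.
$vms = Get-AzVM
foreach ($vm in $vms) {
    if ($vm.OSProfile.WindowsConfiguration) {
        [PSCustomObject]@{
            Name             = $vm.Name
            ProvisionVMAgent = $vm.OSProfile.WindowsConfiguration.ProvisionVMAgent
        }
    }
}
```

For Linux VMs, the same pattern applies with `LinuxConfiguration` in place of `WindowsConfiguration`.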
-### Manual Detection
+### Manual detection
-When logged in to a Windows VM, Task Manager can be used to examine running processes. To check for the Azure VM Agent, open Task Manager, click the *Details* tab, and look for a process name **WindowsAzureGuestAgent.exe**. The presence of this process indicates that the VM agent is installed.
+When you're logged in to a Windows VM, you can use Task Manager to examine running processes. To check for the Azure Windows VM Agent, open Task Manager, select the **Details** tab, and look for a process named *WindowsAzureGuestAgent.exe*. The presence of this process indicates that the VM agent is installed.
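If you prefer a scripted check over Task Manager, a minimal sketch run inside the guest VM can look for the same process:

```powershell
# Returns the process object if the agent is running; returns nothing otherwise.
Get-Process -Name WindowsAzureGuestAgent -ErrorAction SilentlyContinue
```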
+## Upgrade the Azure Windows VM Agent
-## Upgrade the VM Agent
-The Azure VM Agent for Windows is automatically upgraded on images deployed from the Azure Marketplace. The new versions are stored in Azure Storage, so please ensure you don't have firewalls blocking access. As new VMs are deployed to Azure, they receive the latest VM agent at VM provision time. If you have installed the agent manually or are deploying custom VM images you will need to manually update to include the new VM agent at image creation time.
+The Azure Windows VM Agent is automatically upgraded on images deployed from Azure Marketplace. The new versions are stored in Azure Storage, so ensure that you don't have firewalls blocking access. As new VMs are deployed to Azure, they receive the latest VM agent at VM provision time. If you installed the agent manually or are deploying custom VM images, you need to manually update to include the new VM agent at image creation time.
-## Windows Guest Agent Automatic Logs Collection
-Windows Guest Agent has a feature to automatically collect some logs. This feature is controlled by the CollectGuestLogs.exe process.
-It exists for both PaaS Cloud Services and IaaS Virtual Machines and its goal is to quickly & automatically collect some diagnostics logs from a VM - so they can be used for offline analysis.
-The collected logs are Event Logs, OS Logs, Azure Logs and some registry keys. It produces a ZIP file that is transferred to the VM's Host. This ZIP file can then be looked at by Engineering Teams and Support professionals to investigate issues on request of the customer owning the VM.
+## Azure Windows Guest Agent automatic log collection
-## Guest Agent and OSProfile certificates
-The Azure VM Agent is responsible for installing the certificates referenced in the `OSProfile` of a VM or Virtual Machine Scale Set.
-If you manually remove these certificates from the certificates MMC console inside the guest VM, it is expected that the guest agent will add them back.
-To permanently remove a certificate, you will have to remove it from the `OSProfile`, and then remove it from within the guest operating system.
+The Azure Windows Guest Agent has a feature to automatically collect some logs. The *CollectGuestLogs.exe* process controls this feature. It exists for both platform as a service (PaaS) cloud services and infrastructure as a service (IaaS) VMs. Its goal is to quickly and automatically collect diagnostics logs from a VM, so they can be used for offline analysis.
-For a Virtual Machine, use the [Remove-AzVMSecret]() to remove certificates from the `OSProfile`.
+The collected logs are event logs, OS logs, Azure logs, and some registry keys. The agent produces a ZIP file that's transferred to the VM's host. Engineering teams and support professionals can then use this ZIP file to investigate issues on the request of the customer who owns the VM.
-For more information on Virtual Machine Scale Set certificates, see [Virtual Machine Scale Sets - How do I remove deprecated certificates?](../../virtual-machine-scale-sets/virtual-machine-scale-sets-faq.yml#how-do-i-remove-deprecated-certificates-)
+## Azure Windows Guest Agent and OSProfile certificates
+The Azure Windows VM Agent installs the certificates referenced in the `OSProfile` value of a VM or a virtual machine scale set. If you manually remove these certificates from the Microsoft Management Console (MMC) Certificates snap-in inside the guest VM, the Azure Windows Guest Agent will add them back. To permanently remove a certificate, you have to remove it from `OSProfile`, and then remove it from within the guest operating system.
+
+For a virtual machine, use [Remove-AzVMSecret](/powershell/module/az.compute/remove-azvmsecret) to remove certificates from `OSProfile`.
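For example, a minimal sketch (with placeholder resource group and VM names) of removing secrets from a VM's profile and pushing the change back might look like this:

```powershell
# Placeholder names; replace with your own values.
$vm = Get-AzVM -ResourceGroupName "myResourceGroup" -Name "myVM"

# Remove-AzVMSecret edits the local VM object; Update-AzVM applies the change in Azure.
# Optionally pass -SourceVaultId to limit removal to specific vaults.
$vm = Remove-AzVMSecret -VM $vm
$vm | Update-AzVM
```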
+
+For more information on certificates for virtual machine scale sets, see [Azure Virtual Machine Scale Sets - How do I remove deprecated certificates?](../../virtual-machine-scale-sets/virtual-machine-scale-sets-faq.yml#how-do-i-remove-deprecated-certificates-).
## Next steps
-For more information about VM extensions, see [Azure virtual machine extensions and features overview](overview.md).
+
+For more information about VM extensions, see [Azure virtual machine extensions and features](overview.md).
virtual-machines Enable Infiniband https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/enable-infiniband.md
description: Learn how to enable InfiniBand on Azure HPC VMs.
Previously updated : 03/10/2023 Last updated : 04/12/2023
There are various ways to enable InfiniBand on the capable VM sizes. ## VM Images with InfiniBand drivers+ See [VM Images](../configure.md#vm-images) for a list of supported VM Images on the Marketplace, which come pre-loaded with InfiniBand drivers (for SR-IOV or non-SR-IOV VMs) or can be configured with the appropriate drivers for [RDMA capable VMs](../sizes-hpc.md#rdma-capable-instances). The [CentOS-HPC](../configure.md#centos-hpc-vm-images) and [Ubuntu-HPC](../configure.md#ubuntu-hpc-vm-images) VM images in the Marketplace are the easiest way to get started. ## InfiniBand Driver VM Extensions+ On Linux, the [InfiniBandDriverLinux VM extension](hpc-compute-infiniband-linux.md) can be used to install the Mellanox OFED drivers and enable InfiniBand on the SR-IOV enabled HB-series and N-series VMs. On Windows, the [InfiniBandDriverWindows VM extension](hpc-compute-infiniband-windows.md) installs Windows Network Direct drivers (on non-SR-IOV VMs) or Mellanox OFED drivers (on SR-IOV VMs) for RDMA connectivity. In certain deployments of A8 and A9 instances, the HpcVmDrivers extension is added automatically. Note that the HpcVmDrivers VM extension is being deprecated; it will not be updated.
On Windows, the [InfiniBandDriverWindows VM extension](hpc-compute-infiniband-wi
To add the VM extension to a VM, you can use [Azure PowerShell](/powershell/azure/) cmdlets. For more information, see [Virtual machine extensions and features](overview.md). You can also work with extensions for VMs deployed in the [classic deployment model](/previous-versions/azure/virtual-machines/windows/classic/agents-and-extensions-classic). ## Manual installation+ [Mellanox OpenFabrics drivers (OFED)](https://www.mellanox.com/products/InfiniBand-VPI-Software) can be manually installed on the [SR-IOV enabled](../sizes-hpc.md#rdma-capable-instances) [HB-series](../sizes-hpc.md) and [N-series](../sizes-gpu.md) VMs. ### Linux
-The [OFED drivers for Linux](https://www.mellanox.com/products/infiniband-drivers/linux/mlnx_ofed) can be installed with the example below. Though the example here is for RHEL/CentOS, but the steps are general and can be used for any compatible Linux operating system such as Ubuntu (16.04, 18.04 19.04, 20.04) and SLES (12 SP4 and 15). More examples for other distros are on the [azhpc-images repo](https://github.com/Azure/azhpc-images/blob/master/ubuntu/ubuntu-18.x/ubuntu-18.04-hpc/install_mellanoxofed.sh). The inbox drivers also work as well, but the Mellanox OFED drivers provide more features.
+
+The [OFED drivers for Linux](https://www.mellanox.com/products/infiniband-drivers/linux/mlnx_ofed) can be installed with the example below. Although the example here is for RHEL/CentOS, the steps are general and can be used for any compatible Linux operating system, such as Ubuntu (18.04, 19.04, 20.04) and SLES (12 SP4+ and 15). More examples for other distros are on the [azhpc-images repo](https://github.com/Azure/azhpc-images/blob/master/ubuntu/ubuntu-18.x/ubuntu-18.04-hpc/install_mellanoxofed.sh). The inbox drivers also work, but the Mellanox OFED drivers provide more features.
```bash MLNX_OFED_DOWNLOAD_URL=http://content.mellanox.com/ofed/MLNX_OFED-5.0-2.1.8.0/MLNX_OFED_LINUX-5.0-2.1.8.0-rhel7.7-x86_64.tgz
KERNEL=${KERNEL[-1]}
# Uncomment the lines below if you are running this on a VM #RELEASE=( $(cat /etc/centos-release | awk '{print $4}') ) #yum -y install http://olcentgbl.trafficmanager.net/centos/${RELEASE}/updates/x86_64/kernel-devel-${KERNEL}.rpm
-yum install -y kernel-devel-${KERNEL}
-./MLNX_OFED_LINUX-5.0-2.1.8.0-rhel7.7-x86_64/mlnxofedinstall --kernel $KERNEL --kernel-sources /usr/src/kernels/${KERNEL} --add-kernel-support --skip-repo
+sudo yum install -y kernel-devel-${KERNEL}
+sudo ./MLNX_OFED_LINUX-5.0-2.1.8.0-rhel7.7-x86_64/mlnxofedinstall --kernel $KERNEL --kernel-sources /usr/src/kernels/${KERNEL} --add-kernel-support --skip-repo
``` ### Windows+ For Windows, download and install the [Mellanox OFED for Windows drivers](https://www.mellanox.com/products/adapter-software/ethernet/windows/winof-2). ## Enable IP over InfiniBand (IB)
virtual-machines Export Templates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/export-templates.md
Title: Exporting Azure Resource Groups that contain VM extensions
-description: Export Resource Manager templates that include virtual machine extensions.
+ Title: Export Azure resource groups that contain VM extensions
+description: Learn how to export Resource Manager templates that include virtual machine extensions.
Last updated 03/29/2023
-# Exporting Resource Groups that contain VM extensions
+# Export resource groups that contain VM extensions
-Azure Resource Groups can be exported into a new Resource Manager template that can then be redeployed. The export process interprets existing resources, and creates a Resource Manager template that when deployed results in a similar Resource Group. When using the Resource Group export option against a Resource Group containing Virtual Machine extensions, several items need to be considered such as extension compatibility and protected settings.
+You can export Azure resource groups into a new Azure Resource Manager template that you can then redeploy. The export process interprets existing resources and creates a Resource Manager template that, when deployed, results in a similar resource group. When you're using the export option against a resource group that contains virtual machine (VM) extensions, you need to consider items such as extension compatibility and protected settings.
-This document details how the Resource Group export process works regarding virtual machine extensions, including a list of supported extensions, and details on handling secured data.
+This article details how the resource group export process works for virtual machine extensions. It includes a list of supported extensions and details on how to handle secured data.
-## Supported Virtual Machine Extensions
+## Supported VM extensions
-Many Virtual Machine extensions are available. Not all extensions can be exported into a Resource Manager template using the "Automation Script" feature. If a virtual machine extension is not supported, it needs to be manually placed back into the exported template.
+Many VM extensions are available. You can't export all extensions into a Resource Manager template by using the automation script feature. If a virtual machine extension is not supported, you need to manually place it back into the exported template.
-The following extensions can be exported with the automation script feature.
+You can export the following extensions by using the automation script feature:
-> Acronis Backup, Acronis Backup Linux, Bg Info, BMC CTM Agent Linux, BMC CTM Agent Windows, Chef Client, Custom Script, Custom Script Extension, Custom Script for Linux, Datadog Linux Agent, Datadog Windows Agent, Docker Extension, DSC Extension, Dynatrace Linux, Dynatrace Windows, HPE Security Application Defender for Cloud, IaaS Antimalware, IaaS Diagnostics, Linux Chef Client, Linux Diagnostic, OS Patching For Linux, Puppet Agent, Site 24x7 Apm Insight, Site 24x7 Linux Server, Site 24x7 Windows Server, Trend Micro DSA, Trend Micro DSA Linux, VM Access For Linux, VM Access For Linux, VM Snapshot, VM Snapshot Linux
+> Acronis Backup, Acronis Backup Linux, BGInfo, BMC Control-M Agent Linux, BMC Control-M Agent Windows, Chef Client, Custom Script, Custom Script Extension, Custom Script for Linux, Datadog Linux Agent, Datadog Windows Agent, Docker Extension, DSC Extension, Dynatrace Linux, Dynatrace Windows, HPE Security Application Defender for Cloud, IaaS Antimalware, IaaS Diagnostics, Linux Chef Client, Linux Diagnostic, OS Patching for Linux, Puppet Agent, Site24x7 APM Insight, Site24x7 Linux Server, Site24x7 Windows Server, Trend Micro DSA, Trend Micro DSA Linux, VM Access For Linux, VM Access For Linux, VM Snapshot, VM Snapshot Linux
-## Export the Resource Group
+## Export the resource group
-To export a Resource Group into a reusable template, complete the following steps:
+To export a resource group into a reusable template, complete the following steps:
-1. Sign in to the Azure portal
-2. On the Hub Menu, click Resource Groups
-3. Select the target resource group from the list
-4. In the Resource Group blade, select **Export template** under the **Automation** section
+1. Sign in to the Azure portal.
+2. On the **Hub** menu, select **Resource Groups**.
+3. Select the target resource group from the list.
+4. On the **Resource group** pane, select **Export template** under the **Automation** section.
-![Template Export](./media/export-templates/template-export.png)
+![Screenshot that shows selections for exporting a resource group into a template.](./media/export-templates/template-export.png)
-The Azure Resource Manager automations script produces a Resource Manager template, a parameters file, and several sample deployment scripts such as PowerShell and Azure CLI. At this point, the exported template can be downloaded using the download button, added as a new template to the template library, or redeployed using the deploy button.
+The Azure Resource Manager automation script produces a Resource Manager template, a parameters file, and several sample deployment scripts, such as PowerShell and Azure CLI scripts. At this point, you can download the exported template by using the download button, add the template to the template library, or redeploy the template by using the **Deploy** button.
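If you'd rather script the export than use the portal, a minimal PowerShell sketch (assuming a placeholder resource group name) is:

```powershell
# Exports the resource group definition to template.json in the current directory.
Export-AzResourceGroup -ResourceGroupName "myResourceGroup" -Path ".\template.json"
```

The same caveats about unsupported extensions and protected settings apply to a scripted export.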
## Configure protected settings
-Many Azure virtual machine extensions include a protected settings configuration, that encrypts sensitive data such as credentials and configuration strings. Protected settings are not exported with the automation script. If necessary, protected settings need to be reinserted into the exported templated.
+Many Azure VM extensions include a protected settings configuration that encrypts sensitive data, such as credentials and configuration strings. Protected settings are not exported with the automation script. If necessary, reinsert protected settings into the exported template.
-### Step 1 - Remove template parameter
+### Step 1: Remove the template parameter
-When the Resource Group is exported, a single template parameter is created to provide a value to the exported protected settings. This parameter can be removed. To remove the parameter, look through the parameter list and delete the parameter that looks similar to this JSON example.
+When you export a resource group, a single template parameter is created to provide a value to the exported protected settings. You can remove this parameter.
+
+To remove the parameter, look through the list of parameters and delete the one that looks similar to this JSON example:
```json "extensions_extensionname_protectedSettings": {
When the Resource Group is exported, a single template parameter is created to p
} ```
-### Step 2 - Get protected settings properties
+### Step 2: Get properties for protected settings
-Because each protected setting has a set of required properties, a list of these properties need to be gathered. Each parameter of the protected settings configuration can be found in the [Azure Resource Manager schema on GitHub](https://raw.githubusercontent.com/Azure/azure-resource-manager-schemas/master/schemas/2015-08-01/Microsoft.Compute.json). This schema only includes the parameter sets for the extensions listed in the overview section of this document.
+Because each protected setting has a set of required properties, you need to gather a list of these properties. You can find each parameter of the protected settings configuration in the [Azure Resource Manager schema on GitHub](https://raw.githubusercontent.com/Azure/azure-resource-manager-schemas/master/schemas/2015-08-01/Microsoft.Compute.json). This schema includes only the parameter sets for the extensions that are listed in the overview section of this article.
-From within the schema repository, search for the desired extension, for this example `IaaSDiagnostics`. Once the extensions `protectedSettings` object has been located, take note of each parameter. In the example of the `IaasDiagnostic` extension, the require parameters are `storageAccountName`, `storageAccountKey`, and `storageAccountEndPoint`.
+From within the schema repository, search for the desired extension. After you find the extension's `protectedSettings` object, take note of each parameter. In the following example of the `IaasDiagnostic` extension, the required parameters are `storageAccountName`, `storageAccountKey`, and `storageAccountEndPoint`:
```json "protectedSettings": {
From within the schema repository, search for the desired extension, for this ex
} ```
-### Step 3 - Re-create the protected configuration
+### Step 3: Re-create the protected configuration
-On the exported template, search for `protectedSettings` and replace the exported protected setting object with a new one that includes the required extension parameters and a value for each one.
+On the exported template, search for `protectedSettings`. Replace the exported protected setting object with a new one that includes the required extension parameters and a value for each one.
In the example of the `IaasDiagnostic` extension, the new protected setting configuration would look like the following example:
The final extension resource looks similar to the following JSON example:
} ```
-If using template parameters to provide property values, these need to be created. When creating template parameters for protected setting values, make sure to use the `SecureString` parameter type so that sensitive values are secured. For more information on using parameters, see [Authoring Azure Resource Manager templates](../../azure-resource-manager/templates/syntax.md).
+If you're using template parameters to provide property values, you need to create them. When you're creating template parameters for protected setting values, use the `SecureString` parameter type to help secure sensitive values. For more information on using parameters, see [Authoring Azure Resource Manager templates](../../azure-resource-manager/templates/syntax.md).
-In the example of the `IaasDiagnostic` extension, the following parameters would be created in the parameters section of the Resource Manager template.
+In the example of the `IaasDiagnostic` extension, the following parameters would be created in the parameters section of the Resource Manager template:
```json "storageAccountName": {
In the example of the `IaasDiagnostic` extension, the following parameters would
} ```
-At this point, the template can be deployed using any template deployment method.
+At this point, you can deploy the template by using any template deployment method.
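As one option, a minimal PowerShell sketch that deploys the exported template and supplies the secure values (the file and parameter names here follow the earlier `IaasDiagnostic` example and are placeholders):

```powershell
# Prompt for the storage account key so it never appears in plain text.
$storageAccountKey = Read-Host -Prompt "Storage account key" -AsSecureString

# Template parameters defined in the exported template are exposed as dynamic parameters.
New-AzResourceGroupDeployment `
    -ResourceGroupName "myResourceGroup" `
    -TemplateFile ".\template.json" `
    -storageAccountName "mystorageaccount" `
    -storageAccountKey $storageAccountKey
```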
virtual-machines Extensions Rmpolicy Howto Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/extensions-rmpolicy-howto-cli.md
Previously updated : 07/05/2022 Last updated : 04/11/2023 # Use Azure Policy to restrict extensions installation on Linux VMs
In order to restrict what extensions are available, you need to create a [rule](
This example demonstrates how to deny the installation of disallowed VM extensions by defining a rules file in Azure Cloud Shell. However, if you're working in Azure CLI locally, you can create a local file and replace the path (~/clouddrive) with the path to the file on your local file system.
-In a [bash Cloud Shell](https://shell.azure.com/bash), type:
+1. In a [bash Cloud Shell](https://shell.azure.com/bash), create the file `~/clouddrive/azurepolicy.rules.json` using any text editor.
-```bash
-vim ~/clouddrive/azurepolicy.rules.json
-```
-
-Copy and paste the following `.json` data into the file.
+2. Copy and paste the following `.json` contents into the new file and save it.
```json {
Copy and paste the following `.json` data into the file.
} ```
-When you're finished, press **Esc**, and then type **:wq** to save and close the file.
- ## Create a parameters file You also need a [parameters](../../governance/policy/concepts/definition-structure.md#parameters) file that creates a structure for you to use for passing in a list of the unauthorized extensions. This example shows you how to create a parameter file for Linux VMs in Cloud Shell.
-In the bash Cloud Shell opened before type:
-
-```bash
-vim ~/clouddrive/azurepolicy.parameters.json
-```
+1. In the bash Cloud Shell you opened earlier, create the file `~/clouddrive/azurepolicy.parameters.json` using any text editor.
-Copy and paste the following `.json` data into the file.
+2. Copy and paste the following `.json` contents into the new file and save it.
```json {
Copy and paste the following `.json` data into the file.
} ```
-When you're finished, press **Esc**, and then type **:wq** to save and close the file.
- ## Create the policy A _policy definition_ is an object used to store the configuration that you would like to use. The policy definition uses the rules and parameters files to define the policy. Create the policy definition using [az policy definition create](/cli/azure/role/assignment).
Test the policy by creating a new VM and adding a new user.
az vm create \ --resource-group myResourceGroup \ --name myVM \
- --image UbuntuLTS \
+ --image myImage \
--generate-ssh-keys ```
+> [!NOTE]
+> Replace the `myResourceGroup`, `myVM`, and `myImage` values with your own.
+ Try to create a new user named **myNewUser** using the VM Access extension. ```azurecli-interactive
az vm user update \
```azurecli-interactive az policy assignment delete --name 'not-allowed-vmextension-linux' --resource-group myResourceGroup ```+ ## Remove the policy ```azurecli-interactive
virtual-machines Extensions Rmpolicy Howto Ps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/extensions-rmpolicy-howto-ps.md
Previously updated : 03/20/2023 Last updated : 04/11/2023 # Use Azure Policy to restrict extensions installation on Windows VMs
-If you want to prevent the use or installation of certain extensions on your Windows VMs, you can create an Azure Policy definition using PowerShell to restrict extensions for VMs within a resource group.
+If you want to prevent the use or installation of certain extensions on your Windows VMs, you can create an Azure Policy definition using PowerShell to restrict extensions for VMs within a resource group.
This tutorial uses Azure PowerShell within the Cloud Shell, which is constantly updated to the latest version.
-
- ## Create a rules file In order to restrict what extensions can be installed, you need to have a [rule](../../governance/policy/concepts/definition-structure.md#policy-rule) to provide the logic to identify the extension. This example shows you how to deny extensions published by 'Microsoft. Compute' by creating a rules file in Azure Cloud Shell, but if you're working in PowerShell locally, you can also create a local file and replace the path ($home/clouddrive) with the path to the local file on your machine.
-In a [Cloud Shell](https://shell.azure.com/powershell), type:
-
-```azurepowershell-interactive
-nano $home/clouddrive/rules.json
-```
+1. In a [Cloud Shell](https://shell.azure.com/powershell), create the file `$home/clouddrive/rules.json` using any text editor.
-Copy and paste the following .json into the file.
+2. Copy and paste the following .json contents into the file and save it:
```json {
Copy and paste the following .json into the file.
} ```
-When you're done, hit the **Ctrl + O** and then **Enter** to save the file. Hit **Ctrl + X** to close the file and exit.
- ## Create a parameters file
-You also need a [parameters](../../governance/policy/concepts/definition-structure.md#parameters) file that creates a structure for you to use for passing in a list of the extensions to block.
+You also need a [parameters](../../governance/policy/concepts/definition-structure.md#parameters) file that creates a structure for you to use for passing in a list of the extensions to block.
This example shows you how to create a parameters file for VMs in Cloud Shell, but if you're working in PowerShell locally, you can also create a local file and replace the path ($home/clouddrive) with the path to the local file on your machine.
-In [Cloud Shell](https://shell.azure.com/powershell), type:
-
-```azurepowershell-interactive
-nano $home/clouddrive/parameters.json
-```
+1. In [Cloud Shell](https://shell.azure.com/powershell), create the file `$home/clouddrive/parameters.json` using any text editor.
-Copy and paste the following .json into the file.
+2. Copy and paste the following .json contents into the file and save it:
```json {
Copy and paste the following .json into the file.
} ```
-When you're done, hit the **Ctrl + O** and then **Enter** to save the file. Hit **Ctrl + X** to close the file and exit.
- ## Create the policy A policy definition is an object used to store the configuration that you would like to use. The policy definition uses the rules and parameters files to define the policy. Create a policy definition using the [New-AzPolicyDefinition](/powershell/module/az.resources/new-azpolicydefinition) cmdlet. -
- The policy rules and parameters are the files you created and stored as .json files in your cloud shell. Replace the example `-Policy` and `-Parameter` file paths as needed.
--
+The policy rules and parameters are the files you created and stored as .json files in your cloud shell. Replace the example `-Policy` and `-Parameter` file paths as needed.
```azurepowershell-interactive $definition = New-AzPolicyDefinition `
$definition = New-AzPolicyDefinition `
-Parameter 'C:\Users\ContainerAdministrator\clouddrive\parameters.json' ``` --- ## Assign the policy
-This example assigns the policy to a resource group using [New-AzPolicyAssignment](/powershell/module/az.resources/new-azpolicyassignment). Any VM created in the **myResourceGroup** resource group won't be able to install the VM Access Agent or Custom Script extensions.
+This example assigns the policy to a resource group using [New-AzPolicyAssignment](/powershell/module/az.resources/new-azpolicyassignment). Any VM created in the **myResourceGroup** resource group won't be able to install the VM Access Agent or Custom Script extensions.
Use the [Get-AzSubscription | Format-Table](/powershell/module/az.accounts/get-azsubscription) cmdlet to get your subscription ID to use in place of the one in the example.
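A minimal sketch of the assignment itself, assuming the parameters file defines an array parameter named `notAllowedExtensions` and reusing the `$definition` object from the previous step, might look like this (the subscription ID and extension names are placeholders):

```powershell
# Placeholder scope; substitute your subscription ID and resource group.
$scope = "/subscriptions/<subscription-id>/resourceGroups/myResourceGroup"

New-AzPolicyAssignment `
    -Name "not-allowed-vmextension-windows" `
    -Scope $scope `
    -PolicyDefinition $definition `
    -PolicyParameterObject @{ notAllowedExtensions = @("VMAccessAgent", "CustomScriptExtension") }
```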
Remove-AzPolicyAssignment -Name not-allowed-vmextension-windows -Scope $scope
```azurepowershell-interactive Remove-AzPolicyDefinition -Name not-allowed-vmextension-windows ```
-
+ ## Next steps For more information, see [Azure Policy](../../governance/policy/overview.md).
virtual-machines Features Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/features-windows.md
The extension-handling code is responsible for the following tasks:
- Communicate with the Azure fabric. - Handle the VM extension operations, such as installations, reporting status, updating the individual extensions, and removing extensions. Updates contain security fixes, bug fixes, and enhancements to the extension-handling code.
-To check what version you're running, see [Detect the Azure VM Agent](agent-windows.md#detect-the-vm-agent).
+To check what version you're running, see [Detect the Azure VM Agent](agent-windows.md#detect-the-azure-windows-vm-agent).
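As a quick check from outside the guest, a sketch that reads the agent version from the VM's instance view (the names are placeholders, and the property path is the one the `Az.Compute` instance view commonly exposes):

```powershell
# -Status returns the instance view, which includes the guest agent details.
$status = Get-AzVM -ResourceGroupName "myResourceGroup" -Name "myVM" -Status
$status.VMAgent.VmAgentVersion
```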
#### Extension updates
virtual-machines Hpccompute Gpu Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/hpccompute-gpu-windows.md
Title: NVIDIA GPU Driver Extension - Azure Windows VMs
-description: Azure extension for installing NVIDIA GPU drivers on N-series compute VMs running Windows.
+description: Learn how to install the NVIDIA GPU Driver Extension on N-series virtual machines running Windows from the Azure portal or by using an ARM template.
vm-windows Previously updated : 10/14/2021 Last updated : 04/06/2023 + # NVIDIA GPU Driver Extension for Windows
-This extension installs NVIDIA GPU drivers on Windows N-series virtual machines (VMs). Depending on the VM family, the extension installs CUDA or GRID drivers. When you install NVIDIA drivers by using this extension, you're accepting and agreeing to the terms of the [NVIDIA End-User License Agreement](https://www.nvidia.com/en-us/data-center/products/nvidia-ai-enterprise/eula/). During the installation process, the VM might reboot to complete the driver setup.
+The NVIDIA GPU Driver Extension for Windows installs NVIDIA GPU drivers on Windows N-series virtual machines (VMs). Depending on the VM family, the extension installs CUDA or GRID drivers. When you install NVIDIA drivers by using this extension, you accept and agree to the terms of the [NVIDIA End-User License Agreement](https://www.nvidia.com/en-us/data-center/products/nvidia-ai-enterprise/eula/). During the installation process, the VM might reboot to complete the driver setup.
+
+The instructions for manual installation of the drivers and the list of currently supported versions are available for review. For more information, see [Install NVIDIA GPU drivers on N-series VMs running Windows](/azure/virtual-machines/windows/n-series-driver-setup).
-Instructions on manual installation of the drivers and the current supported versions are available. For more information, see [Azure N-series NVIDIA GPU driver setup for Windows](../windows/n-series-driver-setup.md).
-An extension is also available to install NVIDIA GPU drivers on [Linux N-series VMs](hpccompute-gpu-linux.md).
+The NVIDIA GPU Driver Extension can also be deployed on Linux N-series VMs. For more information, see [NVIDIA GPU Driver Extension for Linux](hpccompute-gpu-linux.md).
## Prerequisites
-### Operating system
+Confirm that your virtual machine satisfies the prerequisites for using the NVIDIA GPU Driver Extension.
-This extension supports the following OSs:
+### Operating system support
+
+The NVIDIA GPU Driver Extension supports the following Windows versions:
| Distribution | Version |
-|||
+| | |
+| Windows 11 | Core |
| Windows 10 | Core |
+| Windows Server 2022 | Core |
| Windows Server 2019 | Core | | Windows Server 2016 | Core | | Windows Server 2012 R2 | Core |
-### Internet connectivity
+### Internet connection required
-The Microsoft Azure Extension for NVIDIA GPU Drivers requires that the target VM is connected to the internet and has access.
+The NVIDIA GPU Driver Extension requires that the target VM is connected to the internet and has outbound internet access.
-## Extension schema
+## Review the extension schema
-The following JSON shows the schema for the extension:
+The following JSON snippet shows the schema for the extension:
```json {
The following JSON shows the schema for the extension:
### Properties
+The JSON schema includes values for the following parameters.
+ | Name | Value/Example | Data type |
-| - | - | - |
-| apiVersion | 2015-06-15 | date |
-| publisher | Microsoft.HpcCompute | string |
-| type | NvidiaGpuDriverWindows | string |
-| typeHandlerVersion | 1.4 | int |
+| | | |
+| `apiVersion` | 2015-06-15 | date |
+| `publisher` | Microsoft.HpcCompute | string |
+| `type` | NvidiaGpuDriverWindows | string |
+| `typeHandlerVersion` | 1.4 | int |
-## Deployment
+## Deploy the extension
+
+Azure VM extensions can be managed by using the Azure CLI, PowerShell, Azure Resource Manager (ARM) templates, and the Azure portal.
+
+> [!NOTE]
+> Some of the following examples use `<placeholder>` parameter values in the commands. Before you run each command, make sure to replace any placeholder values with specific values for your configuration.
### Azure portal
-You can deploy Azure NVIDIA VM extensions in the Azure portal.
+To install the NVIDIA GPU Driver Extension in the Azure portal, follow these steps:
-1. In a browser, go to the [Azure portal](https://portal.azure.com).
+1. In the [Azure portal](https://portal.azure.com), go to the virtual machine on which you want to install the extension.
-1. Go to the virtual machine on which you want to install the driver.
+1. Under **Settings**, select **Extensions + Applications**.
-1. On the left menu, select **Extensions**.
+ :::image type="content" source="./media/nvidia-ext-portal/extensions-menu.png" alt-text="Screenshot that shows how to select Extensions + Applications for a virtual machine in the Azure portal." border="false":::
- :::image type="content" source="./media/nvidia-ext-portal/extensions-menu.png" alt-text="Screenshot that shows selecting Extensions in the Azure portal menu.":::
+1. Under **Extensions**, select **+ Add**.
-1. Select **Add**.
+ :::image type="content" source="./media/nvidia-ext-portal/add-extension.png" alt-text="Screenshot that shows how to add an extension for a virtual machine in the Azure portal." border="false":::
- :::image type="content" source="./media/nvidia-ext-portal/add-extension.png" alt-text="Screenshot that shows adding a V M extension for the selected V M.":::
+1. Locate and select **NVIDIA GPU Driver Extension**, then select **Next**.
-1. Scroll to find and select **NVIDIA GPU Driver Extension**, and then select **Next**.
+ :::image type="content" source="./media/nvidia-ext-portal/select-nvidia-extension.png" alt-text="Screenshot that shows how to locate and select the NVIDIA GPU Driver Extension for a virtual machine in the Azure portal." border="false":::
- :::image type="content" source="./media/nvidia-ext-portal/select-nvidia-extension.png" alt-text="Screenshot that shows selecting NVIDIA G P U Driver Extension.":::
+1. Select **Review + create**. Confirm the deployment action, and select **Create**.
-1. Select **Review + create**, and select **Create**. Wait a few minutes for the driver to deploy.
+ Wait a few minutes for the extension to deploy.
- :::image type="content" source="./media/nvidia-ext-portal/create-nvidia-extension.png" alt-text="Screenshot that shows selecting the Review + create button.":::
+ :::image type="content" source="./media/nvidia-ext-portal/create-nvidia-extension.png" alt-text="Screenshot that shows how to create the NVIDIA GPU Driver Extension on the selected virtual machine in the Azure portal." border="false":::
-1. Verify that the extension was added to the list of installed extensions.
+1. Confirm the extension is listed as an installed extension for the virtual machine.
- :::image type="content" source="./media/nvidia-ext-portal/verify-extension.png" alt-text="Screenshot that shows the new extension in the list of extensions for the V M.":::
+ :::image type="content" source="./media/nvidia-ext-portal/verify-extension.png" alt-text="Screenshot that shows the NVIDIA GPU Driver Extension in the list of extensions for the virtual machine in the Azure portal." border="false":::
-### Azure Resource Manager template
+### ARM template
-You can use Azure Resource Manager templates to deploy Azure VM extensions. Templates are ideal when you deploy one or more virtual machines that require post-deployment configuration.
+ARM templates are ideal when you deploy one or more virtual machines that require post-deployment configuration.
-The JSON configuration for a virtual machine extension can be nested inside the virtual machine resource or placed at the root or top level of a Resource Manager JSON template. The placement of the JSON configuration affects the value of the resource name and type. For more information, see [Set name and type for child resources](../../azure-resource-manager/templates/child-resource-name-type.md).
+The JSON configuration for a virtual machine extension can be nested inside the virtual machine resource or placed at the root or top level of a JSON ARM template. The placement of the JSON configuration affects the value of the resource `name` and `type`. For more information, see [Set name and type for child resources](/azure/azure-resource-manager/templates/child-resource-name-type).
The following example assumes the extension is nested inside the virtual machine resource. When the extension resource is nested, the JSON is placed in the `"resources": []` object of the virtual machine. ```json {
- "name": "myExtensionName",
+ "name": "<myExtensionName>",
"type": "extensions",
- "location": "[resourceGroup().location]",
+ "location": "[<resourceGroup().location>]",
"apiVersion": "2015-06-15", "dependsOn": [
- "[concat('Microsoft.Compute/virtualMachines/', myVM)]"
+ "[concat('Microsoft.Compute/virtualMachines/', <myVM>)]"
], "properties": { "publisher": "Microsoft.HpcCompute",
The following example assumes the extension is nested inside the virtual machine
### PowerShell
+Use the following PowerShell command to deploy the NVIDIA GPU Driver Extension to a virtual machine.
+ ```powershell Set-AzVMExtension
- -ResourceGroupName "myResourceGroup" `
- -VMName "myVM" `
- -Location "southcentralus" `
+ -ResourceGroupName "<myResourceGroup>" `
+ -VMName "<myVM>" `
+ -Location "<location>" `
-Publisher "Microsoft.HpcCompute" ` -ExtensionName "NvidiaGpuDriverWindows" ` -ExtensionType "NvidiaGpuDriverWindows" `
Set-AzVMExtension
### Azure CLI
+Run the following command in the Azure CLI to deploy the NVIDIA GPU Driver Extension to a virtual machine.
+ ```azurecli az vm extension set \
- --resource-group myResourceGroup \
- --vm-name myVM \
+ --resource-group <myResourceGroup> \
+ --vm-name <myVM> \
--name NvidiaGpuDriverWindows \ --publisher Microsoft.HpcCompute \ --version 1.4 \
az vm extension set \
}' ```
-## Troubleshoot and support
+## <a name="troubleshoot-and-support"></a> Troubleshoot issues
+
+Here are some suggestions for how to troubleshoot deployment issues.
+
+### Check extension status
-### Troubleshoot
+Check the status of your extension deployment in the Azure portal, or by using PowerShell or the Azure CLI.
-You can retrieve data about the state of extension deployments from the Azure portal and by using Azure PowerShell and the Azure CLI. To see the deployment state of extensions for a given VM, run the following command:
+To see the deployment state of extensions for a given VM, run the following commands:
```powershell
-Get-AzVMExtension -ResourceGroupName myResourceGroup -VMName myVM -Name myExtensionName
+Get-AzVMExtension -ResourceGroupName <myResourceGroup> -VMName <myVM> -Name <myExtensionName>
``` ```azurecli
-az vm extension list --resource-group myResourceGroup --vm-name myVM -o table
+az vm extension list --resource-group <myResourceGroup> --vm-name <myVM> -o table
```
+### Review output logs
-Extension execution output is logged to the following directory:
+View output logs for the NVIDIA GPU Driver Extension deployment under
+`C:\WindowsAzure\Logs\Plugins\Microsoft.HpcCompute.NvidiaGpuDriverWindows\`.
-```cmd
-C:\WindowsAzure\Logs\Plugins\Microsoft.HpcCompute.NvidiaGpuDriverWindows\
-```
+### Respond to error codes
-### Error codes
+The following table lists common error codes for deployment and potential follow-up actions.
-| Error Code | Meaning | Possible action |
+| Error | Description | Action |
| :: | | |
-| 0 | Operation successful. |
-| 1 | Operation successful. Reboot required. |
-| 100 | Operation not supported or couldn't be completed. | Possible causes are that the PowerShell version isn't supported, the VM size isn't an N-series VM, or a failure occurred in downloading data. Check the log files to determine the cause of the error. |
+| 0 | Operation successful. | No required action. |
+| 1 | Operation successful. | Reboot. |
+| 100 | Operation not supported or couldn't be completed. | Check log files to determine cause of error, such as: <br>- PowerShell version isn't supported. <br> - VM size isn't an N-series VM. <br> - Failure during data download. |
| 240, 840 | Operation timeout. | Retry operation. |
-| -1 | Exception occurred. | Check the log files to determine the cause of the exception. |
-| -5x | Operation interrupted due to pending reboot. | Reboot VM. Installation continues after the reboot. Uninstall should be invoked manually. |
+| -1 | Exception occurred. | Check log files to determine cause of exception. |
+| -5x | Operation interrupted due to pending reboot. | Reboot the VM. Installation continues after reboot. <br> Uninstall should be invoked manually. |
+
+### Get support
+
+Here are some other options to help you resolve deployment issues:
+
+- For assistance, contact the Azure experts on the [Q&A and Stack Overflow forums](https://azure.microsoft.com/support/community/).
-### Support
+- If you don't find an answer on the site, you can post a question for input from Microsoft or other members of the community.
-If you need more help at any point in this article, contact the Azure experts on the [MSDN Azure and Stack Overflow forums](https://azure.microsoft.com/support/community/). Alternatively, you can file an Azure support incident. Go to [Azure support](https://azure.microsoft.com/support/options/) and select **Get support**. For information about using Azure support, read the [Azure support FAQ](https://azure.microsoft.com/support/faq/).
+- You can also [Contact Microsoft Support](https://support.microsoft.com/contactus/). For information about using Azure support, read the [Azure support FAQ](https://azure.microsoft.com/support/legal/faq/).
## Next steps - For more information about extensions, see [Virtual machine extensions and features for Windows](features-windows.md).-- For more information about N-series VMs, see [GPU optimized virtual machine sizes](../sizes-gpu.md).
+- For more information about N-series VMs, see [GPU optimized virtual machine sizes](/azure/virtual-machines/sizes-gpu).
virtual-machines Key Vault Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/key-vault-windows.md
Title: Azure Key Vault VM Extension for Windows
-description: Deploy an agent performing automatic refresh of Key Vault secrets on virtual machines using a virtual machine extension.
+ Title: Azure Key Vault VM extension for Windows
+description: Learn how to deploy an agent for automatic refresh of Azure Key Vault secrets on virtual machines with a virtual machine extension.
tags: keyvault
Previously updated : 12/02/2019 Last updated : 04/11/2023
-# Key Vault virtual machine extension for Windows
-The Key Vault VM extension provides automatic refresh of certificates stored in an Azure key vault. Specifically, the extension monitors a list of observed certificates stored in key vaults, and, upon detecting a change, retrieves, and installs the corresponding certificates. This document details the supported platforms, configurations, and deployment options for the Key Vault VM extension for Windows.
+# Azure Key Vault virtual machine extension for Windows
-### Operating system
+The Azure Key Vault virtual machine (VM) extension provides automatic refresh of certificates stored in an Azure key vault. The extension monitors a list of observed certificates stored in key vaults. When it detects a change, the extension retrieves and installs the corresponding certificates. This article describes the supported platforms, configurations, and deployment options for the Key Vault VM extension for Windows.
-The Key Vault VM extension supports below versions of Windows:
+## Operating systems
+
+The Key Vault VM extension supports the following versions of Windows:
- Windows Server 2022 - Windows Server 2019 - Windows Server 2016 - Windows Server 2012
-The Key Vault VM extension is also supported on custom local VM that is uploaded and converted into a specialized image for use in Azure using Windows Server 2019 core install.
-
-> [!NOTE]
-> The Key Vault VM extension downloads all the certificates in the windows certificate store or to the location provided by "certificateStoreLocation" property in the VM extension settings.
+The Key Vault VM extension is also supported on a custom local VM. The VM should be uploaded and converted into a specialized image for use in Azure by using Windows Server 2019 core install.
+### Supported certificates
-### Supported certificate content types
+The Key Vault VM extension supports the following certificate content types:
- PKCS #12 - PEM
+> [!NOTE]
+> The Key Vault VM extension downloads all certificates to the Windows certificate store or to the location specified in the `certificateStoreLocation` property in the VM extension settings.
## Updates in Version 3.0
-- Ability to add ACL permission to downloaded certificates
-- Certificate Store configuration per certificate
-- Exportable private keys
+Version 3.0 of the Key Vault VM extension for Windows adds support for the following features:
+
+- Add ACL permissions to downloaded certificates
+- Enable Certificate Store configuration per certificate
+- Export private keys
## Prerequisites
- - Key Vault instance with certificate. See [Create a Key Vault](../../key-vault/general/quick-create-portal.md)
- - VM must have assigned [managed identity](../../active-directory/managed-identities-azure-resources/overview.md)
- - 'Key Vault Secrets User' role assigned on Key Vault scope for VM/VMSS managed identity to retrieve a secret's portion of certificate. See [How to Authenticate to Key Vault](../../key-vault/general/authentication.md) and [Use and Azure RBAC for managing access to keys,secrets, and certificates](../../key-vault/general/rbac-guide.md).
- - Virtual Machine Scale Sets should have the following identity setting:
-
- ```
- "identity": {
- "type": "UserAssigned",
- "userAssignedIdentities": {
- "[parameters('userAssignedIdentityResourceId')]": {}
- }
- }
- ```
+Review the following prerequisites for using the Key Vault VM extension for Windows:
+
+- An Azure Key Vault instance with a certificate. For more information, see [Create a key vault by using the Azure portal](/azure/key-vault/general/quick-create-portal).
+
+- A VM with an assigned [managed identity](/azure/active-directory/managed-identities-azure-resources/overview).
+
+- The **Key Vault Secrets User** role must be assigned at the Key Vault scope level for VMs and Azure Virtual Machine Scale Sets managed identity. This role retrieves a secret's portion of a certificate. For more information, see the following articles:
+ - [Authentication in Azure Key Vault](/azure/key-vault/general/authentication)
+ - [Use Azure RBAC secret, key, and certificate permissions with Azure Key Vault](/azure/key-vault/general/rbac-guide#using-azure-rbac-secret-key-and-certificate-permissions-with-key-vault)
+ - [Key Vault scope role assignment](/azure/key-vault/general/rbac-guide?tabs=azure-cli#key-vault-scope-role-assignment)
+
+- Virtual Machine Scale Sets should have the following `identity` configuration:
+
+ ```json
+ "identity": {
+ "type": "UserAssigned",
+ "userAssignedIdentities": {
+ "[parameters('userAssignedIdentityResourceId')]": {}
+ }
+ }
+ ```
- - AKV extension should have this setting:
+- The Key Vault VM extension should have the following `authenticationSettings` configuration:
- ```
- "authenticationSettings": {
- "msiEndpoint": "[parameters('userAssignedIdentityEndpoint')]",
- "msiClientId": "[reference(parameters('userAssignedIdentityResourceId'), variables('msiApiVersion')).clientId]"
- }
- ```
+ ```json
+ "authenticationSettings": {
+ "msiEndpoint": "[parameters('userAssignedIdentityEndpoint')]",
+ "msiClientId": "[reference(parameters('userAssignedIdentityResourceId'), variables('msiApiVersion')).clientId]"
+ }
+ ```
> [!NOTE]
-> The old Access Policy permission model is also supported for providing access to VM/VMSS. It requires policy with 'get' and 'list' permissions on secrets, see [Assign a Key Vault access policy](../../key-vault/general/assign-access-policy.md).
+> The old access policy permission model can also be used to provide access to VMs and Virtual Machine Scale Sets. This method requires policy with **get** and **list** permissions on secrets. For more information, see [Assign a Key Vault access policy](/azure/key-vault/general/assign-access-policy).
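To set up the role assignment described in the prerequisites, a minimal sketch (with placeholder vault, resource group, and VM names) that grants a VM's system-assigned identity the **Key Vault Secrets User** role at the vault scope might look like this:

```powershell
# Placeholder names; replace with your own values.
$vm    = Get-AzVM -ResourceGroupName "myResourceGroup" -Name "myVM"
$vault = Get-AzKeyVault -ResourceGroupName "myResourceGroup" -VaultName "myVaultName"

New-AzRoleAssignment `
    -ObjectId $vm.Identity.PrincipalId `
    -RoleDefinitionName "Key Vault Secrets User" `
    -Scope $vault.ResourceId
```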
+ ## Extension schema
-The following JSON shows the schema for the Key Vault VM extension. The extension doesn't require protected settings - all its settings are considered public information.
+The following JSON shows the schema for the Key Vault VM extension. Before you consider the schema implementation options, review the following important notes.
+
+- The extension doesn't require protected settings. All settings are considered public information.
+
+- Observed certificate URLs should be of the form `https://myVaultName.vault.azure.net/secrets/myCertName`.
+
+  This form is preferred because the `/secrets` path returns the full certificate, including the private key, but the `/certificates` path doesn't (a retrieval sketch follows these notes). For more information about certificates, see [Azure Key Vault keys, secrets and certificates overview](/azure/key-vault/general/about-keys-secrets-certificates).
+
+- The `authenticationSettings` property is **required** for VMs with any **user assigned identities**.
+
+  This property specifies the identity to use for authentication to Key Vault. Setting it is also recommended when the VM has a system-assigned identity, so the extension doesn't encounter ambiguity when the VM has multiple identities.
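
As a quick way to find the `/secrets` form of a certificate's URL, the following Azure CLI command is a minimal sketch (the vault and certificate names are hypothetical); the `sid` field of the returned certificate bundle should contain the matching secret identifier:

```azurecli
# Minimal sketch (hypothetical names): print the secret identifier (the /secrets URL) for a certificate
az keyvault certificate show --vault-name myVaultName --name myCertName --query sid --output tsv
```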
### [Version-3.0](#tab/version3) ```json
- {
- "type": "Microsoft.Compute/virtualMachines/extensions",
- "name": "KVVMExtensionForWindows",
- "apiVersion": "2022-08-01",
- "location": "<location>",
- "dependsOn": [
- "[concat('Microsoft.Compute/virtualMachines/', <vmName>)]"
- ],
- "properties": {
+{
+ "type": "Microsoft.Compute/virtualMachines/extensions",
+ "name": "KVVMExtensionForWindows",
+ "apiVersion": "2022-08-01",
+ "location": "<location>",
+ "dependsOn": [
+ "[concat('Microsoft.Compute/virtualMachines/', <vmName>)]"
+ ],
+ "properties": {
"publisher": "Microsoft.Azure.KeyVault", "type": "KeyVaultForWindows", "typeHandlerVersion": "3.0", "autoUpgradeMinorVersion": true, "settings": {
- "secretsManagementSettings": {
- "pollingIntervalInS": <string specifying polling interval in seconds, e.g: "3600">,
- "linkOnRenewal": <Only Windows. This feature ensures s-channel binding when certificate renews, without necessitating a re-deployment. e.g.: true>,
- "requireInitialSync": <initial synchronization of certificates e..g: true>,
- "observedCertificates": <array of KeyVault URIs representing monitored certificates including certificate store location and ACL permission to certificate private key, e.g.:
- [
+ "secretsManagementSettings": {
+ "pollingIntervalInS": <A string that specifies the polling interval in seconds. Example: 3600>,
+ "linkOnRenewal": <Windows only. Ensures s-channel binding when the certificate renews without necessitating redeployment. Example: true>,
+ "requireInitialSync": <Initial synchronization of certificates. Example: true>,
+ "observedCertificates": <An array of KeyVault URIs that represent monitored certificates, including certificate store location and ACL permission to certificate private key. Example:
+ [
{
- "url": <Key Vault URI to secret portion of certificate e.g.: "https://myvault.vault.azure.net/secrets/mycertificate1">,
- "certificateStoreName": <certificate store name, e.g.:"MY">,
- "certificateStoreLocation": <certificate store location, currently it works locally only e.g.:"LocalMachine">,
- "accounts": <optional array of preferred accounts with read access to certificate private keys, Administrators and SYSTEM gets Full Control by default e.g.: ["Network Service", "Local Service"]>
+ "url": <A Key Vault URI to the secret portion of the certificate. Example: "https://myvault.vault.azure.net/secrets/mycertificate1">,
+ "certificateStoreName": <The certificate store name. Example: "MY">,
+ "certificateStoreLocation": <The certificate store location, which currently works locally only. Example: "LocalMachine">,
+ "accounts": <Optional. An array of preferred accounts with read access to certificate private keys. Administrators and SYSTEM get Full Control by default. Example: ["Network Service", "Local Service"]>
}, {
- "url": <Key Vault URI to secret portion of certificate e.g.: "https://myvault.vault.azure.net/secrets/mycertificate2">,
- "certificateStoreName": <certificate store name, e.g.:"MY">,
- "certificateStoreLocation": <certificate store location, currently it works locally only e.g.:"CurrentUser">,
- "keyExportable": <optional property to set private key to be exportable e.g.: "false">
- "accounts": <optional array of preferred accounts with read access to certificate private keys, Administrators and SYSTEM gets Full Control by default e.g.: ["Local Service"]>
- },
-
- ]>
- },
- "authenticationSettings": {
- "msiEndpoint": <Required when msiClientId is provided. MSI endpoint e.g. for most Azure VMs: "http://169.254.169.254/metadata/identity/oauth2/token">,
- "msiClientId": <Required when VM has any user assigned identities. MSI identity e.g.: "c7373ae5-91c2-4165-8ab6-7381d6e75619".>
- }
- }
+ "url": <Example: "https://myvault.vault.azure.net/secrets/mycertificate2">,
+ "certificateStoreName": <Example: "MY">,
+ "certificateStoreLocation": <Example: "CurrentUser">,
+ "keyExportable": <Optional. Lets the private key be exportable. Example: "false">,
+ "accounts": <Example: ["Local Service"]>
+ }
+ ]>
+ },
+ "authenticationSettings": {
+ "msiEndpoint": <Required when the msiClientId property is used. Specifies the MSI endpoint. Example for most Azure VMs: "http://169.254.169.254/metadata/identity/oauth2/token">,
+ "msiClientId": <Required when the VM has any user assigned identities. Specifies the MSI identity. Example: "c7373ae5-91c2-4165-8ab6-7381d6e75619">
+ }
}
- }
+ }
+}
``` ### [Version-1.0](#tab/version1) ```json
- {
- "type": "Microsoft.Compute/virtualMachines/extensions",
- "name": "KVVMExtensionForWindows",
- "apiVersion": "2022-08-01",
- "location": "<location>",
- "dependsOn": [
- "[concat('Microsoft.Compute/virtualMachines/', <vmName>)]"
- ],
- "properties": {
+{
+ "type": "Microsoft.Compute/virtualMachines/extensions",
+ "name": "KVVMExtensionForWindows",
+ "apiVersion": "2022-08-01",
+ "location": "<location>",
+ "dependsOn": [
+ "[concat('Microsoft.Compute/virtualMachines/', <vmName>)]"
+ ],
+ "properties": {
"publisher": "Microsoft.Azure.KeyVault", "type": "KeyVaultForWindows", "typeHandlerVersion": "1.0", "autoUpgradeMinorVersion": true, "settings": {
- "secretsManagementSettings": {
- "pollingIntervalInS": <string specifying polling interval in seconds, e.g: "3600">,
- "certificateStoreName": <certificate store name, e.g.: "MY">,
- "linkOnRenewal": <Only Windows. This feature ensures s-channel binding when certificate renews, without necessitating a re-deployment. e.g.: false>,
- "certificateStoreLocation": <certificate store location, currently it works locally only e.g.: "LocalMachine">,
- "requireInitialSync": <initial synchronization of certificates e..g: true>,
- "observedCertificates": <list of KeyVault URIs representing monitored certificates, e.g.: "[https://myvault.vault.azure.net/secrets/mycertificate]">
- },
- "authenticationSettings": {
- "msiEndpoint": <Required when msiClientId is provided. MSI endpoint e.g. for most Azure VMs: "http://169.254.169.254/metadata/identity/oauth2/token">,
- "msiClientId": <Required when VM has any user assigned identities. MSI identity e.g.: "c7373ae5-91c2-4165-8ab6-7381d6e75619".>
- }
- }
+ "secretsManagementSettings": {
+ "pollingIntervalInS": <A string that specifies the polling interval in seconds. Example: 3600>,
+ "certificateStoreName": <The certificate store name. Example: "MY">,
+      "linkOnRenewal": <Windows only. Ensures s-channel binding when the certificate renews without necessitating redeployment. Example: true>,
+      "certificateStoreLocation": <The certificate store location, which currently works locally only. Example: "LocalMachine">,
+ "requireInitialSync": <Require an initial synchronization of the certificates. Example: true>,
+      "observedCertificates": <A string array of KeyVault URIs that represent the monitored certificates. Example: ["https://myvault.vault.azure.net/secrets/mycertificate"]>
+ },
+ "authenticationSettings": {
+ "msiEndpoint": <Required when the msiClientId property is used. Specifies the MSI endpoint. Example for most Azure VMs: "http://169.254.169.254/metadata/identity/oauth2/token">,
+ "msiClientId": <Required when the VM has any user assigned identities. Specifies the MSI identity. Example: "c7373ae5-91c2-4165-8ab6-7381d6e75619">
+ }
}
- }
+ }
+}
```-
-> [!NOTE]
-> Your observed certificates URLs should be of the form `https://myVaultName.vault.azure.net/secrets/myCertName`.
->
-> This is because the `/secrets` path returns the full certificate, including the private key, while the `/certificates` path does not. More information about certificates can be found here: [Key Vault Certificates](../../key-vault/general/about-keys-secrets-certificates.md)
-
-> [!IMPORTANT]
-> The 'authenticationSettings' property is **required** for VMs with any **user assigned identities** and recommended to use with a system-assigned identity, to avoid issues with VM extension with multiple identities.
-> It specifies identity to use for authentication to Key Vault.
+ ## Property values
+The JSON schema includes the following properties.
+ ### [Version-3.0](#tab/version3)
-| Name | Value / Example | Data Type |
-| - | - | - |
-| apiVersion | 2022-08-01 | date |
-| publisher | Microsoft.Azure.KeyVault | string |
-| type | KeyVaultForWindows | string |
-| typeHandlerVersion | 3.0 | int |
-| pollingIntervalInS | 3600 | string |
-| linkOnRenewal (optional) | true | boolean |
-| requireInitialSync (optional) | false | boolean |
-| observedCertificates | [{...}, {...}] | string array |
-| observedCertificates/url | "https://myvault.vault.azure.net/secrets/mycertificate" | string |
-| observedCertificates/certificateStoreName | MY | string |
-| observedCertificates/certificateStoreLocation | LocalMachine or CurrentUser (case sensitive) | string |
-| observedCertificates/keyExportable(optional) | false | boolean |
-| observedCertificates/accounts(optional) | ["Network Service", "Local Service"] | string array |
-| msiEndpoint | http://169.254.169.254/metadata/identity/oauth2/token | string |
-| msiClientId | c7373ae5-91c2-4165-8ab6-7381d6e75619 | string |
+| Name | Value/Example | Data type |
+| --- | --- | --- |
+| `apiVersion` | 2022-08-01 | date |
+| `publisher` | Microsoft.Azure.KeyVault | string |
+| `type` | KeyVaultForWindows | string |
+| `typeHandlerVersion` | 3.0 | int |
+| `pollingIntervalInS` | 3600 | string |
+| `linkOnRenewal` (optional) | true | boolean |
+| `requireInitialSync` (optional) | false | boolean |
+| `observedCertificates` | [{...}, {...}] | string array |
+| `observedCertificates/url` | "https://myvault.vault.azure.net/secrets/mycertificate" | string |
+| `observedCertificates/certificateStoreName` | MY | string |
+| `observedCertificates/certificateStoreLocation` | LocalMachine or CurrentUser (case sensitive) | string |
+| `observedCertificates/keyExportable` (optional) | false | boolean |
+| `observedCertificates/accounts` (optional) | ["Network Service", "Local Service"] | string array |
+| `msiEndpoint` | "http://169.254.169.254/metadata/identity/oauth2/token" | string |
+| `msiClientId` | c7373ae5-91c2-4165-8ab6-7381d6e75619 | string |
### [Version-1.0](#tab/version1) -
-| Name | Value / Example | Data Type |
-| - | - | - |
-| apiVersion | 2022-08-01 | date |
-| publisher | Microsoft.Azure.KeyVault | string |
-| type | KeyVaultForWindows | string |
-| typeHandlerVersion | 1.0 | int |
-| pollingIntervalInS | 3600 | string |
-| certificateStoreName | MY | string |
-| linkOnRenewal | true | boolean |
-| certificateStoreLocation | LocalMachine or CurrentUser (case sensitive) | string |
-| requireInitialSync | false | boolean |
-| observedCertificates | ["https://myvault.vault.azure.net/secrets/mycertificate", "https://myvault.vault.azure.net/secrets/mycertificate2"] | string array
-| msiEndpoint | http://169.254.169.254/metadata/identity/oauth2/token | string |
-| msiClientId | c7373ae5-91c2-4165-8ab6-7381d6e75619 | string |
+| Name | Value/Example | Data type |
+| --- | --- | --- |
+| `apiVersion` | 2022-08-01 | date |
+| `publisher` | Microsoft.Azure.KeyVault | string |
+| `type` | KeyVaultForWindows | string |
+| `typeHandlerVersion` | 1.0 | int |
+| `pollingIntervalInS` | 3600 | string |
+| `certificateStoreName` | MY | string |
+| `linkOnRenewal` | true | boolean |
+| `certificateStoreLocation` | LocalMachine or CurrentUser (case sensitive) | string |
+| `requireInitialSync` | false | boolean |
+| `observedCertificates` | ["https://myvault.vault.azure.net/secrets/mycertificate", <br> "https://myvault.vault.azure.net/secrets/mycertificate2"] | string array |
+| `msiEndpoint` | "http://169.254.169.254/metadata/identity/oauth2/token" | string |
+| `msiClientId` | c7373ae5-91c2-4165-8ab6-7381d6e75619 | string |
## Template deployment
-Azure VM extensions can be deployed with Azure Resource Manager templates. Templates are ideal when deploying one or more virtual machines that require post deployment refresh of certificates. The extension can be deployed to individual VMs or Virtual Machine Scale Sets. The schema and configuration are common to both template types.
+Azure VM extensions can be deployed with Azure Resource Manager (ARM) templates. Templates are ideal when you deploy one or more virtual machines that require a post-deployment refresh of certificates. The extension can be deployed to individual VMs or Virtual Machine Scale Sets instances. The schema and configuration are common to both template types.
+
+The JSON configuration for a key vault extension is nested inside the VM or Virtual Machine Scale Sets template. For a VM resource extension, the configuration is nested under the `"resources": []` virtual machine object. For a Virtual Machine Scale Sets instance extension, the configuration is nested under the `"virtualMachineProfile":"extensionProfile":{"extensions" :[]` object.
-The JSON configuration for a key vault extension is nested inside the virtual machine or Virtual Machine Scale Set template. For Virtual Machine resource extension is nested under `"resources": []` virtual machine object and Virtual Machine Scale Set under `"virtualMachineProfile":"extensionProfile":{"extensions" :[]` object.
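
As a rough sketch of that nesting (resource names are hypothetical and most properties are omitted), a Virtual Machine Scale Sets resource carries the extension like this:

```json
{
  "type": "Microsoft.Compute/virtualMachineScaleSets",
  "name": "myScaleSet",
  "properties": {
    "virtualMachineProfile": {
      "extensionProfile": {
        "extensions": [
          {
            "name": "KeyVaultForWindows",
            "properties": {
              "publisher": "Microsoft.Azure.KeyVault",
              "type": "KeyVaultForWindows",
              "typeHandlerVersion": "3.0",
              "settings": { "secretsManagementSettings": {}, "authenticationSettings": {} }
            }
          }
        ]
      }
    }
  }
}
```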
+The following JSON snippets provide example settings for an ARM template deployment of the Key Vault VM extension.
### [Version-3.0](#tab/version3) ```json
- {
- "type": "Microsoft.Compute/virtualMachines/extensions",
- "name": "KeyVaultForWindows",
- "apiVersion": "2022-08-01",
- "location": "<location>",
- "dependsOn": [
- "[concat('Microsoft.Compute/virtualMachines/', <vmName>)]"
- ],
- "properties": {
+{
+ "type": "Microsoft.Compute/virtualMachines/extensions",
+ "name": "KeyVaultForWindows",
+ "apiVersion": "2022-08-01",
+ "location": "<location>",
+ "dependsOn": [
+ "[concat('Microsoft.Compute/virtualMachines/', <vmName>)]"
+ ],
+ "properties": {
"publisher": "Microsoft.Azure.KeyVault", "type": "KeyVaultForWindows", "typeHandlerVersion": "3.0", "autoUpgradeMinorVersion": true, "settings": {
- "secretsManagementSettings": {
- "pollingIntervalInS": <string specifying polling interval in seconds, e.g: "3600">,
- "linkOnRenewal": <Only Windows. This feature ensures s-channel binding when certificate renews, without necessitating a re-deployment. e.g.: true>,
- "observedCertificates": <list of KeyVault URIs representing monitored certificates, e.g.:
- [
- {
- "url": "https://<examplekv>.vault.azure.net/secrets/aaaa",
- "certificateStoreName": "MY",
- "certificateStoreLocation": "LocalMachine",
- "accounts": [
- "Network Service"
- ]
- },
- {
- "url": "https://examplekv>.vault.azure.net/secrets/bbbb",
- "certificateStoreName": "MY",
- "certificateStoreLocation": "LocalMachine",
- "keyExportable": true,
- "accounts": [
- "Network Service",
- "Local Service"
- ]
- },
- {
- "url": "https://<examplekv>.vault.azure.net/secrets/cccc",
- "certificateStoreName": "TrustedPeople",
- "certificateStoreLocation": "LocalMachine"
- }
- ]>
- },
- "authenticationSettings": {
- "msiEndpoint": <Required when msiClientId is provided. MSI endpoint e.g. for most Azure VMs: "http://169.254.169.254/metadata/identity/oauth2/token">,
- "msiClientId": <Required when VM has any user assigned identities. MSI identity e.g.: "c7373ae5-91c2-4165-8ab6-7381d6e75619".>
- }
- }
+ "secretsManagementSettings": {
+ "pollingIntervalInS": <A string that specifies the polling interval in seconds. Example: 3600>,
+ "linkOnRenewal": <Windows only. Ensures s-channel binding when the certificate renews without necessitating redeployment. Example: true>,
+ "observedCertificates": <An array of KeyVault URIs that represent monitored certificates, including certificate store location and ACL permission to certificate private key. Example:
+ [
+ {
+ "url": <A Key Vault URI to the secret portion of the certificate. Example: "https://myvault.vault.azure.net/secrets/mycertificate1">,
+ "certificateStoreName": <The certificate store name. Example: "MY">,
+ "certificateStoreLocation": <The certificate store location, which currently works locally only. Example: "LocalMachine">,
+ "accounts": <Optional. An array of preferred accounts with read access to certificate private keys. Administrators and SYSTEM get Full Control by default. Example: ["Network Service", "Local Service"]>
+ },
+ {
+ "url": <Example: "https://myvault.vault.azure.net/secrets/mycertificate2">,
+ "certificateStoreName": <Example: "MY">,
+ "certificateStoreLocation": <Example: "CurrentUser">,
+ "keyExportable": <Optional. Lets the private key be exportable. Example: "false">,
+ "accounts": <Example: ["Local Service"]>
+ },
+ {
+ "url": <Example: "https://myvault.vault.azure.net/secrets/mycertificate3">,
+ "certificateStoreName": <Example: "TrustedPeople">,
+ "certificateStoreLocation": <Example: "LocalMachine">
+ }
+ ]>
+ },
+ "authenticationSettings": {
+ "msiEndpoint": <Required when the msiClientId property is used. Specifies the MSI endpoint. Example for most Azure VMs: "http://169.254.169.254/metadata/identity/oauth2/token">,
+ "msiClientId": <Required when the VM has any user assigned identities. Specifies the MSI identity. Example: "c7373ae5-91c2-4165-8ab6-7381d6e75619">
+ }
}
- }
+ }
+}
``` ### [Version-1.0](#tab/version1) ```json
- {
- "type": "Microsoft.Compute/virtualMachines/extensions",
- "name": "KeyVaultForWindows",
- "apiVersion": "2022-08-01",
- "location": "<location>",
- "dependsOn": [
- "[concat('Microsoft.Compute/virtualMachines/', <vmName>)]"
- ],
- "properties": {
+{
+ "type": "Microsoft.Compute/virtualMachines/extensions",
+ "name": "KeyVaultForWindows",
+ "apiVersion": "2022-08-01",
+ "location": "<location>",
+ "dependsOn": [
+ "[concat('Microsoft.Compute/virtualMachines/', <vmName>)]"
+ ],
+ "properties": {
"publisher": "Microsoft.Azure.KeyVault", "type": "KeyVaultForWindows", "typeHandlerVersion": "1.0", "autoUpgradeMinorVersion": true, "settings": {
- "secretsManagementSettings": {
- "pollingIntervalInS": <string specifying polling interval in seconds, e.g: "3600">,
- "linkOnRenewal": <Only Windows. This feature ensures s-channel binding when certificate renews, without necessitating a re-deployment. e.g.: true>,
- "certificateStoreName": <certificate store name, e.g.: "MY">,
- "certificateStoreLocation": <certificate store location, currently it works locally only e.g.: "LocalMachine">,
- "observedCertificates": <list of KeyVault URIs representing monitored certificates, e.g.: ["https://myvault.vault.azure.net/secrets/mycertificate", "https://myvault.vault.azure.net/secrets/mycertificate2"]>
- },
- "authenticationSettings": {
- "msiEndpoint": <Required when msiClientId is provided. MSI endpoint e.g. for most Azure VMs: "http://169.254.169.254/metadata/identity/oauth2/token">,
- "msiClientId": <Required when VM has any user assigned identities. MSI identity e.g.: "c7373ae5-91c2-4165-8ab6-7381d6e75619".>
- }
- }
+ "secretsManagementSettings": {
+ "pollingIntervalInS": <A string that specifies the polling interval in seconds. Example: 3600>,
+ "linkOnRenewal": <Windows only. Ensures s-channel binding when the certificate renews without necessitating redeployment. Example: true>,
+ "certificateStoreName": <The certificate store name. Example: "MY">,
+ "certificateStoreLocation": <The certificate store location, which currently works locally only. Example: "LocalMachine">,
+ "observedCertificates": <A string array of KeyVault URIs that represent monitored certificates. Example: ["https://myvault.vault.azure.net/secrets/mycertificate", "https://myvault.vault.azure.net/secrets/mycertificate2"]>
+ },
+ "authenticationSettings": {
+ "msiEndpoint": <Required when the msiClientId property is used. Specifies the MSI endpoint. Example for most Azure VMs: "http://169.254.169.254/metadata/identity/oauth2/token">,
+ "msiClientId": <Required when the VM has any user assigned identities. Specifies the MSI identity. Example: "c7373ae5-91c2-4165-8ab6-7381d6e75619">
+ }
}
- }
+ }
+}
``` -
-### Extension Dependency Ordering
-The Key Vault VM extension supports extension ordering if configured. By default the extension reports that it has successfully started as soon as it has started polling. However, it can be configured to wait until it has successfully downloaded the complete list of certificates before reporting a successful start. If other extensions depend on having the full set of certificates installed before they start, then enabling this setting will allow those extensions to declare a dependency on the Key Vault extension. It will prevent extensions from starting until all certificates they depend on have been installed. The extension will retry the initial download indefinitely and remain in a `Transitioning` state.
+### Extension dependency ordering
-To enable waiting for certificate to be installed, set the following setting:
-```
+You can enable the Key Vault VM extension to support extension dependency ordering. By default, the Key Vault VM extension reports a successful start as soon as polling begins. However, you can configure the extension to report a successful start only after the extension downloads and installs all certificates.
+
+If you use other extensions that require installation of all certificates before they start, you can enable extension dependency ordering in the Key Vault VM extension. This feature allows other extensions to declare a dependency on the Key Vault VM extension.
+
+You can use this feature to prevent other extensions from starting until all dependent certificates are installed. When the feature is enabled, the Key Vault VM extension retries download and install of certificates indefinitely and remains in a **Transitioning** state, until all certificates are successfully installed. After all certificates are present, the Key Vault VM extension reports a successful start.
+
+To enable the extension dependency ordering feature in the Key Vault VM extension, set the `requireInitialSync` property in `secretsManagementSettings` to `true`:
+
+```json
"secretsManagementSettings": {
- "requireInitialSync": true,
- ...
+ "requireInitialSync": true,
+ ...
} ```
-Refer to [Sequence extension provisioning in Virtual Machine Scale Sets](../../virtual-machine-scale-sets/virtual-machine-scale-sets-extension-sequencing.md) on how to set-up dependencies between extensions.
+For more information on how to set up dependencies between extensions, see [Sequence extension provisioning in Virtual Machine Scale Sets](/azure/virtual-machine-scale-sets/virtual-machine-scale-sets-extension-sequencing).
-> [!Note]
-> Using this feature is not compatible with an ARM template that creates a system assigned identity and updates a Key Vault access policy with that identity. Doing so will result in a deadlock as the vault access policy cannot be updated until all extensions have started. You should instead use a *single user assigned MSI identity* and pre-ACL your vaults with that identity before deploying.
+> [!IMPORTANT]
+> The extension dependency ordering feature isn't compatible with an ARM template that creates a system-assigned identity and updates a Key Vault access policy with that identity. If you attempt to use the feature in this scenario, a deadlock occurs because the Key Vault access policy can't update until after all extensions start. Instead, use a single _user-assigned_ MSI identity and pre-ACL your key vaults with that identity before you deploy.
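
For example, pre-ACLing a vault under the legacy access policy model could look like the following Azure CLI sketch (the vault name and principal ID placeholder are hypothetical):

```azurecli
# Minimal sketch (hypothetical names): grant the user-assigned identity get/list access to secrets before deployment
az keyvault set-policy --name myVault --object-id <userAssignedIdentityPrincipalId> --secret-permissions get list
```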
## Azure PowerShell deployment
-### [Version-3.0](#tab/version3)
+The Azure Key Vault VM extension can be deployed with Azure PowerShell. Save Key Vault VM extension settings to a JSON file (settings.json).
-Save Key Vault VM extension settings to json file.
+The following JSON snippets provide example settings for deploying the Key Vault VM extension with PowerShell.
+
+### [Version-3.0](#tab/version3)
-Example settings (settings.json):
```json
- {
- "secretsManagementSettings": {
- "pollingIntervalInS": "3600",
- "linkOnRenewal": true,
- "observedCertificates": [
- {
- "url": "https://<examplekv>.vault.azure.net/secrets/certificate1",
- "certificateStoreName": "MY",
- "certificateStoreLocation": "LocalMachine",
- "accounts": [
- "Network Service"
- ]
- },
- {
- "url": "https://<examplekv>.vault.azure.net/secrets/certificate2",
- "certificateStoreName": "MY",
- "certificateStoreLocation": "LocalMachine",
- "keyExportable": true,
- "accounts": [
- "Network Service",
- "Local Service"
- ]
- }
- ]
- },
- "authenticationSettings": {
- "msiEndpoint": "http://169.254.169.254/metadata/identity/oauth2/token",
- "msiClientId": "c7373ae5-91c2-4165-8ab6-7381d6e75619"
- }
- }
+{
+ "secretsManagementSettings": {
+ "pollingIntervalInS": "3600",
+ "linkOnRenewal": true,
+ "observedCertificates":
+ [
+ {
+ "url": "https://<examplekv>.vault.azure.net/secrets/certificate1",
+ "certificateStoreName": "MY",
+ "certificateStoreLocation": "LocalMachine",
+ "accounts": [
+ "Network Service"
+ ]
+ },
+ {
+ "url": "https://<examplekv>.vault.azure.net/secrets/certificate2",
+ "certificateStoreName": "MY",
+ "certificateStoreLocation": "LocalMachine",
+ "keyExportable": true,
+ "accounts": [
+ "Network Service",
+ "Local Service"
+ ]
+ }
+ ]},
+ "authenticationSettings": {
+ "msiEndpoint": "http://169.254.169.254/metadata/identity/oauth2/token",
+ "msiClientId": "c7373ae5-91c2-4165-8ab6-7381d6e75619"
+ }
+}
```
+#### Deploy on a VM
+
+```powershell
+# Build settings
+$settings = (get-content -raw ".\settings.json")
+$extName = "KeyVaultForWindows"
+$extPublisher = "Microsoft.Azure.KeyVault"
+$extType = "KeyVaultForWindows"
+
+# Start the deployment
+Set-AzVmExtension -TypeHandlerVersion "3.0" -ResourceGroupName <ResourceGroupName> -Location <Location> -VMName <VMName> -Name $extName -Publisher $extPublisher -Type $extType -SettingString $settings
+```
-The Azure PowerShell can be used to deploy the Key Vault VM extension to an existing virtual machine or Virtual Machine Scale Set.
+#### Deploy on a Virtual Machine Scale Sets instance
-* Deploy the extension on a VM:
-
- ```powershell
- # Build settings
- $settings = (get-content -raw ".\settings.json")
- $extName = "KeyVaultForWindows"
- $extPublisher = "Microsoft.Azure.KeyVault"
- $extType = "KeyVaultForWindows"
-
- # Start the deployment
- Set-AzVmExtension -TypeHandlerVersion "3.0" -ResourceGroupName <ResourceGroupName> -Location <Location> -VMName <VMName> -Name $extName -Publisher $extPublisher -Type $extType -SettingString $settings
-
- ```
+```powershell
+# Build settings
+$settings = (get-content -raw ".\settings.json")
+$extName = "KeyVaultForWindows"
+$extPublisher = "Microsoft.Azure.KeyVault"
+$extType = "KeyVaultForWindows"
+
+# Add extension to Virtual Machine Scale Sets
+$vmss = Get-AzVmss -ResourceGroupName <ResourceGroupName> -VMScaleSetName <VmssName>
+Add-AzVmssExtension -VirtualMachineScaleSet $vmss -Name $extName -Publisher $extPublisher -Type $extType -TypeHandlerVersion "3.0" -Setting $settings
-* Deploy the extension on a Virtual Machine Scale Set:
+# Start the deployment
+Update-AzVmss -ResourceGroupName <ResourceGroupName> -VMScaleSetName <VmssName> -VirtualMachineScaleSet $vmss
+```
- ```powershell
-
- # Build settings
- $settings = ".\settings.json"
- $extName = "KeyVaultForWindows"
- $extPublisher = "Microsoft.Azure.KeyVault"
- $extType = "KeyVaultForWindows"
-
- # Add Extension to VMSS
- $vmss = Get-AzVmss -ResourceGroupName <ResourceGroupName> -VMScaleSetName <VmssName>
- Add-AzVmssExtension -VirtualMachineScaleSet $vmss -Name $extName -Publisher $extPublisher -Type $extType -TypeHandlerVersion "3.0" -Setting $settings
-
- # Start the deployment
- Update-AzVmss -ResourceGroupName <ResourceGroupName> -VMScaleSetName <VmssName> -VirtualMachineScaleSet $vmss
-
- ```
### [Version-1.0](#tab/version1)
+Use PowerShell to deploy the version 1.0 Key Vault VM extension to an existing VM or Virtual Machine Scale Sets instance.
+ > [!WARNING]
-> PowerShell clients often add `\` to `"` in the settings.json, which causes akvvm_service to fail with the error `[CertificateManagementConfiguration] Failed to parse the configuration settings with:not an object.`
-> The extra `\` and `"` characters will be visible in the portal, in **Extensions** under **Settings**. To avoid this, initialize `$settings` as a PowerShell `HashTable`:
->
-> ```powershell
-> $settings = @{"secretsManagementSettings" = @{"pollingIntervalInS"="<pollingInterval>"; "certificateStoreName"="<certStoreName>";"certificateStoreLocation"="<certStoreLoc>";"observedCertificates"=@("<observedCert1>", "<observedCert2>")};"authenticationSettings"=@{"msiEndpoint"="<msiEndpoint>";"msiClientId"="<msiClientId>"} }
-> ```
-
-The Azure PowerShell can be used to deploy the Key Vault VM extension to an existing virtual machine or Virtual Machine Scale Set.
+> PowerShell clients often prefix a quote mark `"` with a backslash `\` in the settings JSON file. The extraneous characters cause the akvvm_service to fail with the error, "[CertificateManagementConfiguration] Failed to parse the configuration settings with:not an object."
+>
+> You can see the extra backslash `\` and quote `"` characters in the Azure portal under **Settings** > **Extensions + Applications**. To avoid the error, initialize the `$settings` variable as a PowerShell `Hashtable`. Avoid extra quote mark `"` characters, and ensure the variable types match. For more information, see [Everything you wanted to know about hashtables](/powershell/scripting/learn/deep-dives/everything-about-hashtable).
+>
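
For reference, here's a minimal sketch of building `$settings` as a hashtable with the same placeholder values used in the examples that follow; it can then be passed to `Set-AzVMExtension` through the `-Settings` parameter (which accepts a hashtable) instead of `-SettingString`.

```powershell
# Minimal sketch (placeholder values): build the extension settings as a hashtable instead of a quoted JSON string
$settings = @{
    secretsManagementSettings = @{
        pollingIntervalInS       = "<pollingInterval>"
        certificateStoreName     = "<certStoreName>"
        certificateStoreLocation = "<certStoreLoc>"
        observedCertificates     = @("<observedCert1>", "<observedCert2>")
    }
    authenticationSettings = @{
        msiEndpoint = "<msiEndpoint>"
        msiClientId = "<msiClientId>"
    }
}
```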
-* Deploy the extension on a VM:
-
- ```powershell
- # Build settings
- $settings = '{"secretsManagementSettings":
- { "pollingIntervalInS": "' + <pollingInterval> +
- '", "certificateStoreName": "' + <certStoreName> +
- '", "certificateStoreLocation": "' + <certStoreLoc> +
- '", "observedCertificates": ["' + <observedCert1> + '","' + <observedCert2> + '"] },
- "authenticationSettings":
- { "msiEndpoint": "' + <msiEndpoint> +
- '", "msiClientId" :"' + <msiClientId> + '"}}'
- $extName = "KeyVaultForWindows"
- $extPublisher = "Microsoft.Azure.KeyVault"
- $extType = "KeyVaultForWindows"
-
- # Start the deployment
- Set-AzVmExtension -TypeHandlerVersion "1.0" -ResourceGroupName <ResourceGroupName> -Location <Location> -VMName <VMName> -Name $extName -Publisher $extPublisher -Type $extType -SettingString $settings
-
- ```
+#### Deploy on a VM
-* Deploy the extension on a Virtual Machine Scale Set:
+```powershell
+# Build settings
+$settings = '{"secretsManagementSettings":
+{ "pollingIntervalInS": "' + <pollingInterval> +
+'", "certificateStoreName": "' + <certStoreName> +
+'", "certificateStoreLocation": "' + <certStoreLoc> +
+'", "observedCertificates": ["' + <observedCert1> + '","' + <observedCert2> + '"] },
+"authenticationSettings":
+{ "msiEndpoint": "' + <msiEndpoint> +
+'", "msiClientId" :"' + <msiClientId> + '"}}'
+$extName = "KeyVaultForWindows"
+$extPublisher = "Microsoft.Azure.KeyVault"
+$extType = "KeyVaultForWindows"
+
+# Start the deployment
+Set-AzVmExtension -TypeHandlerVersion "1.0" -ResourceGroupName <ResourceGroupName> -Location <Location> -VMName <VMName> -Name $extName -Publisher $extPublisher -Type $extType -SettingString $settings
+```
- ```powershell
-
- # Build settings
- $settings = '{"secretsManagementSettings":
- { "pollingIntervalInS": "' + <pollingInterval> +
- '", "certificateStoreName": "' + <certStoreName> +
- '", "certificateStoreLocation": "' + <certStoreLoc> +
- '", "observedCertificates": ["' + <observedCert1> + '","' + <observedCert2> + '"] } },
- "authenticationSettings":
- { "msiEndpoint": "' + <msiEndpoint> +
- '", "msiClientId" :"' + <msiClientId> + '"}}'
- $extName = "KeyVaultForWindows"
- $extPublisher = "Microsoft.Azure.KeyVault"
- $extType = "KeyVaultForWindows"
-
- # Add Extension to VMSS
- $vmss = Get-AzVmss -ResourceGroupName <ResourceGroupName> -VMScaleSetName <VmssName>
- Add-AzVmssExtension -VirtualMachineScaleSet $vmss -Name $extName -Publisher $extPublisher -Type $extType -TypeHandlerVersion "1.0" -Setting $settings
-
- # Start the deployment
- Update-AzVmss -ResourceGroupName <ResourceGroupName> -VMScaleSetName <VmssName> -VirtualMachineScaleSet $vmss
-
- ```
+#### Deploy on a Virtual Machine Scale Sets instance
+
+```powershell
+# Build settings
+$settings = '{"secretsManagementSettings":
+{ "pollingIntervalInS": "' + <pollingInterval> +
+'", "certificateStoreName": "' + <certStoreName> +
+'", "certificateStoreLocation": "' + <certStoreLoc> +
+'", "observedCertificates": ["' + <observedCert1> + '","' + <observedCert2> + '"] } },
+"authenticationSettings":
+{ "msiEndpoint": "' + <msiEndpoint> +
+'", "msiClientId" :"' + <msiClientId> + '"}}'
+$extName = "KeyVaultForWindows"
+$extPublisher = "Microsoft.Azure.KeyVault"
+$extType = "KeyVaultForWindows"
+
+# Add extension to Virtual Machine Scale Sets
+$vmss = Get-AzVmss -ResourceGroupName <ResourceGroupName> -VMScaleSetName <VmssName>
+Add-AzVmssExtension -VirtualMachineScaleSet $vmss -Name $extName -Publisher $extPublisher -Type $extType -TypeHandlerVersion "1.0" -Setting $settings
+# Start the deployment
+Update-AzVmss -ResourceGroupName <ResourceGroupName> -VMScaleSetName <VmssName> -VirtualMachineScaleSet $vmss
+```
## Azure CLI deployment
-### [Version-3.0](#tab/version3)
+The Azure Key Vault VM extension can be deployed by using the Azure CLI. Save Key Vault VM extension settings to a JSON file (settings.json).
-Save Key Vault VM extension settings to json file.
+The following JSON snippets provide example settings for deploying the Key Vault VM extension with the Azure CLI.
+
+### [Version-3.0](#tab/version3)
-Example settings (settings.json):
```json { "secretsManagementSettings": {
Example settings (settings.json):
} } ```-
-The Azure CLI can be used to deploy the Key Vault VM extension to an existing virtual machine or Virtual Machine Scale Set.
-* Deploy the extension on a VM:
+#### Deploy on a VM
- ```azurecli
- # Start the deployment
- az vm extension set --name "KeyVaultForWindows" `
- --publisher Microsoft.Azure.KeyVault `
- --resource-group "<resourcegroup>" `
- --vm-name "<vmName>" `
- --settings "@settings.json"
- ```
-
-* Deploy the extension on a Virtual Machine Scale Set:
+```azurecli
+# Start the deployment
+az vm extension set --name "KeyVaultForWindows" `
+ --publisher Microsoft.Azure.KeyVault `
+ --resource-group "<resourcegroup>" `
+ --vm-name "<vmName>" `
+ --settings "@settings.json"
+```
- ```azurecli
- # Start the deployment
- az vmss extension set --name "KeyVaultForWindows" `
- --publisher Microsoft.Azure.KeyVault `
- --resource-group "<resourcegroup>" `
- --vmss-name "<vmName>" `
- --settings "@settings.json"
+#### Deploy on a Virtual Machine Scale Sets instance
+```azurecli
+# Start the deployment
+az vmss extension set --name "KeyVaultForWindows" `
+ --publisher Microsoft.Azure.KeyVault `
+ --resource-group "<resourcegroup>" `
+ --vmss-name "<vmssName>" `
+ --settings "@settings.json"
+```
### [Version-1.0](#tab/version1)
-The Azure CLI can be used to deploy the Key Vault VM extension to an existing virtual machine or Virtual Machine Scale Set.
+Use the Azure CLI to deploy the version 1.0 Key Vault VM extension to an existing VM or Virtual Machine Scale Sets instance.
-* Deploy the extension on a VM:
+#### Deploy on a VM
- ```azurecli
- # Start the deployment
- az vm extension set --name "KeyVaultForWindows" `
- --publisher Microsoft.Azure.KeyVault `
- --resource-group "<resourcegroup>" `
- --vm-name "<vmName>" `
- --settings '{\"secretsManagementSettings\": { \"pollingIntervalInS\": \"<pollingInterval>\", \"certificateStoreName\": \"<certStoreName>\", \"certificateStoreLocation\": \"<certStoreLoc>\", \"observedCertificates\": [\" <observedCert1> \", \" <observedCert2> \"] },
- \"authenticationSettings\": { \"msiEndpoint\": \"<msiEndpoint>\", \"msiClientId\": \"<msiClientId>\"}}'
- ```
-
-* Deploy the extension on a Virtual Machine Scale Set:
+```azurecli
+# Start the deployment
+az vm extension set --name "KeyVaultForWindows" `
+ --publisher Microsoft.Azure.KeyVault `
+ --resource-group "<resourcegroup>" `
+ --vm-name "<vmName>" `
+ --settings '{\"secretsManagementSettings\": { \"pollingIntervalInS\": \"<pollingInterval>\", \"certificateStoreName\": \"<certStoreName>\", \"certificateStoreLocation\": \"<certStoreLoc>\", \"observedCertificates\": [\" <observedCert1> \", \" <observedCert2> \"] }, \"authenticationSettings\": { \"msiEndpoint\": \"<msiEndpoint>\", \"msiClientId\": \"<msiClientId>\"}}'
+```
+
+#### Deploy on a Virtual Machine Scale Sets instance
+
+```azurecli
+# Start the deployment
+az vmss extension set --name "KeyVaultForWindows" `
+ --publisher Microsoft.Azure.KeyVault `
+ --resource-group "<resourcegroup>" `
+  --vmss-name "<vmssName>" `
+ --settings '{\"secretsManagementSettings\": { \"pollingIntervalInS\": \"<pollingInterval>\", \"certificateStoreName\": \"<certStoreName>\", \"certificateStoreLocation\": \"<certStoreLoc>\", \"observedCertificates\": [\" <observedCert1> \", \" <observedCert2> \"] }, \"authenticationSettings\": { \"msiEndpoint\": \"<msiEndpoint>\", \"msiClientId\": \"<msiClientId>\"}}'
+```
- ```azurecli
- # Start the deployment
- az vmss extension set --name "KeyVaultForWindows" `
- --publisher Microsoft.Azure.KeyVault `
- --resource-group "<resourcegroup>" `
- --vmss-name "<vmName>" `
- --settings '{\"secretsManagementSettings\": { \"pollingIntervalInS\": \"<pollingInterval>\", \"certificateStoreName\": \"<certStoreName>\", \"certificateStoreLocation\": \"<certStoreLoc>\", \"observedCertificates\": [\" <observedCert1> \", \" <observedCert2> \"] },
- \"authenticationSettings\": { \"msiEndpoint\": \"<msiEndpoint>\", \"msiClientId\": \"<msiClientId>\"}}'
-## Troubleshoot and support
+## <a name="troubleshoot-and-support"></a> Troubleshoot issues
-### Frequently Asked Questions
+Here are some suggestions for how to troubleshoot deployment issues.
-#### Is there a limit on the number of observedCertificates you can set up?
-No, Key Vault VM Extension doesn't have limit on the number of observedCertificates.
-#### What will be the default permission if no account is provided in settings?
-Administrators and SYSTEM will get Full Control by default.
-#### How do you determine if a certificate key is going to be CAPI1 or CNG?
-We rely on the default behavior of [PFXImportCertStore API](/windows/win32/api/wincrypt/nf-wincrypt-pfximportcertstore). By default, if a certificate has Provider Name attribute that matches with CAPI1, certificate will be imported using CAPI1 certificate, else it will be imported using CNG APIs.
+### Check frequently asked questions
-### Troubleshoot
+#### Is there a limit on the number of observed certificates?
-Data about the state of extension deployments can be retrieved from the Azure portal, and by using the Azure PowerShell. To see the deployment state of extensions for a given VM, run the following command using the Azure PowerShell.
+No. The Key Vault VM extension doesn't limit the number of observed certificates (`observedCertificates`).
-**Azure PowerShell**
-```powershell
-Get-AzVMExtension -VMName <vmName> -ResourceGroupname <resource group name>
-```
+#### What's the default permission when no account is specified?
-**Azure CLI**
-```azurecli
- az vm get-instance-view --resource-group <resource group name> --name <vmName> --query "instanceView.extensions"
-```
+By default, Administrators and SYSTEM receive Full Control.
+
+#### How do you determine if a certificate key is CAPI1 or CNG?
+
+The extension relies on the default behavior of the [PFXImportCertStore API](/windows/win32/api/wincrypt/nf-wincrypt-pfximportcertstore). By default, if a certificate has a Provider Name attribute that matches with CAPI1, then the certificate is imported by using CAPI1 APIs. Otherwise, the certificate is imported by using CNG APIs.
+
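
To check which provider holds a certificate's private key on the VM, one option is `certutil`; this is a minimal sketch with a hypothetical thumbprint placeholder:

```powershell
# Minimal sketch (hypothetical thumbprint): list the certificate entry, including its key provider, from the local machine MY store
certutil -store My "<certificateThumbprint>"
```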
+#### Does the extension support IIS certificate autobinding?
+
+No. The Azure Key Vault VM extension doesn't support IIS automatic rebinding. The automatic rebinding process requires certificate services lifecycle notifications, and the extension doesn't write a certificate renewal event (event ID 1001) when it installs newer certificate versions.
+
+The recommended approach is to use the `linkOnRenewal` property in the Key Vault VM extension schema. When `linkOnRenewal` is set to `true`, the previous version of a certificate is chained to its successor at installation time via the `CERT_RENEWAL_PROP_ID` certificate extension property. The chaining enables S-channel to pick up the most recent valid certificate with a matching SAN, so SSL certificates rotate automatically without redeployment or manual rebinding.
+
+### View extension status
+
+Check the status of your extension deployment in the Azure portal, or by using PowerShell or the Azure CLI.
+
+To see the deployment state of extensions for a given VM, run the following commands.
+
+- Azure PowerShell:
+
+ ```powershell
+ Get-AzVMExtension -ResourceGroupName <myResourceGroup> -VMName <myVM> -Name <myExtensionName>
+ ```
+
+- The Azure CLI:
+
+ ```azurecli
+ az vm get-instance-view --resource-group <myResourceGroup> --name <myVM> --query "instanceView.extensions"
+ ```
+
+### Review logs and configuration
+
+The Key Vault VM extension logs exist only locally on the VM. Review the log details in the following table to help with troubleshooting; a quick sketch for viewing the newest log follows the table.
+
+| Log file | Description |
+| --- | --- |
+| C:\WindowsAzure\Logs\WaAppAgent.log | Shows when updates occur to the extension. |
+| C:\WindowsAzure\Logs\Plugins\Microsoft.Azure.KeyVault.KeyVaultForWindows\<_most recent version_>\ | Shows the status of certificate download. The download location is always the Windows computer's MY store (certlm.msc). |
+| C:\Packages\Plugins\Microsoft.Azure.KeyVault.KeyVaultForWindows\<_most recent version_>\RuntimeSettings\ | The Key Vault VM Extension service logs show the status of the akvvm_service service. |
+| C:\Packages\Plugins\Microsoft.Azure.KeyVault.KeyVaultForWindows\<_most recent version_>\Status\ | The configuration and binaries for the Key Vault VM Extension service. |
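
The following PowerShell snippet is a quick sketch for viewing the tail of the newest extension log, assuming the default log location from the table above:

```powershell
# Minimal sketch (assumes default paths): show the last lines of the newest Key Vault VM extension log file
$logRoot = "C:\WindowsAzure\Logs\Plugins\Microsoft.Azure.KeyVault.KeyVaultForWindows"
Get-ChildItem -Path $logRoot -Recurse -Filter *.log |
    Sort-Object LastWriteTime -Descending |
    Select-Object -First 1 |
    Get-Content -Tail 50
```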
-#### Logs and configuration
-The Key Vault VM extension logs only exist locally on the VM and are most informative when it comes to troubleshooting
+### Get support
-|Location|Description|
-|--|--|
-| C:\WindowsAzure\Logs\WaAppAgent.log | Shows when an update to the extension occurred. |
-| C:\WindowsAzure\Logs\Plugins\Microsoft.Azure.KeyVault.KeyVaultForWindows\\\<most recent version\>\ | Shows the status of certificate download. The download location will always be the Windows computer's MY store (certlm.msc). |
-| C:\Packages\Plugins\Microsoft.Azure.KeyVault.KeyVaultForWindows\\\<most recent version\>\RuntimeSettings\ | The Key Vault VM Extension service logs show the status of the akvvm_service service. |
-| C:\Packages\Plugins\Microsoft.Azure.KeyVault.KeyVaultForWindows\\\<most recent version\>\Status\ | The configuration and binaries for Key Vault VM Extension service. |
-|||
+Here are some other options to help you resolve deployment issues:
+- For assistance, contact the Azure experts on the [Q&A and Stack Overflow forums](https://azure.microsoft.com/support/community/).
-### Support
+- If you don't find an answer on the site, you can post a question for input from Microsoft or other members of the community.
-If you need more help at any point in this article, you can contact the Azure experts on the [MSDN Azure and Stack Overflow forums](https://azure.microsoft.com/support/forums/). Alternatively, you can file an Azure support incident. Go to the [Azure support site](https://azure.microsoft.com/support/options/) and select Get support. For information about using Azure Support, read the [Microsoft Azure support FAQ](https://azure.microsoft.com/support/faq/).
+- You can also [Contact Microsoft Support](https://support.microsoft.com/contactus/). For information about using Azure support, read the [Azure support FAQ](https://azure.microsoft.com/support/legal/faq/).
virtual-machines How To Enable Write Accelerator https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/how-to-enable-write-accelerator.md
Previously updated : 12/10/2021 Last updated : 04/11/2023
Specify either $true or $false to control support of Azure Write Accelerator wit
Examples of commands could look like:
-```powershell
+```azurepowershell-interactive
New-AzVMConfig | Set-AzVMOsDisk | Add-AzVMDataDisk -Name "datadisk1" | Add-AzVMDataDisk -Name "logdisk1" -WriteAccelerator | New-AzVM Get-AzVM | Update-AzVM -OsDiskWriteAccelerator $true
You can use this script to add a new disk to your VM. The disk created with this
Replace `myVM`, `myWAVMs`, `log001`, size of the disk, and LunID of the disk with values appropriate for your specific deployment.
-```powershell
+```azurepowershell-interactive
# Specify your VM Name $vmName="myVM" #Specify your Resource Group
Update-AzVM -ResourceGroupName $rgname -VM $vm
You can use this script to enable Write Accelerator on an existing disk. Replace `myVM`, `myWAVMs`, and `test-log001` with values appropriate for your specific deployment. The script adds Write Accelerator to an existing disk where the value for **$newstatus** is set to '$true'. Using the value '$false' will disable Write Accelerator on a given disk.
-```powershell
+```azurepowershell-interactive
#Specify your VM Name $vmName="myVM" #Specify your Resource Group
Replace the terms within '<< >>' with your data, including the file name the J
The output could look like:
-```JSON
+```output
{ "properties": { "vmId": "2444c93e-f8bb-4a20-af2d-1658d9dbbbcb",
Then update the existing deployment with this command: `armclient PUT /subscript
The output should look like the one below. You can see that Write Accelerator enabled for one disk.
-```JSON
+```output
{ "properties": { "vmId": "2444c93e-f8bb-4a20-af2d-1658d9dbbbcb",
virtual-machines Instance Metadata Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/instance-metadata-service.md
Previously updated : 02/22/2023 Last updated : 04/11/2023
Here's sample code to retrieve all metadata for an instance. To access a specifi
#### [Windows](#tab/windows/)
-```powershell
+```azurepowershell-interactive
Invoke-RestMethod -Headers @{"Metadata"="true"} -Method GET -NoProxy -Uri "http://169.254.169.254/metadata/instance?api-version=2021-02-01" | ConvertTo-Json -Depth 64 ```
Invoke-RestMethod -Headers @{"Metadata"="true"} -Method GET -NoProxy -Uri "http:
#### [Linux](#tab/linux/) - ```bash curl -s -H Metadata:true --noproxy "*" "http://169.254.169.254/metadata/instance?api-version=2021-02-01" | jq ```
The `jq` utility is available in many cases, but not all. If the `jq` utility is
[!INCLUDE [imds-full-instance-response](./includes/imds-full-instance-response.md)] - ## Security and authentication The Instance Metadata Service is only accessible from within a running virtual machine instance on a non-routable IP address. VMs can only interact with their own metadata/functionality. The API is HTTP only and never leaves the host. In order to ensure that requests are directly intended for IMDS and prevent unintended or unwanted redirection of requests, requests:+ - **Must** contain the header `Metadata: true` - Must **not** contain an `X-Forwarded-For` header
Any request that doesn't meet **both** of these requirements are rejected by the
If it isn't necessary for every process on the VM to access IMDS endpoint, you can set local firewall rules to limit the access. For example, if only a known system service needs to access instance metadata service, you can set a firewall rule on IMDS endpoint, only allowing the specific process(es) to access, or denying access for the rest of the processes. - ## Proxies IMDS is **not** intended to be used behind a proxy and doing so is unsupported. Most HTTP clients provide an option for you to disable proxies on your requests, and this functionality must be utilized when communicating with IMDS. Consult your client's documentation for details.
Endpoints may support required and/or optional parameters. See [Schema](#schema)
IMDS endpoints support HTTP query string parameters. For example:
-```
+```URL
http://169.254.169.254/metadata/instance/compute?api-version=2021-01-01&format=json ```
Requests with duplicate query parameter names will be rejected.
For some endpoints that return larger json blobs, we support appending route parameters to the request endpoint to filter down to a subset of the response:
-```
+```URL
http://169.254.169.254/metadata/<endpoint>/[<filter parameter>/...]?<query parameters> ```+ The parameters correspond to the indexes/keys that would be used to walk down the json object were you interacting with a parsed representation. For example, `/metadata/instance` returns the json object:+ ```json { "compute": { ... },
For example, `/metadata/instance` returns the json object:
} ```
-If we want to filter the response down to just the compute property, we would send the request:
-```
+If we want to filter the response down to just the compute property, we would send the request:
+
+```URL
http://169.254.169.254/metadata/instance/compute?api-version=<version> ```
-Similarly, if we want to filter to a nested property or specific array element we keep appending keys:
-```
+Similarly, if we want to filter to a nested property or specific array element we keep appending keys:
+
+```URL
http://169.254.169.254/metadata/instance/network/interface/0?api-version=<version> ```+ would filter to the first element from the `Network.interface` property and return: ```json
To access a non-default response format, specify the requested format as a query
#### [Windows](#tab/windows/)
-```powershell
+```azurepowershell-interactive
Invoke-RestMethod -Headers @{"Metadata"="true"} -Method GET -NoProxy -Uri "http://169.254.169.254/metadata/instance?api-version=2017-08-01&format=text" ```
When you don't specify a version, you get an error with a list of the newest sup
``` #### Supported API versions+ - 2021-12-13 - 2021-11-15 - 2021-11-01
When you don't specify a version, you get an error with a list of the newest sup
- 2018-10-01 - 2018-04-02 - 2018-02-01-- 2017-12-01
+- 2017-12-01
- 2017-10-01-- 2017-08-01
+- 2017-08-01
- 2017-04-02 - 2017-03-01
None (this endpoint is unversioned).
### Get VM metadata
-Exposes the important metadata for the VM instance, including compute, network, and storage.
+Exposes the important metadata for the VM instance, including compute, network, and storage.
``` GET /metadata/instance
This endpoint supports response filtering via [route parameters](#route-paramete
[!INCLUDE [imds-full-instance-response](./includes/imds-full-instance-response.md)] - Schema breakdown: **Compute**
Data | Description | Version introduced |
| `keyEncryptionKey.sourceVault.id` | The location of the key encryption key | 2021-11-01 | `keyEncryptionKey.keyUrl` | The location of the key | 2021-11-01 - The resource disk object contains the size of the [Local Temp Disk](managed-disks-overview.md#temporary-disk) attached to the VM, if it has one, in kilobytes. If there's [no local temp disk for the VM](azure-vms-no-temp-disk.yml), this value is 0.
To set up user data, utilize the quickstart template [here](https://aka.ms/ImdsU
> [!NOTE] > Security notice: IMDS is open to all applications on the VM, sensitive data should not be placed in the user data. - #### [Windows](#tab/windows/)
-```powershell
+```azurepowershell-interactive
$userData = Invoke-RestMethod -Headers @{"Metadata"="true"} -Method GET -NoProxy -Uri "http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text" [System.Text.Encoding]::UTF8.GetString([Convert]::FromBase64String($userData)) ```
curl -H Metadata:true --noproxy "*" "http://169.254.169.254/metadata/instance/co
- #### Sample 1: Tracking VM running on Azure As a service provider, you may require to track the number of VMs running your software or have agents that need to track uniqueness of the VM. To be able to get a unique ID for a VM, use the `vmId` field from Instance Metadata Service.
As a service provider, you may require to track the number of VMs running your s
#### [Windows](#tab/windows/)
-```powershell
+```azurepowershell-interactive
Invoke-RestMethod -Headers @{"Metadata"="true"} -Method GET -NoProxy -Uri "http://169.254.169.254/metadata/instance/compute/vmId?api-version=2017-08-01&format=text" ```
curl -H Metadata:true --noproxy "*" "http://169.254.169.254/metadata/instance/co
**Response**
-```
+```output
5c08b38e-4d57-4c23-ac45-aca61037f084 ```
You can query this data directly via IMDS.
#### [Windows](#tab/windows/)
-```powershell
+```azurepowershell-interactive
Invoke-RestMethod -Headers @{"Metadata"="true"} -Method GET -NoProxy -Uri "http://169.254.169.254/metadata/instance/compute/platformFaultDomain?api-version=2017-08-01&format=text" ```
curl -H Metadata:true --noproxy "*" "http://169.254.169.254/metadata/instance/co
**Response**
-```
+```output
0 ```
Tags may have been applied to your Azure VM to logically organize them into a ta
#### [Windows](#tab/windows/)
-```powershell
+```azurepowershell-interactive
Invoke-RestMethod -Headers @{"Metadata"="true"} -Method GET -NoProxy -Uri "http://169.254.169.254/metadata/instance/compute/tags?api-version=2017-08-01&format=text" ```
curl -H Metadata:true --noproxy "*" "http://169.254.169.254/metadata/instance/co
**Response**
-```
+```output
Department:IT;ReferenceNumber:123456;TestStatus:Pending ```
The `tags` field is a string with the tags delimited by semicolons. This output
#### [Windows](#tab/windows/)
-```powershell
+```azurepowershell-interactive
Invoke-RestMethod -Headers @{"Metadata"="true"} -Method GET -NoProxy -Uri "http://169.254.169.254/metadata/instance/compute/tagsList?api-version=2019-06-04" | ConvertTo-Json -Depth 64 ```
The `jq` utility is available in many cases, but not all. If the `jq` utility is
- #### Sample 4: Get more information about the VM during support case As a service provider, you may get a support call where you would like to know more information about the VM. Asking the customer to share the compute metadata can provide basic information for the support professional to know about the kind of VM on Azure.
As a service provider, you may get a support call where you would like to know m
#### [Windows](#tab/windows/)
-```powershell
+```azurepowershell-interactive
Invoke-RestMethod -Headers @{"Metadata"="true"} -Method GET -NoProxy -Uri "http://169.254.169.254/metadata/instance/compute?api-version=2020-09-01" | ConvertTo-Json -Depth 64 ```
curl -H Metadata:true --noproxy "*" "http://169.254.169.254/metadata/instance/co
> The response is a JSON string. The following example response is pretty-printed for readability. #### [Windows](#tab/windows/)+ ```json { "azEnvironment": "AZUREPUBLICCLOUD",
curl -H Metadata:true --noproxy "*" "http://169.254.169.254/metadata/instance/co
``` #### [Linux](#tab/linux/)+ ```json { "azEnvironment": "AZUREPUBLICCLOUD",
Azure has various sovereign clouds like [Azure Government](https://azure.microso
#### [Windows](#tab/windows/)
-```powershell
+```azurepowershell-interactive
Invoke-RestMethod -Headers @{"Metadata"="true"} -Method GET -NoProxy -Uri "http://169.254.169.254/metadata/instance/compute/azEnvironment?api-version=2018-10-01&format=text" ```
curl -H Metadata:true --noproxy "*" "http://169.254.169.254/metadata/instance/co
**Response**
-```
+```output
AzurePublicCloud ```
The cloud and the values of the Azure environment are listed here.
| [Azure China 21Vianet](https://azure.microsoft.com/global-infrastructure/china/) | AzureChinaCloud | [Azure Germany](https://azure.microsoft.com/overview/clouds/germany/) | AzureGermanCloud - #### Sample 6: Retrieve network information **Request** #### [Windows](#tab/windows/)
-```powershell
+```azurepowershell-interactive
Invoke-RestMethod -Headers @{"Metadata"="true"} -Method GET -NoProxy -Uri "http://169.254.169.254/metadata/instance/network?api-version=2017-08-01" | ConvertTo-Json -Depth 64 ```
curl -H Metadata:true --noproxy "*" "http://169.254.169.254/metadata/instance/ne
#### [Windows](#tab/windows/)
-```powershell
+```azurepowershell-interactive
Invoke-RestMethod -Headers @{"Metadata"="true"} -Method GET -NoProxy -Uri "http://169.254.169.254/metadata/instance/network/interface/0/ipv4/ipAddress/0/publicIpAddress?api-version=2017-08-01&format=text" ```
The decoded document contains the following fields:
> For Classic (non-Azure Resource Manager) VMs, only the vmId is guaranteed to be populated. Example document:+ ```json { "nonce":"20201130-211924",
Vendors in Azure Marketplace want to ensure that their software is licensed to r
#### [Windows](#tab/windows/)
-```powershell
+```azurepowershell-interactive
# Get the signature $attestedDoc = Invoke-RestMethod -Headers @{"Metadata"="true"} -Method GET -NoProxy -Uri http://169.254.169.254/metadata/attested/document?api-version=2020-09-01 # Decode the signature
$signature = [System.Convert]::FromBase64String($attestedDoc.signature)
Verify that the signature is from Microsoft Azure, and check the certificate chain for errors.
-```powershell
+```azurepowershell-interactive
# Get certificate chain $cert = [System.Security.Cryptography.X509Certificates.X509Certificate2]($signature) $chain = New-Object -TypeName System.Security.Cryptography.X509Certificates.X509Chain
You can then request tokens for managed identities from IMDS. Use these tokens t
For detailed steps to enable this feature, see [Acquire an access token](../active-directory/managed-identities-azure-resources/how-to-use-vm-token.md). ## Load Balancer Metadata+ When you place virtual machine or virtual machine set instances behind an Azure Standard Load Balancer, you can use IMDS to retrieve metadata related to the load balancer and the instances. For more information, see [Retrieve load balancer information](../load-balancer/instance-metadata-service-load-balancer.md). ## Scheduled events
-You can obtain the status of the scheduled events by using IMDS. Then the user can specify a set of actions to run upon these events. For more information, see [Scheduled events for Linux](./linux/scheduled-events.md) or [Scheduled events for Windows](./windows/scheduled-events.md).
+You can obtain the status of the scheduled events by using IMDS. Then the user can specify a set of actions to run upon these events. For more information, see [Scheduled events for Linux](./linux/scheduled-events.md) or [Scheduled events for Windows](./windows/scheduled-events.md).
## Sample code in different languages
If there's a data element not found or a malformed request, the Instance Metadat
## Frequently asked questions -- I'm getting the error `400 Bad Request, Required metadata header not specified`. What does this mean?
+- **I'm getting the error `400 Bad Request, Required metadata header not specified`. What does this mean?**
- IMDS requires the header `Metadata: true` to be passed in the request. Passing this header in the REST call allows access to IMDS. -- Why am I not getting compute information for my VM?
+- **Why am I not getting compute information for my VM?**
- Currently, IMDS only supports instances created with Azure Resource Manager. -- I created my VM through Azure Resource Manager some time ago. Why am I not seeing compute metadata information?
+- **I created my VM through Azure Resource Manager some time ago. Why am I not seeing compute metadata information?**
- If you created your VM after September 2016, add a [tag](../azure-resource-manager/management/tag-resources.md) to start seeing compute metadata. If you created your VM before September 2016, add or remove extensions or data disks to the VM instance to refresh metadata. -- Is user data the same as custom data?
+- **Is user data the same as custom data?**
- User data offers similar functionality to custom data, allowing you to pass your own metadata to the VM instance. The difference is that user data is retrieved through IMDS and is persistent throughout the lifetime of the VM instance. The existing custom data feature continues to work as described in [this article](custom-data.md). However, you can only get custom data through the local system folder, not through IMDS. -- Why am I not seeing all data populated for a new version?
+- **Why am I not seeing all data populated for a new version?**
- If you created your VM after September 2016, add a [tag](../azure-resource-manager/management/tag-resources.md) to start seeing compute metadata. If you created your VM before September 2016, add or remove extensions or data disks to the VM instance to refresh metadata. -- Why am I getting the error `500 Internal Server Error` or `410 Resource Gone`?
+- **Why am I getting the error `500 Internal Server Error` or `410 Resource Gone`?**
- Retry your request. For more information, see [Transient fault handling](/azure/architecture/best-practices/transient-faults). If the problem persists, create a support issue in the Azure portal for the VM. -- Would this work for scale set instances?
+- **Would this work for scale set instances?**
- Yes, IMDS is available for scale set instances. -- I updated my tags in my scale sets, but they don't appear in the instances (unlike single instance VMs). Am I doing something wrong?
+- **I updated my tags in my scale sets, but they don't appear in the instances (unlike single instance VMs). Am I doing something wrong?**
- Currently tags for scale sets only show to the VM on a reboot, reimage, or disk change to the instance. -- Why am I'm not seeing the SKU information for my VM in `instance/compute` details?
+- **Why am I not seeing the SKU information for my VM in `instance/compute` details?**
- For custom images created from Azure Marketplace, Azure platform doesn't retain the SKU information for the custom image and the details for any VMs created from the custom image. This is by design and hence not surfaced in the VM `instance/compute` details. -- Why is my request timed out for my call to the service?
+- **Why does my call to the service time out?**
- Metadata calls must be made from the primary IP address assigned to the primary network card of the VM. Additionally, if you've changed your routes, there must be a route for the 169.254.169.254/32 address in your VM's local routing table. ### [Windows](#tab/windows/) 1. Dump your local routing table and look for the IMDS entry. For example:+ ```console
- > route print
+ route print
+ ```
+
+ ```output
IPv4 Route Table =========================================================================== Active Routes:
If there's a data element not found or a malformed request, the Instance Metadat
169.254.169.254 255.255.255.255 172.16.69.1 172.16.69.7 11 ... (continues) ... ```+ 1. Verify that a route exists for `169.254.169.254`, and note the corresponding network interface (for example, `172.16.69.7`). 1. Dump the interface configuration and find the interface that corresponds to the one referenced in the routing table, noting the MAC (physical) address.+ ```console
- > ipconfig /all
+ ipconfig /all
+ ```
+
+ ```output
... (continues) ... Ethernet adapter Ethernet:
If there's a data element not found or a malformed request, the Instance Metadat
Subnet Mask . . . . . . . . . . . : 255.255.255.0 ... (continues) ... ```+ 1. Confirm that the interface corresponds to the VM's primary NIC and primary IP. You can find the primary NIC and IP by looking at the network configuration in the Azure portal, or by looking it up with the Azure CLI. Note the private IPs (and the MAC address if you're using the CLI). Here's a PowerShell CLI example:
- ```powershell
+
+ ```azurepowershell-interactive
$ResourceGroup = '<Resource_Group>' $VmName = '<VM_Name>' $NicNames = az vm nic list --resource-group $ResourceGroup --vm-name $VmName | ConvertFrom-Json | Foreach-Object { $_.id.Split('/')[-1] }
If there's a data element not found or a malformed request, the Instance Metadat
$Nic = az vm nic show --resource-group $ResourceGroup --vm-name $VmName --nic $NicName | ConvertFrom-Json Write-Host $NicName, $Nic.primary, $Nic.macAddress }
- # Output: wintest767 True 00-0D-3A-E5-1C-C0
```+
+ ```output
+ wintest767 True 00-0D-3A-E5-1C-C0
+ ```
+ 1. If they don't match, update the routing table so that the primary NIC and IP are targeted. ### [Linux](#tab/linux/) 1. Dump your local routing table with a command such as `netstat -r` and look for the IMDS entry (e.g.):
- ```console
- ~$ netstat -r
+
+ ```bash
+ netstat -r
+ ```
+
+ ```output
Kernel IP routing table Destination Gateway Genmask Flags MSS Window irtt Iface default _gateway 0.0.0.0 UG 0 0 0 eth0
If there's a data element not found or a malformed request, the Instance Metadat
169.254.169.254 _gateway 255.255.255.255 UGH 0 0 0 eth0 172.16.69.0 0.0.0.0 255.255.255.0 U 0 0 0 eth0 ```
- 1. Verify that a route exists for `169.254.169.254`, and note the corresponding network interface (e.g. `eth0`).
- 1. Dump the interface configuration for the corresponding interface in the routing table (note the exact name of the configuration file may vary)
- ```console
- ~$ cat /etc/netplan/50-cloud-init.yaml
+
+ 2. Verify that a route exists for `169.254.169.254`, and note the corresponding network interface (e.g. `eth0`).
+ 3. Dump the interface configuration for the corresponding interface in the routing table (note the exact name of the configuration file may vary)
+
+ ```bash
+ cat /etc/netplan/50-cloud-init.yaml
+ ```
+
+ ```output
network: ethernets: eth0:
If there's a data element not found or a malformed request, the Instance Metadat
set-name: eth0 version: 2 ```
- 1. If you're using a dynamic IP, note the MAC address. If you're using a static IP, you may note the listed IP(s) and/or the MAC address.
- 1. Confirm that the interface corresponds to the VM's primary NIC and primary IP. You can find the primary NIC and IP by looking at the network configuration in the Azure portal, or by looking it up with the Azure CLI. Note the private IPs (and the MAC address if you're using the CLI). Here's a PowerShell CLI example:
- ```powershell
+
+ 4. If you're using a dynamic IP, note the MAC address. If you're using a static IP, you may note the listed IP(s) and/or the MAC address.
+ 5. Confirm that the interface corresponds to the VM's primary NIC and primary IP. You can find the primary NIC and IP by looking at the network configuration in the Azure portal, or by looking it up with the Azure CLI. Note the private IPs (and the MAC address if you're using the CLI). Here's a PowerShell CLI example:
+
+ ```azurepowershell-interactive
$ResourceGroup = '<Resource_Group>' $VmName = '<VM_Name>' $NicNames = az vm nic list --resource-group $ResourceGroup --vm-name $VmName | ConvertFrom-Json | Foreach-Object { $_.id.Split('/')[-1] }
If there's a data element not found or a malformed request, the Instance Metadat
$Nic = az vm nic show --resource-group $ResourceGroup --vm-name $VmName --nic $NicName | ConvertFrom-Json Write-Host $NicName, $Nic.primary, $Nic.macAddress }
- # Output: ipexample606 True 00-0D-3A-E4-C7-2E
```
- 1. If they don't match, update the routing table such that the primary NIC/IP are targeted.
+
+ ```output
+ ipexample606 True 00-0D-3A-E4-C7-2E
+ ```
+
+ 6. If they don't match, update the routing table such that the primary NIC/IP are targeted.
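If the route turns out to be missing, a minimal sketch like the following could add it (assuming `eth0` is the primary interface identified above):

```bash
# Sketch: add a host route for the IMDS endpoint through the primary NIC (interface name is an assumption)
sudo ip route add 169.254.169.254/32 dev eth0
```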
-- Fail over clustering in Windows Server
+- **Fail over clustering in Windows Server**
- When you're querying IMDS with failover clustering, it's sometimes necessary to add a route to the routing table. Here's how: 1. Open a command prompt with administrator privileges.
If there's a data element not found or a malformed request, the Instance Metadat
> [!NOTE] > The following example output is from a Windows Server VM with failover cluster enabled. For simplicity, the output contains only the IPv4 Route Table.
- ```
+ ```output
IPv4 Route Table =========================================================================== Active Routes:
You can provide product feedback and ideas to our user feedback channel under Vi
- [Scheduled events for Linux](./linux/scheduled-events.md) - [Scheduled events for Windows](./windows/scheduled-events.md)-
virtual-machines Image Builder Devops Task https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/image-builder-devops-task.md
description: In this article, you use an Azure DevOps task to inject build artif
Previously updated : 01/27/2021 Last updated : 04/11/2023
In this article, you learn how to use an Azure DevOps task to inject build artif
At this time, there are two Azure VM Image Builder DevOps tasks:
-* [*Stable* VM Image Builder task](https://marketplace.visualstudio.com/items?itemName=AzureImageBuilder.devOps-task-for-azure-image-builder): The latest stable build that's been tested, and reports no [General Data Protection Regulation (GDPR)](https://www.microsoft.com/trust-center/privacy/gdpr-overview) issues.
+* [*Stable* VM Image Builder task](https://marketplace.visualstudio.com/items?itemName=AzureImageBuilder.devOps-task-for-azure-image-builder): The latest stable build that's been tested, and reports no [General Data Protection Regulation (GDPR)](https://www.microsoft.com/trust-center/privacy/gdpr-overview) issues.
-
-* [*Unstable* VM Image Builder task](https://marketplace.visualstudio.com/items?itemName=AzureImageBuilder.devOps-task-for-azure-image-builder-canary): We offer a so-called *unstable* task so that you can test the latest updates and features before we release the task code as *stable*. After about a week, if there are no customer-reported or telemetry issues, we promote the task code to *stable*.
+* [*Unstable* VM Image Builder task](https://marketplace.visualstudio.com/items?itemName=AzureImageBuilder.devOps-task-for-azure-image-builder-canary): We offer a so-called *unstable* task so that you can test the latest updates and features before we release the task code as *stable*. After about a week, if there are no customer-reported or telemetry issues, we promote the task code to *stable*.
## Prerequisites
Before you begin, you must:
* Have an Azure DevOps Services (formerly Visual Studio Team Services, or VSTS) account, and a Build Pipeline created. * Register and enable the VM Image Builder feature requirements in the subscription that's used by the pipelines:
- * [Azure PowerShell](../windows/image-builder-powershell.md#register-features)
- * [The Azure CLI](../windows/image-builder.md#register-the-features)
-
+ * [Azure PowerShell](../windows/image-builder-powershell.md#register-features)
+ * [The Azure CLI](../windows/image-builder.md#register-the-features)
+ * Create a standard Azure storage account in the source image resource group. You can use other resource groups or storage accounts. The storage account is used to transfer the build artifacts from the DevOps task to the image.
- ```powerShell
+ ```azurepowershell-interactive
# Azure PowerShell $timeInt=$(get-date -UFormat "%s") $storageAccName="aibstorage"+$timeInt
Before you begin, you must:
New-AzStorageAccount -ResourceGroupName $strResourceGroup -Name $storageAccName -Location $location -SkuName Standard_LRS ```
- ```azurecli
+ ```azurecli-interactive
# The Azure CLI location=westus scriptStorageAcc=aibstordot$(date +'%s')
In the dropdown list, select the subscription that you want VM Image Builder to
### Resource group Use the resource group where the temporary image template artifact will be stored. When you create a template artifact, another temporary VM Image Builder resource group, `IT_<DestinationResourceGroup>_<TemplateName>_guid`, is created. The temporary resource group stores the image metadata, such as scripts. At the end of the task, the image template artifact and temporary VM Image Builder resource group is deleted.
-
+ ### Location The location is the region where VM Image Builder will run. Only a set number of [regions](../image-builder-overview.md#regions) are supported. The source images must be present in this location. For example, if you're using Azure Compute Gallery (formerly Shared Image Gallery), a replica must exist in that region. ### Managed identity (required)+ VM Image Builder requires a managed identity, which it uses to read source custom images, connect to Azure Storage, and create custom images. For more information, see [Learn about VM Image Builder](../image-builder-overview.md#permissions). ### Virtual network support
You can configure the created VM to be in a specific virtual network. When you c
The source images must be of the supported VM Image Builder operating systems. You can choose existing custom images in the same region that VM Image Builder is running from: * Managed Image: Pass in the resource ID. For example:+ ```json /subscriptions/<subscriptionID>/resourceGroups/<rgName>/providers/Microsoft.Compute/images/<imageName> ``` * Compute Gallery: Pass in the resource ID of the image version. For example:+ ```json /subscriptions/$subscriptionID/resourceGroups/$sigResourceGroup/providers/Microsoft.Compute/galleries/$sigName/images/$imageDefName/versions/<versionNumber> ```
The task runs the following Windows Update configuration:
"exclude:$_.Title -like '*Preview*'", "include:$true" ```+ The task installs important and recommended Windows Updates that aren't *preview* versions. #### Handling reboots
Select the **Build Path** button to choose the build folder that you want to be
> [!IMPORTANT] > When you're adding a repo artifact, you might find that the directory name is prefixed with an underscore character (_). The underscore can cause issues with the inline commands. Be sure to use the appropriate quotation marks in the commands.
->
+>
The following example explains how this works:
The following example explains how this works:
* For Windows: Files exist in the *C:* drive. A directory named *buildArtifacts* is created, which includes the *webapp* directory.
-* For Linux: Files exist in the */tmp* directory. The *webapp* directory is created, which includes all the files and directories. Because this is a temporary directory, you must move the files out of it. Otherwise, they'll be deleted.
+* For Linux: Files exist in the `/tmp` directory. The `webapp` directory is created, which includes all the files and directories. Because this is a temporary directory, you must move the files out of it. Otherwise, they'll be deleted.
#### Inline customization script * For Windows: You can enter PowerShell inline commands, separated by commas. If you want to run a script in your build directory, you can use:
- ```PowerShell
+ ```azurepowershell-interactive
& 'c:\buildArtifacts\webapp\webconfig.ps1' ``` You can reference multiple scripts or add more commands. For example:
- ```PowerShell
+ ```azurepowershell-interactive
& 'c:\buildArtifacts\webapp\webconfig.ps1' & 'c:\buildArtifacts\webapp\installAgent.ps1' ```+ * For Linux: The build artifacts are put into the */tmp* directory. However, on many Linux operating systems, on a reboot, the */tmp* directory contents are deleted. If you want the artifacts to exist in the image, you must create another directory and copy them over. For example: ```bash sudo mkdir /lib/buildArtifacts sudo cp -r "/tmp/_ImageBuilding/webapp" /lib/buildArtifacts/. ```
-
+ If you're OK with using the */tmp* directory, you can run the script by using the following code:
-
+ ```bash # Grant execute permissions to run scripts sudo chmod +x "/tmp/_ImageBuilding/webapp/coreConfig.sh" echo "running script" sudo . "/tmp/AppsAndImageBuilderLinux/_WebApp/coreConfig.sh" ```
-
+ #### What happens to the build artifacts after the image build? > [!NOTE] > VM Image Builder doesn't automatically remove the build artifacts. We strongly suggest that you always use code to remove the build artifacts.
->
+>
* For Windows: VM Image Builder deploys files to the *C:\buildArtifacts* directory. Because the directory is persisted, you must remove it by running a script. For example:
- ```PowerShell
+ ```azurepowershell-interactive
# Clean up buildArtifacts directory Remove-Item -Path "C:\buildArtifacts\*" -Force -Recurse # Delete the buildArtifacts directory Remove-Item -Path "C:\buildArtifacts" -Force ```
-
+ * For Linux: The build artifacts are put into the */tmp* directory. However, on many Linux operating systems, the */tmp* directory contents are deleted on reboot. We suggest that you use code to remove the contents and not rely on the operating system to remove the contents. For example: ```bash sudo rm -R "/tmp/AppsAndImageBuilderLinux" ```
-
+ #### Total length of image build Total length can't be changed in the DevOps pipeline task yet. It uses the default of 240 minutes. If you want to increase the [buildTimeoutInMinutes](./image-builder-json.md#properties-buildtimeoutinminutes), you can use an Azure CLI task in the release pipeline. Configure the task to copy a template and submit it. For an example solution, see [Use environment variables and parameters with VM Image Builder](https://github.com/danielsollondon/azvmimagebuilder/tree/master/solutions/4_Using_ENV_Variables#using-environment-variables-and-parameters-with-image-builder), or use Azure PowerShell. - #### Storage account Select the storage account you created in the prerequisites. If you don't see it in the list, VM Image Builder doesn't have permissions to it.
The following three distribute types are supported.
* Resource ID:
- ```bash
+ ```azurecli-interactive
/subscriptions/<subscriptionID>/resourceGroups/<rgName>/providers/Microsoft.Compute/images/<imageName> ```
The following three distribute types are supported.
The Compute Gallery must already exist.
-* Resource ID:
+* Resource ID:
- ```bash
+ ```azurecli-interactive
/subscriptions/<subscriptionID>/resourceGroups/<rgName>/providers/Microsoft.Compute/galleries/<galleryName>/images/<imageDefName> ```
The task uses the properties that are passed to the task to create the VM Image
Example output:
-```text
+```output
start reading task parameters... found build at: /home/vsts/work/r1/a/_ImageBuilding/webapp end reading parameters
starting put template...
When the image build starts, the run status is reported in the release logs:
-```text
+```output
starting run template... ``` When the image build finishes, the output is similar to following text:
-```text
+```output
2019-05-06T12:49:52.0558229Z starting run template... 2019-05-06T13:36:33.8863094Z run template: Succeeded 2019-05-06T13:36:33.8867768Z getting runOutput for SharedImage_distribute
You can take the `$(imageUri)` Azure DevOps Services (formerly Visual Studio Tea
## Output DevOps variables
-Here are the publisher, offer, SKU, and version of the source marketplace image:
+Here are the publisher, offer, SKU, and version of the source marketplace image:
* `$(pirPublisher)` * `$(pirOffer)`
template name: t_1556938436xxx
The VM Image Builder template resource artifact is in the resource group that was specified initially in the task. When you're done troubleshooting, delete the artifact. If you're deleting it by using the Azure portal, within the resource group, select **Show Hidden Types** to view the artifact. - ## Next steps
-For more information, see [VM Image Builder overview](../image-builder-overview.md).
+For more information, see [VM Image Builder overview](../image-builder-overview.md).
virtual-machines Image Builder Gallery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/image-builder-gallery.md
description: Learn how to use the Azure Image Builder, and the Azure CLI, to cre
Previously updated : 03/02/2021 Last updated : 04/11/2023
This article shows you how you can use the Azure Image Builder, and the Azure CLI, to create an image version in an [Azure Compute Gallery](../shared-image-galleries.md) (formerly known as Shared Image Gallery), then distribute the image globally. You can also do this using [Azure PowerShell](../windows/image-builder-gallery.md). - We will be using a sample .json template to configure the image. The .json file we are using is here: [helloImageTemplateforSIG.json](https://github.com/azure/azvmimagebuilder/blob/master/quickquickstarts/1_Creating_a_Custom_Linux_Shared_Image_Gallery_Image/helloImageTemplateforSIG.json). To distribute the image to an Azure Compute Gallery, the template uses [sharedImage](image-builder-json.md#distribute-sharedimage) as the value for the `distribute` section of the template. - ## Register the features To use Azure Image Builder, you need to register the feature.
az provider register -n Microsoft.Storage
az provider register -n Microsoft.Network ```
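To confirm the registrations finished, a quick check (a sketch, not part of the original excerpt) can be run per provider:

```azurecli-interactive
# Sketch: verify a resource provider has finished registering before continuing
az provider show -n Microsoft.VirtualMachineImages --query registrationState --output tsv
```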
-## Set variables and permissions
+## Set variables and permissions
We will be using some pieces of information repeatedly, so we will create some variables to store that information.
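As a minimal sketch of that variable block (the names and values here are illustrative assumptions, not the published walkthrough values):

```azurecli-interactive
# Sketch: illustrative variables reused by the later commands (values are assumptions)
sigResourceGroup=ibLinuxGalleryRG   # resource group for the gallery and image template
location=westus2                    # region where VM Image Builder runs
sigName=myIbGallery                 # Azure Compute Gallery name
imageDefName=myIbImageDef           # image definition name
runOutputName=aibLinuxSIG           # distribution run output name
subscriptionID=$(az account show --query id --output tsv)
```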
az group create -n $sigResourceGroup -l $location
``` ## Create a user-assigned identity and set permissions on the resource group+ Image Builder will use the [user-identity](../../active-directory/managed-identities-azure-resources/qs-configure-cli-windows-vm.md#user-assigned-managed-identity) provided to inject the image into the Azure Compute Gallery. In this example, you will create an Azure role definition that has the granular actions to perform distributing the image to the gallery. The role definition will then be assigned to the user-identity.
-```bash
+```azurecli-interactive
# create user assigned identity for image builder to access the storage account where the script is located identityName=aibBuiUserId$(date +'%s') az identity create -g $sigResourceGroup -n $identityName
az role assignment create \
--scope /subscriptions/$subscriptionID/resourceGroups/$sigResourceGroup ``` - ## Create an image definition and gallery To use Image Builder with an Azure Compute Gallery, you need to have an existing gallery and image definition. Image Builder will not create the gallery and image definition for you.
az sig image-definition create \
--os-type Linux ``` - ## Download and configure the .json Download the .json template and configure it with your variables.
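As a hedged sketch of that step (verify the raw URL and the exact placeholder tokens against the linked template), the download and substitution might look like this:

```azurecli-interactive
# Sketch: download the sample template and substitute placeholders (token names are assumptions; check the template)
curl -o helloImageTemplateforSIG.json https://raw.githubusercontent.com/azure/azvmimagebuilder/master/quickquickstarts/1_Creating_a_Custom_Linux_Shared_Image_Gallery_Image/helloImageTemplateforSIG.json

sed -i -e "s%<subscriptionID>%$subscriptionID%g" helloImageTemplateforSIG.json
sed -i -e "s%<rgName>%$sigResourceGroup%g" helloImageTemplateforSIG.json
# ...repeat the same pattern for the remaining placeholders (gallery name, image definition, regions, run output, identity)
```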
az resource invoke-action \
Creating the image and replicating it to both regions can take a while. Wait until this part is finished before moving on to creating a VM. - ## Create the VM Create a VM from the image version that was created by Azure Image Builder.
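As a rough sketch of that step (the VM name and admin user name here are assumptions), a VM could be created from the latest image version with:

```azurecli-interactive
# Sketch: create a VM from the latest image version in the gallery (resource names are assumptions)
az vm create \
  --resource-group $sigResourceGroup \
  --name myAibGalleryVM \
  --admin-username aibuser \
  --generate-ssh-keys \
  --image "/subscriptions/$subscriptionID/resourceGroups/$sigResourceGroup/providers/Microsoft.Compute/galleries/$sigName/images/$imageDefName/versions/latest"
```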
You should see the image was customized with a *Message of the Day* as soon as y
If you now want to try re-customizing the image version to create a new version of the same image, skip the next steps and go on to [Use Azure Image Builder to create another image version](image-builder-gallery-update-image-version.md). - This will delete the image that was created, along with all of the other resource files. Make sure you are finished with this deployment before deleting the resources. When deleting gallery resources, you need to delete all of the image versions before you can delete the image definition used to create them. To delete a gallery, you first need to have deleted all of the image definitions in the gallery.
az resource delete \
``` Delete permissions assignments, roles and identity+ ```azurecli-interactive az role assignment delete \ --assignee $imgBuilderCliId \
az sig image-version delete \
--gallery-name $sigName \ --gallery-image-definition $imageDefName \ --subscription $subscriptionID
-```
-
+```
Delete the image definition.
virtual-machines Image Builder Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/image-builder-troubleshoot.md
description: This article helps you troubleshoot common problems and errors you
Previously updated : 02/10/2023 Last updated : 04/12/2023
Use this article to troubleshoot and resolve common issues that you might encoun
## Prerequisites When you're creating a build, do the following:
-
+ - The VM Image Builder service communicates to the build VM by using WinRM or Secure Shell (SSH). Do *not* disable these settings as part of the build. - VM Image Builder creates resources as part of the build. Be sure to verify that Azure Policy doesn't prevent VM Image Builder from creating or using necessary resources. - Create an IT_ resource group. - Create a storage account without a firewall. - Verify that Azure Policy doesn't install unintended features on the build VM, such as Azure Extensions.-- Ensure that VM Image Builder has the correct permissions to read/write images and to connect to the storage account. For more information, review the permissions documentation for the [Azure CLI](./image-builder-permissions-cli.md) or [Azure PowerShell](./image-builder-permissions-powershell.md).
+- Ensure that VM Image Builder has the correct permissions to read/write images and to connect to the storage account. For more information, review the permissions documentation for the [Azure CLI](./image-builder-permissions-cli.md) or [Azure PowerShell](./image-builder-permissions-powershell.md).
- VM Image Builder will fail the build if the scripts or inline commands fail with errors (non-zero exit codes). Ensure that you've tested the custom scripts and verified that they run without error (exit code 0) or require user input. For more information, see [Create an Azure Virtual Desktop image by using VM Image Builder and PowerShell](../windows/image-builder-virtual-desktop.md#tips-for-building-windows-images). VM Image Builder failures can happen in two areas:+ - During image template submission - During image building
VM Image Builder failures can happen in two areas:
Image template submission errors are returned at submission only. There isn't an error log for image template submission errors. If there's an error during submission, you can return the error by checking the status of the template, specifically by reviewing `ProvisioningState` and `ProvisioningErrorMessage`/`provisioningError`.
-```azurecli
+```azurecli-interactive
az image builder show --name $imageTemplateName --resource-group $imageResourceGroup ``` ```azurepowershell-interactive Get-AzImageBuilderTemplate -ImageTemplateName <imageTemplateName> -ResourceGroupName <imageTemplateResourceGroup> | Select-Object ProvisioningState, ProvisioningErrorMessage ```+ > [!NOTE] > For PowerShell, you'll need to install the [VM Image Builder PowerShell modules](../windows/image-builder-powershell.md#prerequisites). > [!IMPORTANT] > API version 2021-10-01 introduces a change to the error schema that will be part of every future API release. If you have any Azure VM Image Builder automations, be aware of the [new error output](#error-output-for-version-2021-10-01-and-later) when you switch to API version 2021-10-01 or later. We recommend, after you've switched to the latest API version, that you don't revert to an earlier version, because you'll have to change your automation again to produce the earlier error schema. We don't anticipate that we'll change the error schema again in future releases.
-##### **Error output for version 2020-02-14 and earlier**
+### **Error output for version 2020-02-14 and earlier**
-```
+```output
{ "code": "ValidationFailed", "message": "Validation failed: 'ImageTemplate.properties.source': Field 'imageId' has a bad value: '/subscriptions/subscriptionID/resourceGroups/resourceGroupName/providers/Microsoft.Compute/images/imageName'. Please review http://aka.ms/azvmimagebuildertmplref for details on fields requirements in the Image Builder Template." } ```
-##### **Error output for version 2021-10-01 and later**
+### **Error output for version 2021-10-01 and later**
-```
+```output
{ "error": { "code": "ValidationFailed",
The following sections present problem resolution guidance for common image temp
#### Error
-```text
+```output
'Conflict'. Details: Update/Upgrade of image templates is currently not supported ```
If you submit an image configuration template and the submission fails, a failed
#### Error
-```text
+```output
The assigned managed identity cannot be used. Please remove the existing one and re-assign a new identity. For more troubleshooting steps go to https://aka.ms/azvmimagebuilderts. ``` #### Cause
-There are cases where [Managed Service Identities (MSI)](./image-builder-permissions-cli.md#create-a-user-assigned-managed-identity) assigned to the image template cannot be used:
+There are cases where [Managed Service Identities (MSI)](./image-builder-permissions-cli.md#create-a-user-assigned-managed-identity) assigned to the image template cannot be used:
1. The Image Builder template uses a customer-provided staging resource group and the MSI is deleted before the image template is deleted ([staging resource group](./image-builder-json.md#properties-stagingresourcegroup) scenario) 1. The identity is deleted and then recreated with the same name, but the MSI isn't reassigned to the template. Though the resource IDs are the same, the underlying service principal has changed. - #### Solution Use Azure CLI to reset the identity on the image template. Ensure you [update](/cli/azure/update-azure-cli) Azure CLI to version 2.45.0 or later.
az image builder identity assign -g <template rg> -n <template name> --user-assi
#### Error
-```text
+```output
Microsoft.VirtualMachineImages/imageTemplates 'helloImageTemplateforSIG01' failed with message '{ "status": "Failed", "error": {
Microsoft.VirtualMachineImages/imageTemplates 'helloImageTemplateforSIG01' faile
"code": "InternalOperationError", "message": "Internal error occurred." ```+ #### Cause In most cases, the resource deployment failure error occurs because of missing permissions. This error may also be caused by a conflict with the staging resource group.
In most cases, the resource deployment failure error occurs because of missing p
#### Solution Depending on your scenario, VM Image Builder might need permissions to:+ - The source image or Azure Compute Gallery (formerly Shared Image Gallery) resource group. - The distribution image or Azure Compute Gallery resource. - The storage account, container, or blob that the `File` customizer is accessing.
For more information about configuring permissions, see [Configure VM Image Buil
#### Error
-```text
+```output
Build (Managed Image) step failed: Error getting Managed Image '/subscriptions/.../providers/Microsoft.Compute/images/mymanagedmg1': Error getting managed image (...): compute. ImagesClient#Get: Failure responding to request: StatusCode=403 -- Original Error: autorest/azure: Service returned an error. Status=403 Code="AuthorizationFailed" Message="The client '......' with object id '......' doesn't have authorization to perform action 'Microsoft.Compute/images/read' over scope ```+ #### Cause Missing permissions.
Missing permissions.
#### Solution Depending on your scenario, VM Image Builder might need permissions to:+ - The source image or Azure Compute Gallery resource group. - The distribution image or Azure Compute Gallery resource.-- The storage account, container, or blob that the `File` customizer is accessing.
+- The storage account, container, or blob that the `File` customizer is accessing.
For more information about configuring permissions, see [Configure VM Image Builder permissions by using the Azure CLI](image-builder-permissions-cli.md) or [Configure VM Image Builder permissions by using PowerShell](image-builder-permissions-powershell.md).
For more information about configuring permissions, see [Configure VM Image Buil
#### Error
-```text
+```output
Build (Shared Image Version) step failed for Image Version '/subscriptions/.../providers/Microsoft.Compute/galleries/.../images/... /versions/0.23768.4001': Error getting Image Version '/subscriptions/.../resourceGroups/<rgName>/providers/Microsoft.Compute/galleries/.../images/.../versions/0.23768.4001': Error getting image version '... :0.23768.4001': compute.GalleryImageVersionsClient#Get: Failure responding to request: StatusCode=404 -- Original Error: autorest/azure: Service returned an error. Status=404 Code="ResourceNotFound" Message="The Resource 'Microsoft.Compute/galleries/.../images/.../versions/0.23768.4001' under resource group '<rgName>' was not found." ```
Ensure that the source image is correct and exists in the location of VM Image B
#### Error
-```text
+```output
Downloading external file (<myFile>) to local file (xxxxx.0.customizer.fp) [attempt 1 of 10] failed: Error downloading '<myFile>' to 'xxxxx.0.customizer.fp'.. ```
The Azure Image Builder build fails with an authorization error that looks like
#### Error
-```text
+```output
Attempting to deploy created Image template in Azure fails with an 'The client '6df325020-fe22-4e39-bd69-10873965ac04' with object id '6df325020-fe22-4e39-bd69-10873965ac04' does not have authorization to perform action 'Microsoft.Compute/disks/write' over scope '/subscriptions/<subscriptionID>/resourceGroups/<resourceGroupName>/providers/Microsoft.Compute/disks/proxyVmDiskWin_<timestamp>' or the scope is invalid. If access was recently granted, please refresh your credentials.' ```+ #### Cause This error is caused when trying to specify a pre-existing resource group and VNet to the Azure Image Builder service with a Windows source image.
This error is caused when trying to specify a pre-existing resource group and VN
You will need to assign the contributor role to the resource group for the service principal corresponding to Azure Image Builder's first party app by using the CLI command or portal instructions below. First, validate that the service principal is associated with Azure Image Builder's first party app by using the following CLI command:+ ```azurecli-interactive az ad sp show --id {servicePrincipalName, or objectId} ``` Then, to implement this solution using CLI, use the following command:+ ```azurecli-interactive az role assignment create -g {ResourceGroupName} --assignee {AibrpSpOid} --role Contributor ```
For [Step 1: Identify the needed scope](../../role-based-access-control/role-ass
For [Step 3: Select the appropriate role](../../role-based-access-control/role-assignments-portal.md#step-3-select-the-appropriate-role): The role is Contributor.
-For [Step 4: Select who needs access](../../role-based-access-control/role-assignments-portal.md#step-4-select-who-needs-access): Select member “Azure Virtual Machine Image Builder”
+For [Step 4: Select who needs access](../../role-based-access-control/role-assignments-portal.md#step-4-select-who-needs-access): Select member “Azure Virtual Machine Image Builder”
Then proceed to [Step 6: Assign role](../../role-based-access-control/role-assignments-portal.md#step-6-assign-role) to assign the role.
Then proceed to [Step 6: Assign role](../../role-based-access-control/role-assig
For image build failures, get the error from the `lastrunstatus`, and then review the details in the *customization.log* file. -
-```azurecli
+```azurecli-interactive
az image builder show --name $imageTemplateName --resource-group $imageResourceGroup ```
When the image build is running, logs are created and stored in a storage accoun
The storage account name uses the pattern IT_\<ImageResourceGroupName\>_\<TemplateName\>_\<GUID\> (for example, *IT_aibmdi_helloImageTemplateLinux01*).
-To view the *customization.log* file in the resource group, select **Storage Account** > **Blobs** > `packerlogs`, select **directory**, and then select the *customization.log* file.
-
+To view the `customization.log` file in the resource group, select **Storage Account** > **Blobs** > `packerlogs`, select **directory**, and then select the `customization.log` file.
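If you prefer the command line to the portal, a hedged sketch like the following could fetch the log with the Azure CLI; the storage account name and blob path are placeholders you'd look up first:

```azurecli-interactive
# Sketch: list the blobs in the packerlogs container, then download the customization log (names are placeholders)
az storage blob list --account-name <itStorageAccountName> --container-name packerlogs --query "[].name" --output tsv --auth-mode login
az storage blob download --account-name <itStorageAccountName> --container-name packerlogs --name <directory>/customization.log --file customization.log --auth-mode login
```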
### Understand the customization log The log is verbose. It covers the image build, including any issues with the image distribution, such as Azure Compute Gallery replication. These errors are surfaced in the error message of the image template status.
-The *customization.log* file includes the following stages:
+The `customization.log` file includes the following stages:
1. *Deploy the build VM and dependencies by using ARM templates to the IT_ staging resource group* stage. This stage includes multiple POSTs to the VM Image Builder resource provider:
- ```text
+ ```output
Azure request method="POST" request="https://management.azure.com/subscriptions/<subID>/resourceGroups/IT_aibImageRG200_window2019VnetTemplate01_dec33089-1cc3-cccc-cccc-ccccccc/providers/Microsoft.Storage/storageAccounts .. PACKER OUT ==> azure-arm: Deploying deployment template ...
The *customization.log* file includes the following stages:
1. *Status of the deployments* stage. This stage includes the status of each resource deployment:
- ```text
+ ```output
PACKER ERR 2020/04/30 23:28:50 packer: 2020/04/30 23:28:50 Azure request method="GET" request="https://management.azure.com/subscriptions/<subID>/resourcegroups/IT_aibImageRG200_window2019VnetTemplate01_dec33089-1cc3-4505-ae28-6661e43fac48/providers/Microsoft.Resources/deployments/pkrdp51lc0339jg/operationStatuses/08586133176207523519?[REDACTED]" body="" ```
The *customization.log* file includes the following stages:
In Windows, VM Image Builder connects by using WinRM:
- ```text
+ ```output
PACKER ERR 2020/04/30 23:30:50 packer: 2020/04/30 23:30:50 Waiting for WinRM, up to timeout: 10m0s .. PACKER OUT azure-arm: WinRM connected.
The *customization.log* file includes the following stages:
In Linux, VM Image Builder connects by using SSH:
- ```text
+ ```output
PACKER OUT ==> azure-arm: Waiting for SSH to become available... PACKER ERR 2019/12/10 17:20:51 packer: 2020/04/10 17:20:51 [INFO] Waiting for SSH, up to timeout: 20m0s PACKER OUT ==> azure-arm: Connected to SSH!
The *customization.log* file includes the following stages:
(telemetry) Finalizing. - This means the build has finished ```+ 1. *Deprovision* stage. VM Image Builder adds a hidden customizer. This deprovision step is responsible for preparing the VM for deprovisioning. In Windows, it runs `Sysprep` (by using *c:\DeprovisioningScript.ps1*). In Linux, it runs `waagent -deprovision` (by using /tmp/DeprovisioningScript.sh). For example:+ ```text PACKER ERR 2020/03/04 23:05:04 [INFO] (telemetry) Starting provisioner powershell PACKER ERR 2020/03/04 23:05:04 packer: 2020/03/04 23:05:04 Found command: if( TEST-PATH c:\DeprovisioningScript.ps1 ){cat c:\DeprovisioningScript.ps1} else {echo "Deprovisioning script [c:\DeprovisioningScript.ps1] could not be found. Image build may fail or the VM created from the Image may not boot. Please make sure the deprovisioning script is not accidentally deleted by a Customizer in the Template."}
The *customization.log* file includes the following stages:
... PACKER ERR ==> azure-arm: The resource group was not created by Packer, not deleting ... ```+ ## Tips for troubleshooting script or inline customization+ - Test the code before you supply it to VM Image Builder. - Ensure that Azure Policy and Firewall allow connectivity to remote resources. - Output comments to the console by using `Write-Host` or `echo`. Doing so lets you search the *customization.log* file.
Customization failure.
Review the log to locate customizer failures. Search for *(telemetry)*. For example:+ ```text (telemetry) Starting provisioner windows-update (telemetry) ending windows-update
The build exceeded the build time-out. This error is seen in the 'lastrunstatus'
#### Solution
-1. Review the *customization.log* file. Identify the last customizer to run. Search for *(telemetry)*, starting from the bottom of the log.
+1. Review the *customization.log* file. Identify the last customizer to run. Search for *(telemetry)*, starting from the bottom of the log.
1. Check script customizations. The customizations might not be suppressing user interaction for commands, such as `quiet` options. For example, `apt-get install` without the `-y` option results in the script execution waiting for user interaction.
The build exceeded the build time-out. This error is seen in the 'lastrunstatus'
### Long file download time #### Error+ ```text [086cf9c4-0457-4e8f-bfd4-908cfe3fe43c] PACKER OUT myBigFile.zip 826 B / 826000 B 1.00%
hours later...
myBigFile.zip 826000 B / 826000 B 100.00% [086cf9c4-0457-4e8f-bfd4-908cfe3fe43c] PACKER OUT ```+ #### Cause `File` customizer is downloading a large file.
Deployment failed. Correlation ID: XXXXXX-XXXX-XXXXXX-XXXX-XXXXXX. Failed in dis
VM Image Builder timed out waiting for the image to be added and replicated to Azure Compute Gallery. If the image is being injected into the gallery, you can assume that the image build was successful. However, the overall process failed because VM Image Builder was waiting on Azure Compute Gallery to complete the replication. Even though the build has failed, the replication continues. You can get the properties of the image version by checking the distribution *runOutput*.
-```azurecli
+```azurecli-interactive
$runOutputName=<distributionRunOutput> az resource show \ --ids "/subscriptions/$subscriptionID/resourcegroups/$imageResourceGroup/providers/Microsoft.VirtualMachineImages/imageTemplates/$imageTemplateName/runOutputs/$runOutputName" \
az resource show \
#### Solution Increase the value of `buildTimeoutInMinutes`.
-
+ ### Low Windows resource information events #### Error
Increase the value of `buildTimeoutInMinutes`.
[45f485cf-5a8c-4379-9937-8d85493bc791] PACKER OUT Build 'azure-arm' errored: unexpected EOF [45f485cf-5a8c-4379-9937-8d85493bc791] PACKER OUT ```+ #### Cause Resource exhaustion. This issue is commonly seen with Windows Update running with the default build VM size D1_V2.
Increase the build VM size.
[a170b40d-2d77-4ac3-8719-72cdc35cf889] PACKER ERR 2020/04/30 22:29:24 waiting for all plugin processes to complete... Done exporting Packer logs to Azure for Packer prefix: [a170b40d-2d77-4ac3-8719-72cdc35cf889] PACKER OUT ```+ #### Cause The build timed out while it was waiting for the required Azure resources to be created.
Rerun the build to try again.
#### Error
-```text
+```output
"provisioningState": "Succeeded", "lastRunStatus": { "startTime": "2020-05-01T00:13:52.599326198Z",
Rerun the build to try again.
"message": "network.InterfacesClient#UpdateTags: Failure responding to request: StatusCode=404 -- Original Error: autorest/azure: Service returned an error. Status=404 Code=\"ResourceNotFound\" Message=\"The Resource 'Microsoft.Network/networkInterfaces/aibpls7lz2e.nic.4609d697-be0a-4cb0-86af-49b6fe877fe1' under resource group 'IT_aibImageRG200_window2019VnetTemplate01_9988723b-af56-413a-9006-84130af0e9df' was not found.\"" }, ```+ #### Cause Missing permissions.
For more information about configuring permissions, see [Configure VM Image Buil
[922bdf36-b53c-4e78-9cd8-6b70b9674685] PACKER OUT ==> azure-arm: The resource group was not created by Packer, deleting individual resources ... [922bdf36-b53c-4e78-9cd8-6b70b9674685] PACKER ERR ==> azure-arm: The resource group was not created by Packer, deleting individual resources ... ```+ #### Cause
-The cause might be a timing issue because of the D1_V2 VM size. If customizations are limited and are run in less than three seconds, `Sysprep` commands are run by VM Image Builder to deprovision. When VM Image Builder deprovisions, the `Sysprep` command checks for the *WindowsAzureGuestAgent*, which might not be fully installed and might cause the timing issue.
+The cause might be a timing issue because of the D1_V2 VM size. If customizations are limited and are run in less than three seconds, `Sysprep` commands are run by VM Image Builder to deprovision. When VM Image Builder deprovisions, the `Sysprep` command checks for the *WindowsAzureGuestAgent*, which might not be fully installed and might cause the timing issue.
#### Solution
To avoid the timing issue, you can increase the VM size or you can add a 60-seco
### The build is canceled after the context cancelation context is canceled #### Error+ ```text PACKER ERR 2020/03/26 22:11:23 Cancelling builder after context cancellation context canceled PACKER OUT Cancelling build after receiving terminated
PACKER ERR 2020/03/26 22:11:25 [INFO] 0 bytes written for 'stderr'
PACKER ERR 2020/03/26 22:11:25 [INFO] RPC client: Communicator ended with: 2300218 PACKER ERR 2020/03/26 22:11:25 [INFO] RPC endpoint: Communicator ended with: 2300218 ```+ #### Cause VM Image Builder uses port 22 (Linux) or 5986 (Windows) to connect to the build VM. This occurs when the service is disconnected from the build VM during an image build. The reasons for the disconnection can vary, but enabling or configuring a firewall in the script can block the previously mentioned ports. #### Solution+ Review your scripts for firewall changes or enablement, or changes to SSH or WinRM, and ensure that any changes allow for constant connectivity between the service and the build VM on the previously mentioned ports. For more information, see [VM Image Builder networking options](./image-builder-networking.md). ### JWT errors in log early in the build #### Error+ Early in the build process, the build fails and the log indicates a JSON Web Token (JWT) error: ```text
PACKER OUT 1 error(s) occurred:
The `buildTimeoutInMinutes` value in the template is set to from 1 to 5 minutes. #### Solution+ As described in [Create an VM Image Builder template](./image-builder-json.md), the time-out must be set to 0 to use the default or set to more than 5 minutes to override the default. Change the time-out in your template to 0 to use the default or to a minimum of 6 minutes. ### Resource deletion errors #### Error+ Intermediate resources are cleaned up toward the end of the build, and the customization log might show several resource deletion errors: ```text
PACKER ERR 2022/03/07 18:43:06 packer-plugin-azure plugin: 2022/03/07 18:43:06 R
``` #### Cause+ These error log messages are mostly harmless, because resource deletions are retried several times and, ordinarily, they eventually succeed. You can verify this by continuing to follow the deletion logs until you observe a success message. Alternatively, you can inspect the staging resource group to confirm whether the resource has been deleted. Making these observations is especially important in build failures, where these error messages might lead you to conclude that they're the reason for the failures, even when the actual errors might be elsewhere.
-## DevOps tasks
+## DevOps tasks
### Troubleshoot the task
-The task fails only if an error occurs during customization. When this happens, the task reports the failure and leaves the staging resource group, with the logs, so that you can identify the issue.
+
+The task fails only if an error occurs during customization. When this happens, the task reports the failure and leaves the staging resource group, with the logs, so that you can identify the issue.
To locate the log, you need to know the template name. Go to **pipeline** > **failed build**, and then drill down into the VM Image Builder DevOps task.
created archive /home/vsts/work/_temp/temp_web_package_21475337782320203.zip
Source for image: { type: 'SharedImageVersion', imageVersionId: '/subscriptions/<subscriptionID>/resourceGroups/<rgName>/providers/Microsoft.Compute/galleries/<galleryName>/images/<imageDefName>/versions/<imgVersionNumber>' } template name: t_1556938436xxx
-```
+```
1. Go to the Azure portal, search for the template name in the resource group, and then search for the resource group by typing **IT_***. 1. Select the storage account name > **blobs** > **containers** > **logs**.
template name: t_1556938436xxx
### Troubleshoot successful builds You might occasionally need to investigate successful builds and review their logs. As mentioned earlier, if the image build is successful, the staging resource group that contains the logs will be deleted as part of the cleanup. To prevent an automatic cleanup, though, you can introduce a `sleep` after the inline command, and then view the logs as the build is paused. To do so, do the following:
-
+ 1. Update the inline command by adding **Write-Host / Echo "Sleep"**. This gives you time to search in the log. 1. Add a `sleep` value of at least 10 minutes by using a [Start-Sleep](/powershell/module/microsoft.powershell.utility/start-sleep) or `Sleep` Linux command. 1. Use this method to identify the log location, and then keep downloading or checking the log until it gets to `sleep`. - ### Operation was canceled #### Error
You might occasionally need to investigate successful builds and review their l
2020-05-05T19:33:14.3923479Z ##[error]The operation was canceled. 2020-05-05T19:33:14.3939721Z ##[section]Finishing: Azure VM Image Builder Task ```+ #### Cause If the build wasn't canceled by a user, it was canceled by Azure DevOps User Agent. Most likely, the 1-hour time-out has occurred because of Azure DevOps capabilities. If you're using a private project and agent, you get 60 minutes of build time. If the build exceeds the time-out, DevOps cancels the running task. For more information about Azure DevOps capabilities and limitations, see [Microsoft-hosted agents](/azure/devops/pipelines/agents/hosted#capabilities-and-limitations).
-
+ #### Solution You can host your own DevOps agents or look to reduce the time of your build. For example, if you're distributing to Azure Compute Gallery, you can replicate them to one region or replicate them asynchronously.
Please wait for the Windows Modules Installer
1. In the image build, check to ensure that:
- * There are no outstanding reboots required by adding a Windows Restart customizer as the last customization.
- * All software installation is complete.
-
+ - There are no outstanding reboots required. You can ensure this by adding a Windows Restart customizer as the last customization.
+ - All software installation is complete.
+ 1. Add the [/mode:vm](/windows-hardware/manufacture/desktop/sysprep-command-line-options) option to the default `Sysprep` that VM Image Builder uses. For more information, go to the ["Override the commands"](#override-the-commands) section under "VMs created from VM Image Builder images aren't created successfully."
-
## VMs created from VM Image Builder images aren't created successfully By default, VM Image Builder runs *deprovision* code at the end of each image customization phase to *generalize* the image. To generalize an image is to set it up to reuse to create multiple VMs. As part of the process, you can pass in VM settings, such as hostname, username, and so on. In Windows, VM Image Builder runs `Sysprep`, and in Linux, VM Image Builder runs `waagent -deprovision`.
In Windows, VM Image Builder uses a generic `Sysprep` command. However, this com
If you're migrating an existing customization and you're using various `Sysprep` or `waagent` commands, you can try the VM Image Builder generic commands. If the VM creation fails, use your previous `Sysprep` or `waagent` commands. Let's suppose you've used VM Image Builder successfully to create a Windows custom image, but you've failed to create a VM successfully from the image. For example, the VM creation fails to finish or it times out. In this event, do either of the following:+ * Review the Windows Server Sysprep documentation. * Raise a support request with the Windows Server Sysprep Customer Services Support team. They can help troubleshoot your issue and advise you on the correct `Sysprep` command.
c:\DeprovisioningScript.ps1
``` In Linux:+ ```bash /tmp/DeprovisioningScript.sh ``` ### The `Sysprep` command: Windows
-```PowerShell
+```azurepowershell-interactive
Write-Output '>>> Waiting for GA Service (RdAgent) to start ...' while ((Get-Service RdAgent).Status -ne 'Running') { Start-Sleep -s 5 } Write-Output '>>> Waiting for GA Service (WindowsAzureTelemetryService) to start ...'
Write-Output '>>> Sysprep complete ...'
### The `-deprovision` command: Linux ```bash
-/usr/sbin/waagent -force -deprovision+user && export HISTSIZE=0 && sync
+sudo /usr/sbin/waagent -force -deprovision+user && export HISTSIZE=0 && sync
``` ### Override the commands
Write-Output '>>> Sysprep complete ...'
To override the commands, use the PowerShell or shell script provisioners to create the command files with the exact file name and put them in the previously listed directories. VM Image Builder reads these commands and writes output to the *customization.log* file. ## Get support+ If you've referred to the guidance and are still having problems, you can open a support case. Be sure to select the correct product and support topic. Doing so will ensure that you're connected with the Azure VM Image Builder support team. Selecting the case product:
-```bash
+
+```text
Product Family: Azure Product: Virtual Machine Running (Window\Linux) Support Topic: Azure Features
virtual-machines Mac Create Ssh Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/mac-create-ssh-keys.md
Previously updated : 01/19/2023 Last updated : 04/11/2023
ssh-keygen -m PEM -t rsa -b 4096
> [!NOTE] > You can also create key pairs with the [Azure CLI](/cli/azure) with the [az sshkey create](/cli/azure/sshkey#az-sshkey-create) command, as described in [Generate and store SSH keys](../ssh-keys-azure-cli.md).
-If you use the [Azure CLI](/cli/azure) to create your VM with the [az vm create](/cli/azure/vm#az-vm-create) command, you can optionally generate SSH public and private key files using the `--generate-ssh-keys` option. The key files are stored in the ~/.ssh directory unless specified otherwise with the `--ssh-dest-key-path` option. If an ssh key pair already exists and the `--generate-ssh-keys` option is used, a new key pair won't be generated but instead the existing key pair will be used. In the following command, replace *VMname* and *RGname* with your own values:
+If you use the [Azure CLI](/cli/azure) to create your VM with the [az vm create](/cli/azure/vm#az-vm-create) command, you can optionally generate SSH public and private key files using the `--generate-ssh-keys` option. The key files are stored in the ~/.ssh directory unless specified otherwise with the `--ssh-dest-key-path` option. If an ssh key pair already exists and the `--generate-ssh-keys` option is used, a new key pair won't be generated but instead the existing key pair will be used. In the following command, replace *VMname*, *RGname* and *UbuntuLTS* with your own values:
-```azurecli
+```azurecli-interactive
az vm create --name VMname --resource-group RGname --image UbuntuLTS --generate-ssh-keys ```
cat ~/.ssh/id_rsa.pub
A typical public key value looks like this example:
-```
+```output
ssh-rsa AAAAB3NzaC1yc2EAABADAQABAAACAQC1/KanayNr+Q7ogR5mKnGpKWRBQU7F3Jjhn7utdf7Z2iUFykaYx+MInSnT3XdnBRS8KhC0IP8ptbngIaNOWd6zM8hB6UrcRTlTpwk/SuGMw1Vb40xlEFphBkVEUgBolOoANIEXriAMvlDMZsgvnMFiQ12tD/u14cxy1WNEMAftey/vX3Fgp2vEq4zHXEliY/sFZLJUJzcRUI0MOfHXAuCjg/qyqqbIuTDFyfg8k0JTtyGFEMQhbXKcuP2yGx1uw0ice62LRzr8w0mszftXyMik1PnshRXbmE2xgINYg5xo/ra3mq2imwtOKJpfdtFoMiKhJmSNHBSkK7vFTeYgg0v2cQ2+vL38lcIFX4Oh+QCzvNF/AXoDVlQtVtSqfQxRVG79Zqio5p12gHFktlfV7reCBvVIhyxc2LlYUkrq4DHzkxNY5c9OGSHXSle9YsO3F1J5ip18f6gPq4xFmo6dVoJodZm9N0YMKCkZ4k1qJDESsJBk2ujDPmQQeMjJX3FnDXYYB182ZCGQzXfzlPDC29cWVgDZEXNHuYrOLmJTmYtLZ4WkdUhLLlt5XsdoKWqlWpbegyYtGZgeZNRtOOdN6ybOPJqmYFd2qRtb4sYPniGJDOGhx4VodXAjT09omhQJpE6wlZbRWDvKC55R2d/CSPHJscEiuudb+1SG2uA/oik/WQ== username@domainname ```
If you copy and paste the contents of the public key file to use in the Azure po
The public key that you place on your Linux VM in Azure is by default stored in ~/.ssh/id_rsa.pub, unless you specified a different location when you created the key pair. To use the [Azure CLI 2.0](/cli/azure) to create your VM with an existing public key, specify the value and optionally the location of this public key using the [az vm create](/cli/azure/vm#az-vm-create) command with the `--ssh-key-values` option. In the following command, replace *myVM*, *myResourceGroup*, *UbuntuLTS*, *azureuser*, and *mysshkey.pub* with your own values: -
-```azurecli
+```azurecli-interactive
az vm create \ --resource-group myResourceGroup \ --name myVM \
az vm create \
If you want to use multiple SSH keys with your VM, you can enter them in a space-separated list, like this `--ssh-key-values sshkey-desktop.pub sshkey-laptop.pub`. - ## SSH into your VM With the public key deployed on your Azure VM, and the private key on your local system, SSH into your VM using the IP address or DNS name of your VM. In the following command, replace *azureuser* and *myvm.westus.cloudapp.azure.com* with the administrator user name and the fully qualified domain name (or IP address):
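A minimal sketch of that command, using the placeholder values named in the paragraph above:

```bash
ssh azureuser@myvm.westus.cloudapp.azure.com
```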
virtual-machines Scheduled Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/scheduled-events.md
The Scheduled Events service is versioned. Versions are mandatory; the current v
> Previous preview releases of Scheduled Events supported {latest} as the api-version. This format is no longer supported and will be deprecated in the future. ### Enabling and Disabling Scheduled Events
-Scheduled Events is enabled for your service the first time you make a request for events. You should expect a delayed response in your first call of up to two minutes.
+Scheduled Events are enabled for your service the first time you make a request for events. You should expect a delayed response in your first call of up to two minutes. Scheduled Events are disabled for your service if it doesn't make a request for 24 hours.
-Scheduled Events is disabled for your service if it doesn't make a request for 24 hours.
+Scheduled events are disabled by default for [VMSS Guest OS upgrades or reimages](../../virtual-machine-scale-sets/virtual-machine-scale-sets-automatic-upgrade.md). To enable scheduled events for these operations, first enable them using [OSImageNotificationProfile](https://learn.microsoft.com/rest/api/compute/virtual-machine-scale-sets/create-or-update?tabs=HTTP#osimagenotificationprofile).
### User-initiated Maintenance User-initiated VM maintenance via the Azure portal, API, CLI, or PowerShell results in a scheduled event. You then can test the maintenance preparation logic in your application, and your application can prepare for user-initiated maintenance. If you restart a VM, an event with the type `Reboot` is scheduled. If you redeploy a VM, an event with the type `Redeploy` is scheduled. Typically events with a user event source can be immediately approved to avoid a delay on user-initiated actions. We advise having a primary and secondary VM communicating and approving user generated scheduled events in case the primary VM becomes unresponsive. This arrangement will prevent delays in recovering your application back to a good state. -
+
## Use the API ### Headers
In the case where there are scheduled events, the response contains an array of
| - | - | | Document Incarnation | Integer that increases when the events array changes. Documents with the same incarnation contain the same event information, and the incarnation will be incremented when an event changes. | | EventId | Globally unique identifier for this event. <br><br> Example: <br><ul><li>602d9444-d2cd-49c7-8624-8643e7171297 |
-| EventType | Impact this event causes. <br><br> Values: <br><ul><li> `Freeze`: The Virtual Machine is scheduled to pause for a few seconds. CPU and network connectivity may be suspended, but there's no impact on memory or open files.<li>`Reboot`: The Virtual Machine is scheduled for reboot (non-persistent memory is lost). This event is made available on a best effort basis <li>`Redeploy`: The Virtual Machine is scheduled to move to another node (ephemeral disks are lost). <li>`Preempt`: The Spot Virtual Machine is being deleted (ephemeral disks are lost). <li> `Terminate`: The virtual machine is scheduled to be deleted. |
+| EventType | Impact this event causes. <br><br> Values: <br><ul><li> `Freeze`: The Virtual Machine is scheduled to pause for a few seconds. CPU and network connectivity may be suspended, but there's no impact on memory or open files.<li>`Reboot`: The Virtual Machine is scheduled for reboot (non-persistent memory is lost). This event is made available on a best effort basis <li>`Redeploy`: The Virtual Machine is scheduled to move to another node (ephemeral disks are lost). This event is delivered on a best effort basis. <li>`Preempt`: The Spot Virtual Machine is being deleted (ephemeral disks are lost). <li> `Terminate`: The virtual machine is scheduled to be deleted. |
| ResourceType | Type of resource this event affects. <br><br> Values: <ul><li>`VirtualMachine`| | Resources| List of resources this event affects. <br><br> Example: <br><ul><li> ["FrontEnd_IN_0", "BackEnd_IN_0"] | | EventStatus | Status of this event. <br><br> Values: <ul><li>`Scheduled`: This event is scheduled to start after the time specified in the `NotBefore` property.<li>`Started`: This event has started.</ul> No `Completed` or similar status is ever provided. The event is no longer returned when the event is finished.
Each event is scheduled a minimum amount of time in the future based on the even
| Freeze| 15 minutes | | Reboot | 15 minutes | | Redeploy | 10 minutes |
-| Preempt | 30 seconds |
| Terminate | [User Configurable](../../virtual-machine-scale-sets/virtual-machine-scale-sets-terminate-notification.md#enable-terminate-notifications): 5 to 15 minutes | Once an event is scheduled, it will move into the started state after it is either approved or the `NotBefore` time passes. However, in rare cases, the operation will be cancelled by Azure before it starts. In that case, the event will be removed from the Events array and the impact will not occur as previously scheduled.
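As a rough illustration of the polling and approval flow described above (a sketch run from inside the VM; the endpoint and `api-version` are the standard Scheduled Events values, and the `EventId` is the placeholder from the schema table):

```bash
# Query the Scheduled Events endpoint (the Metadata header is required)
curl -s -H "Metadata:true" "http://169.254.169.254/metadata/scheduledevents?api-version=2020-07-01"

# Approve (expedite) an event by posting its EventId back to the same endpoint
curl -s -H "Metadata:true" -X POST \
  -d '{"StartRequests":[{"EventId":"602d9444-d2cd-49c7-8624-8643e7171297"}]}' \
  "http://169.254.169.254/metadata/scheduledevents?api-version=2020-07-01"
```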
virtual-machines M Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/m-series.md
Previously updated : 03/31/2020 Last updated : 04/12/2023
M-series VMs feature Intel&reg; Hyper-Threading Technology.
| Standard_M128 <sup>1</sup> | 128 | 2048 | 14336 | 64 | 250000/1600 (2456) | 250000/4000 | 80000/2000 | 80000/4000 | 8 | 32000 | | Standard_M128m <sup>1</sup> | 128 | 3892 | 14336 | 64 | 250000/1600 (2456) | 250000/4000 | 80000/2000 | 80000/4000 | 8 | 32000 |
-<sup>1</sup> More than 64 vCPU's require one of these supported guest versions: Windows Server 2016, Ubuntu 16.04 LTS, SLES 12 SP2, and Red Hat Enterprise Linux, CentOS 7.3 or Oracle Linux 7.3 with LIS 4.2.1.
+<sup>1</sup> More than 64 vCPUs require one of these supported guest versions: Windows Server 2016, Ubuntu 18.04+ LTS, SLES 12 SP2+, Red Hat Enterprise Linux 7/8/9, CentOS 7.3+ or Oracle Linux 7.3+ with LIS 4.2.1 or higher.
<sup>2</sup> Instance is isolated to hardware dedicated to a single customer.
virtual-machines Migration Classic Resource Manager Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/migration-classic-resource-manager-cli.md
Previously updated : 01/23/2023 Last updated : 04/12/2023
These steps show you how to use CLI commands to migrate infrastructure as a serv
> [!NOTE] > All the operations described here are idempotent. If you have a problem other than an unsupported feature or a configuration error, we recommend that you retry the prepare, abort, or commit operation. The platform will then try the action again.
->
->
+>
+>
<br> Here is a flowchart to identify the order in which steps need to be executed during a migration process
Here is a flowchart to identify the order in which steps need to be executed dur
![Screenshot that shows the migration steps](./media/migration-classic-resource-manager/migration-flow.png) ## Step 1: Prepare for migration+ Here are a few best practices that we recommend as you evaluate migrating IaaS resources from classic to Resource * Read through the [list of unsupported configurations or features](migration-classic-resource-manager-overview.md). If you have virtual machines that use unsupported configurations or features, we recommend that you wait for the feature/configuration support to be announced. Alternatively, you can remove that feature or move out of that configuration to enable migration if it suits your needs.
Here are a few best practices that we recommend as you evaluate migrating IaaS r
> Application Gateways are not currently supported for migration from classic to Resource Manager. To migrate a classic virtual network with an Application gateway, remove the gateway before running a Prepare operation to move the network. After you complete the migration, reconnect the gateway in Azure Resource Manager. > >ExpressRoute gateways connecting to ExpressRoute circuits in another subscription cannot be migrated automatically. In such cases, remove the ExpressRoute gateway, migrate the virtual network and recreate the gateway. Please see [Migrate ExpressRoute circuits and associated virtual networks from the classic to the Resource Manager deployment model](../expressroute/expressroute-migration-classic-resource-manager.md) for more information.
->
->
+>
+>
## Step 2: Set your subscription and register the provider For migration scenarios, you need to set up your environment for both classic and Resource Manager. [Install the Azure classic CLI](/cli/azure/install-classic-cli) and [select your subscription](/cli/azure/authenticate-azure-cli). Sign-in to your account.
-```azurecli
+```azurecli-interactive
azure login ``` Select the Azure subscription by using the following command.
-```azurecli
+```azurecli-interactive
azure account set "<azure-subscription-name>" ```
azure account set "<azure-subscription-name>"
Register with the migration resource provider by using the following command. Note that in some cases, this command times out. However, the registration will be successful.
-```azurecli
+```azurecli-interactive
azure provider register Microsoft.ClassicInfrastructureMigrate ``` Please wait five minutes for the registration to finish. You can check the status of the approval by using the following command. Make sure that RegistrationState is `Registered` before you proceed.
-```azurecli
+```azurecli-interactive
azure provider show Microsoft.ClassicInfrastructureMigrate ``` Now switch CLI to the `asm` mode.
-```azurecli
+```azurecli-interactive
azure config mode asm ``` ## Step 3: Make sure you have enough Azure Resource Manager Virtual Machine vCPUs in the Azure region of your current deployment or VNET For this step you'll need to switch to `arm` mode. Do this with the following command.
-```azurecli
+```azurecli-interactive
azure config mode arm ``` You can use the following CLI command to check the current number of vCPUs you have in Azure Resource Manager. To learn more about vCPU quotas, see [Limits and the Azure Resource Manager](../azure-resource-manager/management/azure-subscription-service-limits.md#managing-limits).
-```azurecli
+```azurecli-interactive
azure vm list-usage -l "<Your VNET or Deployment's Azure region>" ``` Once you're done verifying this step, you can switch back to `asm` mode.
-```azurecli
+```azurecli-interactive
azure config mode asm ``` ## Step 4: Option 1 - Migrate virtual machines in a cloud service+ Get the list of cloud services by using the following command, and then pick the cloud service that you want to migrate. Note that if the VMs in the cloud service are in a virtual network or if they have web/worker roles, you will get an error message.
-```azurecli
+```azurecli-interactive
azure service list ``` Run the following command to get the deployment name for the cloud service from the verbose output. In most cases, the deployment name is the same as the cloud service name.
-```azurecli
+```azurecli-interactive
azure service show <serviceName> -vv ``` First, validate if you can migrate the cloud service using the following commands:
-```shell
+```azurecli-interactive
azure service deployment validate-migration <serviceName> <deploymentName> new "" "" "" ```
Prepare the virtual machines in the cloud service for migration. You have two op
If you want to migrate the VMs to a platform-created virtual network, use the following command.
-```azurecli
+```azurecli-interactive
azure service deployment prepare-migration <serviceName> <deploymentName> new "" "" "" ``` If you want to migrate to an existing virtual network in the Resource Manager deployment model, use the following command.
-```azurecli
+```azurecli-interactive
azure service deployment prepare-migration <serviceName> <deploymentName> existing <destinationVNETResourceGroupName> <subnetName> <vnetName> ``` After the prepare operation is successful, you can look through the verbose output to get the migration state of the VMs and ensure that they are in the `Prepared` state.
-```azurecli
+```azurecli-interactive
azure vm show <vmName> -vv ``` Check the configuration for the prepared resources by using either CLI or the Azure portal. If you are not ready for migration and you want to go back to the old state, use the following command.
-```azurecli
+```azurecli-interactive
azure service deployment abort-migration <serviceName> <deploymentName> ``` If the prepared configuration looks good, you can move forward and commit the resources by using the following command.
-```azurecli
+```azurecli-interactive
azure service deployment commit-migration <serviceName> <deploymentName> ``` ## Step 4: Option 2 - Migrate virtual machines in a virtual network+ Pick the virtual network that you want to migrate. Note that if the virtual network contains web/worker roles or VMs with unsupported configurations, you will get a validation error message. Get all the virtual networks in the subscription by using the following command.
-```azurecli
+```azurecli-interactive
azure network vnet list ```
In the above example, the **virtualNetworkName** is the entire name **"Group cla
First, validate if you can migrate the virtual network using the following command:
-```shell
+```azurecli-interactive
azure network vnet validate-migration <virtualNetworkName> ``` Prepare the virtual network of your choice for migration by using the following command.
-```azurecli
+```azurecli-interactive
azure network vnet prepare-migration <virtualNetworkName> ``` Check the configuration for the prepared virtual machines by using either CLI or the Azure portal. If you are not ready for migration and you want to go back to the old state, use the following command.
-```azurecli
+```azurecli-interactive
azure network vnet abort-migration <virtualNetworkName> ``` If the prepared configuration looks good, you can move forward and commit the resources by using the following command.
-```azurecli
+```azurecli-interactive
azure network vnet commit-migration <virtualNetworkName> ``` ## Step 5: Migrate a storage account+ Once you're done migrating the virtual machines, we recommend you migrate the storage account. Prepare the storage account for migration by using the following command
-```azurecli
+```azurecli-interactive
azure storage account prepare-migration <storageAccountName> ``` Check the configuration for the prepared storage account by using either CLI or the Azure portal. If you are not ready for migration and you want to go back to the old state, use the following command.
-```azurecli
+```azurecli-interactive
azure storage account abort-migration <storageAccountName> ``` If the prepared configuration looks good, you can move forward and commit the resources by using the following command.
-```azurecli
+```azurecli-interactive
azure storage account commit-migration <storageAccountName> ```
virtual-machines Restore Point Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/restore-point-troubleshooting.md
Title: Troubleshoot restore point failures description: Symptoms, causes, and resolutions of restore point failures related to agent, extension, and disks. Previously updated : 07/13/2022 Last updated : 04/12/2023
Most common restore point failures can be resolved by following the troubleshoot
### Step 2: Check the health of Azure VM Guest Agent service **Ensure Azure VM Guest Agent service is started and up-to-date**:-- On a Windows VM:
- - Navigate to **services.msc** and ensure **Windows Azure VM Guest Agent service** is up and running. Also, ensure the [latest version](https://go.microsoft.com/fwlink/?LinkID=394789&clcid=0x409) is installed. [Learn more](#the-agent-is-installed-in-the-vm-but-its-unresponsive-for-windows-vms).
- - The Azure VM Agent is installed by default on any Windows VM deployed from an Azure Marketplace image from the portal, PowerShell, Command Line Interface, or an Azure Resource Manager template. A [manual installation of the Agent](../virtual-machines/extensions/agent-windows.md#manual-installation) may be necessary when you create a custom VM image that's deployed to Azure.
- - Review the support matrix to check if VM runs on the [supported Windows operating system](concepts-restore-points.md#operating-system-support).
-- On Linux VM,
- - Ensure the Azure VM Guest Agent service is running by executing the command `ps -e`. Also, ensure the [latest version](../virtual-machines/extensions/update-linux-agent.md) is installed. [Learn more](#the-agent-installed-in-the-vm-is-out-of-date-for-linux-vms).
- - Ensure the [Linux VM agent dependencies on system packages](../virtual-machines/extensions/agent-linux.md#requirements) have the supported configuration. For example: Supported Python version is 2.6 and above.
- - Review the support matrix to check if VM runs on the [supported Linux operating system.](concepts-restore-points.md#operating-system-support).
+
+# [On a Windows VM](#tab/windows)
+
+- Navigate to **services.msc** and ensure **Windows Azure VM Guest Agent service** is up and running. Also, ensure the [latest version](https://go.microsoft.com/fwlink/?LinkID=394789&clcid=0x409) is installed. [Learn more](#the-agent-is-installed-in-the-vm-but-its-unresponsive-for-windows-vms).
+- The Azure VM Agent is installed by default on any Windows VM deployed from an Azure Marketplace image from the portal, PowerShell, Command Line Interface, or an Azure Resource Manager template. A [manual installation of the Agent](../virtual-machines/extensions/agent-windows.md#manual-installation) may be necessary when you create a custom VM image that's deployed to Azure.
+- Review the support matrix to check if VM runs on the [supported Windows operating system](concepts-restore-points.md#operating-system-support).
+
+# [On Linux VM](#tab/linux)
+
+- Ensure the Azure VM Guest Agent service is running by executing the command `ps -e`. Also, ensure the [latest version](../virtual-machines/extensions/update-linux-agent.md) is installed. [Learn more](#the-agent-installed-in-the-vm-is-out-of-date-for-linux-vms).
+- Ensure the [Linux VM agent dependencies on system packages](../virtual-machines/extensions/agent-linux.md#requirements) have the supported configuration. For example, the supported Python version is 2.6 or later.
+- Review the support matrix to check if the VM runs on a [supported Linux operating system](concepts-restore-points.md#operating-system-support).
++ ### Step 3: Check the health of Azure VM Extension - **Ensure all Azure VM Extensions are in 'provisioning succeeded' state**: If any extension is in a failed state, then it can interfere with the restore point operation.
- - In the Azure portal, go to **Virtual machines** > **Settings** > **Extensions** > **Extensions status** and check if all the extensions are in **provisioning succeeded** state.
+ - In the Azure portal, go to **Virtual machines** > **Settings** > **Extensions** > **Extensions status** and check if all the extensions are in **provisioning succeeded** state.
- Ensure all [extension issues](../virtual-machines/extensions/overview.md#troubleshoot-extensions) are resolved and retry the restore point operation. - **Ensure COM+ System Application** is up and running. Also, the **Distributed Transaction Coordinator service** should be running as **Network Service account**.
Restore point creation fails if there are changes being made in parallel to the
**Error code**: OperationNotAllowed
-**Error message**: Operation 'Create Restore Point' is not allowed as disk(s) have not been allocated successfully. Please exclude these disk(s) using excludeDisks property and retry.
+**Error message**: Operation 'Create Restore Point' is not allowed as disk(s) have not been allocated successfully. Please exclude these disk(s) using excludeDisks property and retry.
If any one of the disks attached to the VM isn't allocated properly, the restore point fails. You must exclude these disks before triggering creation of restore points for the VM. If you're using the ARM API to create a restore point, to exclude a disk, add its identifier to the excludeDisks property in the request body. If you're using [CLI](virtual-machines-create-restore-points-cli.md#exclude-disks-when-creating-a-restore-point), [PowerShell](virtual-machines-create-restore-points-powershell.md#exclude-disks-from-the-restore-point), or [Portal](virtual-machines-create-restore-points-portal.md#step-2-create-a-vm-restore-point), set the respective parameters.
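For instance, a rough Azure CLI sketch of excluding a disk at creation time (the group, collection, restore point, and disk names are hypothetical placeholders; see the linked CLI article for the authoritative parameters):

```bash
az restore-point create \
  --resource-group myResourceGroup \
  --collection-name myRestorePointCollection \
  --name myRestorePoint \
  --exclude-disks "/subscriptions/<subscription-id>/resourceGroups/myResourceGroup/providers/Microsoft.Compute/disks/myProblemDisk"
```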
If any one of the disks attached to the VM isn't allocated properly, the restore
**Error code**: VMRestorePointClientError
-**Error message**: Creation of Restore Point of a Virtual Machine with Shared disks is not supported. You may exclude this disk from the restore point via excludeDisks property.
+**Error message**: Creation of Restore Point of a Virtual Machine with Shared disks is not supported. You may exclude this disk from the restore point via excludeDisks property.
Restore points are currently not supported for shared disks. You need to exclude these disks before triggering creation of a restore point for the VM. If you are using the ARM API to create a restore point, to exclude a disk, add its identifier to the excludeDisks property in the request body. If you are using [CLI](virtual-machines-create-restore-points-cli.md#exclude-disks-when-creating-a-restore-point), [PowerShell](virtual-machines-create-restore-points-powershell.md#exclude-disks-from-the-restore-point), or [Portal](virtual-machines-create-restore-points-portal.md#step-2-create-a-vm-restore-point), follow the respective steps. ### VMAgentStatusCommunicationError - VM agent unable to communicate with compute service
-**Error code**: VMAgentStatusCommunicationError
+**Error code**: VMAgentStatusCommunicationError
**Error message**: VM has not reported status for VM agent or extensions.
This error could also occur when one of the extension failures puts the VM into
**Error code**: VMRestorePointClientError
-**Error message**: Restore Point creation failed due to COM+ error. Please restart windows service "COM+ System Application" (COMSysApp). If the issue persists, restart the VM.
+**Error message**: Restore Point creation failed due to COM+ error. Please restart windows service "COM+ System Application" (COMSysApp). If the issue persists, restart the VM.
Restore point operations fail if the COM+ service is not running or if there are any errors with this service. Restart the COM+ System Application, and restart the VM and retry the restore point operation.
Restore point operations require Visual C++ Redistributable for Visual Studio 20
**Error message**: Restore Point creation failed as the maximum allowed snapshot limit of one or more disk blobs has been reached. Please delete some existing restore points of this VM and then retry.
-The number of restore points across the restore point collections and resource groups for a VM can't exceed 500. To create a new restore point, delete the existing restore points.
+The number of restore points across the restore point collections and resource groups for a VM can't exceed 500. To create a new restore point, delete the existing restore points.
### VMRestorePointClientError - Restore Point creation failed with the error "COM+ was unable to talk to the Microsoft Distributed Transaction Coordinator".
The number of restore points across the restore point collections and resource g
**Error message**: Restore Point creation failed with the error "COM+ was unable to talk to the Microsoft Distributed Transaction Coordinator". Follow these steps to resolve this error:
+- Open services.msc from an elevated command prompt
+- Make sure that **Log On As** value for **Distributed Transaction Coordinator** service is set to **Network Service** and the service is running.
+- If this service fails to start, reinstall this service.
### VMRestorePointClientError - Restore Point creation failed due to inadequate VM resources.
After you trigger creation of restore point, the compute service starts communic
**Error message**: RestorePoint creation failed since a concurrent 'Create RestorePoint' operation was triggered on the VM. Your recent restore point creation failed because there's already an existing restore point being created. You can't create a new restore point until the current restore point is fully created. Ensure the restore point creation operation currently in progress is completed before triggering another restore point creation operation.
-
+ To check the restore points in progress, do the following steps: 1. Sign in to the Azure portal, select **All services**. Enter **Recovery Services** and select **Restore point collection**. The list of Restore point collections appears.
The VM agent might have been corrupted, or the service might have been stopped.
5. Verify that the Microsoft Azure Guest Agent services appear in services. 6. Retry the restore point operation. - Also, verify that [Microsoft .NET 4.5 is installed](/dotnet/framework/migration-guide/how-to-determine-which-versions-are-installed) in the VM. .NET 4.5 is required for the VM agent to communicate with the service. ### The agent installed in the VM is out of date (for Linux VMs)
Most agent-related or extension-related failures for Linux VMs are caused by iss
If the process isn't running, restart it by using the following commands:
- - For Ubuntu: `service walinuxagent start`
- - For other distributions: `service waagent start`
+ - For Ubuntu/Debian:
+
+ ```bash
+ sudo systemctl start walinuxagent
+ ```
+
+ - For other Linux distributions:
+
+ ```bash
+ sudo systemctl start waagent
+ ```
3. [Configure the auto restart agent](https://github.com/Azure/WALinuxAgent/wiki/Known-Issues#mitigate_agent_crash). 4. Retry the restore point operation. If the failure persists, collect the following logs from the VM:
Most agent-related or extension-related failures for Linux VMs are caused by iss
If you require verbose logging for waagent, follow these steps:
-1. In the /etc/waagent.conf file, locate the following line: **Enable verbose logging (y|n)**.
+1. In the `/etc/waagent.conf` file, locate the following line: **Enable verbose logging (y|n)**.
2. Change the **Logs.Verbose** value from *n* to *y*. 3. Save the change, and then restart waagent by completing the steps described earlier in this section. ### VM-Agent configuration options are not set (for Linux VMs)
-A configuration file (/etc/waagent.conf) controls the actions of waagent. Configuration File Options **Extensions.Enable** should be set to **y** and **Provisioning.Agent** should be set to **auto** for restore points to work.
+A configuration file (`/etc/waagent.conf`) controls the actions of waagent. Configuration File Options **Extensions.Enable** should be set to **y** and **Provisioning.Agent** should be set to **auto** for restore points to work.
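A quick, minimal check of those two options on the VM (a sketch that only reads the file mentioned above):

```bash
# Print the agent configuration lines for extensions and provisioning
grep -E 'Extensions\.Enable|Provisioning\.Agent' /etc/waagent.conf
```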
For the full list of VM-Agent Configuration File Options, see https://github.com/Azure/WALinuxAgent#configuration-file-options. ### Application control solution is blocking IaaSBcdrExtension.exe
The following conditions might cause the snapshot task to fail:
3. In the **Settings** section, select **Locks** to display the locks. 4. To remove the lock, select **Delete**.
- :::image type="content" source="./media/restore-point-troubleshooting/delete-lock-inline.png" alt-text="Screenshot of Delete lock in Azure portal." lightbox="./media/restore-point-troubleshooting/delete-lock-expanded.png":::
+ :::image type="content" source="./media/restore-point-troubleshooting/delete-lock-inline.png" alt-text="Screenshot of Delete lock in Azure portal." lightbox="./media/restore-point-troubleshooting/delete-lock-expanded.png":::
virtual-machines Vm Applications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/vm-applications.md
Previously updated : 03/28/2023 Last updated : 04/12/2023
VM Applications are a resource type in Azure Compute Gallery (formerly known as Shared Image Gallery) that simplifies management, sharing, and global distribution of applications for your virtual machines. --
-While you can create an image of a VM with apps pre-installed, you would need to update your image each time you have application changes. Separating your application installation from your VM images means there's no need to publish a new image for every line of code change.
+While you can create an image of a VM with apps preinstalled, you would need to update your image each time you have application changes. Separating your application installation from your VM images means there's no need to publish a new image for every line of code change.
Application packages provide benefits over other deployment and packaging methods:
Application packages provide benefits over other deployment and packaging method
- Support for virtual machines, and both flexible and uniform scale sets - - If you have Network Security Group (NSG) rules applied on your VM or scale set, downloading the packages from an internet repository might not be possible. And with storage accounts, downloading packages onto locked-down VMs would require setting up private links. --- ## What are VM app packages? The VM application packages use multiple resource types: | Resource | Description| |-||
-| **Azure compute gallery** | A gallery is a repository for managing and sharing application packages. Users can share the gallery resource and all the child resources will be shared automatically. The gallery name must be unique per subscription. For example, you may have one gallery to store all your OS images and another gallery to store all your VM applications.|
-| **VM application** | The definition of your VM application. It's a *logical* resource that stores the common metadata for all the versions under it. For example, you may have an application definition for Apache Tomcat and have multiple versions within it. |
+| **Azure compute gallery** | A gallery is a repository for managing and sharing application packages. Users can share the gallery resource and all the child resources are shared automatically. The gallery name must be unique per subscription. For example, you may have one gallery to store all your OS images and another gallery to store all your VM applications.|
+| **VM application** | The definition of your VM application. It's a *logical* resource that stores the common metadata for all the versions under it. For example, you may have an application definition for Apache Tomcat and have multiple versions within it. |
| **VM Application version** | The deployable resource. You can globally replicate your VM application versions to target regions closer to your VM infrastructure. The VM Application Version must be replicated to a region before it may be deployed on a VM in that region. | - ## Limitations -- **No more than 3 replicas per region**: When creating a VM Application version, the maximum number of replicas per region is three.
+- **No more than 3 replicas per region**: When you're creating a VM Application version, the maximum number of replicas per region is three.
- **Public access on storage**: Only public-level access to storage accounts works, as other restriction levels fail deployments.
The VM application packages use multiple resource types:
- **Requires a VM Agent**: The VM agent must exist on the VM and be able to receive goal states. - **Multiple versions of same application on the same VM**: You can't have multiple versions of the same application on a VM.-- **Move operations currently not supported**: Moving VMs with VM Apps to other resource groups are not supported at this time.
+- **Move operations currently not supported**: Moving VMs with VM Apps to other resource groups isn't supported at this time.
> [!NOTE]
-> For Azure Compute Gallery and VM Applications, Storage SAS can be deleted after replication.
+> For Azure Compute Gallery and VM Applications, Storage SAS can be deleted after replication.
## Cost
-There's no extra charge for using VM Application Packages, but you'll be charged for the following resources:
+There's no extra charge for using VM Application Packages, but you're charged for the following resources:
- Storage costs of storing each package and any replicas. -- Network egress charges for replication of the first image version from the source region to the replicated regions. Subsequent replicas are handled within the region, so there are no extra charges.
+- Network egress charges for replication of the first image version from the source region to the replicated regions. Subsequent replicas are handled within the region, so there are no extra charges.
For more information on network egress, see [Bandwidth pricing](https://azure.microsoft.com/pricing/details/bandwidth/). - ## VM applications The VM application resource defines the following about your VM application:
The VM application resource defines the following about your VM application:
## VM application versions VM application versions are the deployable resource. Versions are defined with the following properties:+ - Version number - Link to the application package file in a storage account - Install string for installing the application
VM application versions are the deployable resource. Versions are defined with t
- Configuration file name to be used to configure the app on the VM - A link to the configuration file for the VM application, in which you can include license files - Update string for how to update the VM application to a newer version-- End-of-life date. End-of-life dates are informational; you'll still be able to deploy VM application versions past the end-of-life date.
+- End-of-life date. End-of-life dates are informational; you're still able to deploy VM application versions past the end-of-life date.
- Exclude from latest. You can keep a version from being used as the latest version of the application. - Target regions for replication - Replica count per region
-## Download directory
-
+## Download directory
+ The download locations of the application package and the configuration files are:
-ΓÇ»
+ - Linux: `/var/lib/waagent/Microsoft.CPlat.Core.VMApplicationManagerLinux/<appname>/<app version> ` - Windows: `C:\Packages\Plugins\Microsoft.CPlat.Core.VMApplicationManagerWindows\1.0.9\Downloads\<appname>\<app version> ` - The install/update/remove commands should be written assuming the application package and the configuration file are in the current directory. ## File naming
-When the application file gets downloaded to the VM, the file name is the same as the name you use when you create the VM application. For example, if I name my VM application `myApp`, the file that will be downloaded to the VM will also be named `myApp`, regardless of what the file name is used in the storage account. If your VM application also has a configuration file, that file is the name of the application with `_config` appended. If `myApp` has a configuration file, it will be named `myApp_config`.
+When the application file gets downloaded to the VM, the file name is the same as the name you use when you create the VM application. For example, if I name my VM application `myApp`, the file that is downloaded to the VM is also named `myApp`, regardless of the file name used in the storage account. If your VM application also has a configuration file, that file takes the name of the application with `_config` appended. If `myApp` has a configuration file, it's named `myApp_config`.
+
+For example, if I name my VM application `myApp` when I create it in the Gallery, but it's stored as `myApplication.exe` in the storage account, when it gets downloaded to the VM the file name is `myApp`. My install string should start by renaming the file to be whatever it needs to be to run on the VM (like `myApp.exe`).
-For example, if I name my VM application `myApp` when I create it in the Gallery, but it's stored as `myApplication.exe` in the storage account, when it gets downloaded to the VM the file name will be `myApp`. My install string should start by renaming the file to be whatever it needs to be to run on the VM (like myApp.exe).
+The install, update, and remove commands must be written with file naming in mind. The `configFileName` is assigned to the config file for the VM and `packageFileName` is the name assigned to the downloaded package on the VM. For more information regarding these other VM settings, see [UserArtifactSettings](/rest/api/compute/gallery-application-versions/create-or-update?tabs=HTTP#userartifactsettings) in our API docs.
-The install, update, and remove commands must be written with file naming in mind. The `configFileName` is assigned to the config file for the VM and `packageFileName` is the name assigned downloaded package on the VM. For more information regarding these additional VM settings, refer to [UserArtifactSettings](/rest/api/compute/gallery-application-versions/create-or-update?tabs=HTTP#userartifactsettings) in our API docs.
-
## Command interpreter The default command interpreters are:-- Linux: `/bin/sh` +
+- Linux: `/bin/bash`
- Windows: `cmd.exe` It's possible to use a different interpreter like Chocolatey or PowerShell, as long as it's installed on the machine, by calling the executable and passing the command to it. For example, to have your command run in PowerShell on Windows instead of cmd, you can pass `powershell.exe -Command '<powershell command>'`
-ΓÇ»
## How updates are handled
-When you update an application version on a VM or VMSS, the update command you provided during deployment will be used. If the updated version doesn't have an update command, then the current version will be removed and the new version will be installed.
+When you update an application version on a VM or Virtual Machine Scale Sets, the update command you provided during deployment is used. If the updated version doesn't have an update command, then the current version is removed and the new version is installed.
Update commands should be written with the expectation that they could be updating from any older version of the VM application. -
-## Tips for creating VM Applications on Linux
+## Tips for creating VM Applications on Linux
Third party applications for Linux can be packaged in a few ways. Let's explore how to handle creating the install commands for some of the most common.
-### .tar and .gz files
+### .tar and .gz files
-These are compressed archives and can be extracted to a desired location. Check the installation instructions for the original package to in case they need to be extracted to a specific location. If .tar.gz file contains source code, refer to the instructions for the package for how to install from source.
+These files are compressed archives and can be extracted to a desired location. Check the installation instructions for the original package in case the files need to be extracted to a specific location. If the .tar.gz file contains source code, see the package's instructions for how to install from source.
Example to install command to install `golang` on a Linux machine: ```bash
-tar -C /usr/local -xzf go_linux
+sudo tar -C /usr/local -xzf go_linux
``` Example remove command: ```bash
-rm -rf /usr/local/go
+sudo rm -rf /usr/local/go
```
-### .deb, .rpm, and other platform specific packages
+### Creating application packages using `.deb`, `.rpm`, and other platform specific packages for VMs with restricted internet access
+ You can download individual packages for platform specific package managers, but they usually don't contain all the dependencies. For these files, you must also include all dependencies in the application package, or have the system package manager download the dependencies through the repositories that are available to the VM. If you're working with a VM with restricted internet access, you must package all the dependencies yourself.
+Figuring out the dependencies can be a bit tricky. There are third party tools that can show you the entire dependency tree.
+
+# [Ubuntu](#tab/ubuntu)
+
+In Ubuntu, you can run `sudo apt show <package_name> | grep Depends` to show all the packages that are installed when executing the `sudo apt-get install <package_name>` command. Then you can use that output to download all `.deb` files to create an archive that can be used as the application package.
+
+1. For example, to create a VM application package to install PowerShell for Ubuntu, first run the following commands to enable the repository where PowerShell can be downloaded from and also to identify the package dependencies on a new Ubuntu VM.
+
+```bash
+# Download the Microsoft repository GPG keys
+wget -q "https://packages.microsoft.com/config/ubuntu/$(lsb_release -rs)/packages-microsoft-prod.deb"
+# Register the Microsoft repository GPG keys
+sudo dpkg -i packages-microsoft-prod.deb
+sudo rm -rf packages-microsoft-prod.deb
+sudo apt update
+sudo apt show powershell | grep Depends
+```
+
+2. Check the output of the line **Depends** which lists the following packages:
+
+```output
+Depends: libc6, libgcc1, libgssapi-krb5-2, libstdc++6, zlib1g, libicu72|libicu71|libicu70|libicu69|libicu68|libicu67|libicu66|libicu65|libicu63|libicu60|libicu57|libicu55|libicu52, libssl3|libssl1.1|libssl1.0.2|libssl1.
+```
+
+3. Download each of these files using `sudo apt-get download <package_name>` and create a tar compressed archive with all files.
-Figuring out the dependencies can be a bit tricky. There are third party tools that can show you the entire dependency tree.
+- Ubuntu 18.04:
-On Ubuntu, you can run `apt-get install <name> --simulate` to show all the packages that will be installed for the `apt-get install <name>` command. Then you can use that output to download all .deb files to create an archive that can be used as the application package. The downside to this method is that it doesn't show the dependencies that are already installed on the VM.
-
-Example, to create a VM application package to install PowerShell for Ubuntu, run the command `apt-get install powershell --simulate` on a new Ubuntu VM. Check the output of the line **The following NEW packages will be installed** which lists the following packages:
-- `liblttng-ust-ctl4` -- `liblttng-ust0` -- `liburcu6` -- `powershell`.
+```bash
+mkdir /tmp/powershell
+cd /tmp/powershell
+sudo apt-get download libc6
+sudo apt-get download libgcc1
+sudo apt-get download libgssapi-krb5-2
+sudo apt-get download libstdc++6
+sudo apt-get download zlib1g
+sudo apt-get download libssl1.1
+sudo apt-get download libicu60
+sudo apt-get download powershell
+sudo tar -cvzf powershell.tar.gz *.deb
+```
-Download these files using `apt-get download` and create a tar archive with all files at the root level. This tar archive will be the application package file. The install command in this case is:
+- Ubuntu 20.04:
```bash
-tar -xf powershell && dpkg -i ./liblttng-ust-ctl4_2.10.1-1_amd64.deb ./liburcu6_0.10.1-1ubuntu1_amd64.deb ./liblttng-ust0_2.10.1-1_amd64.deb ./powershell_7.1.2-1.ubuntu.18.04_amd64.deb
+mkdir /tmp/powershell
+cd /tmp/powershell
+sudo apt-get download libc6
+sudo apt-get download libgcc1
+sudo apt-get download libgssapi-krb5-2
+sudo apt-get download libstdc++6
+sudo apt-get download zlib1g
+sudo apt-get download libssl1.1
+sudo apt-get download libicu66
+sudo apt-get download powershell
+sudo tar -cvzf powershell.tar.gz *.deb
```
-And the remove command is:
+- Ubuntu 22.04:
```bash
-dpkg -r powershell && apt autoremove
+mkdir /tmp/powershell
+cd /tmp/powershell
+sudo apt-get download libc6
+sudo apt-get download libgcc1
+sudo apt-get download libgssapi-krb5-2
+sudo apt-get download libstdc++6
+sudo apt-get download zlib1g
+sudo apt-get download libssl3
+sudo apt-get download libicu70
+sudo apt-get download powershell
+sudo tar -cvzf powershell.tar.gz *.deb
```
-Use `apt autoremove` instead of explicitly trying to remove all the dependencies. You may have installed other applications with overlapping dependencies, and in that case, an explicit remove command would fail.
+4. This tar archive is the application package file.
+- The install command in this case is:
-In case you don't want to resolve the dependencies yourself and apt/rpm is able to connect to the repositories, you can install an application with just one .deb/.rpm file and let apt/rpm handle the dependencies.
+```bash
+sudo tar -xvzf powershell.tar.gz && sudo dpkg -i *.deb
+```
+
+- And the remove command is:
+
+```bash
+sudo apt remove powershell
+```
+
+Use `sudo apt autoremove` instead of explicitly trying to remove all the dependencies. You may have installed other applications with overlapping dependencies, and in that case, an explicit remove command would fail.
+
+In case you don't want to resolve the dependencies yourself, and `apt` is able to connect to the repositories, you can install an application with just one `.deb` file and let `apt` handle the dependencies.
Example install command: ```bash
-dpkg -i <appname> || apt --fix-broken install -y
+dpkg -i <package_name> || apt --fix-broken install -y
```
-
-## Tips for creating VM Applications on Windows
-Most third party applications in Windows are available as .exe or .msi installers. Some are also available as extract and run zip files. Let us look at the best practices for each of them.
+# [Red Hat](#tab/rhel)
+
+In Red Hat, you can run `sudo yum deplist <package_name>` to show all the packages that are installed when executing the `sudo yum install <package_name>` command. Then you can use that output to download all `.rpm` files to create an archive that can be used as the application package.
+
+1. For example, to create a VM application package to install PowerShell for Red Hat, first run the following commands to enable the repository where PowerShell can be downloaded from and also to identify the package dependencies on a new RHEL VM.
+
+- RHEL 7:
+
+```bash
+# Register the Microsoft RedHat repository
+curl https://packages.microsoft.com/config/rhel/7/prod.repo | sudo tee /etc/yum.repos.d/microsoft.repo
+
+sudo yum deplist powershell
+```
+
+- RHEL 8:
+
+```bash
+# Register the Microsoft RedHat repository
+curl https://packages.microsoft.com/config/rhel/8/prod.repo | sudo tee /etc/yum.repos.d/microsoft.repo
+
+sudo dnf deplist powershell
+```
+
+2. Check the output of each of the dependency entries; the dependencies are listed after `provider:`:
+
+```output
+ dependency: /bin/sh
+ provider: bash.x86_64 4.2.46-35.el7_9
+ dependency: libicu
+ provider: libicu.x86_64 50.2-4.el7_7
+ provider: libicu.i686 50.2-4.el7_7
+ dependency: openssl-libs
+ provider: openssl-libs.x86_64 1:1.0.2k-26.el7_9
+ provider: openssl-libs.i686 1:1.0.2k-26.el7_9
+```
+
+3. Download each of these files using `sudo yum install --downloadonly <package_name>`, to download a package that isn't yet installed in the system, or `sudo yum reinstall --downloadonly <package_name>`, to download a package that's already installed in the system, and create a tar compressed archive with all files.
-### .exe installer
+```bash
+mkdir /tmp/powershell
+cd /tmp/powershell
+sudo yum reinstall --downloadonly --downloaddir=/tmp/powershell bash
+sudo yum reinstall --downloadonly --downloaddir=/tmp/powershell libicu
+sudo yum reinstall --downloadonly --downloaddir=/tmp/powershell openssl-libs
+sudo yum install --downloadonly --downloaddir=/tmp/powershell powershell
+sudo tar -cvzf powershell.tar.gz *.rpm
+```
+
+4. This tar archive is the application package file.
+
+- The install command in this case is:
+
+```bash
+sudo tar -xvzf powershell.tar.gz && sudo yum install *.rpm -y
+```
+
+- And the remove command is:
+
+```bash
+sudo yum remove powershell
+```
+
+In case you don't want to resolve the dependencies yourself and yum/dnf is able to connect to the repositories, you can install an application with just one `.rpm` file and let yum/dnf handle the dependencies.
+
+Example install command:
+
+```bash
+yum install <package.rpm> -y
+```
+
+# [SUSE](#tab/sles)
+
+In SUSE, you can run `sudo zypper info --requires <package_name>` to show all the packages that are installed when executing the `sudo zypper install <package_name>` command. Then you can use that output to download all `.rpm` files to create an archive that can be used as the application package.
+
+1. For example, to create a VM application package to install `azure-cli` for SUSE, first run the following commands to enable the repository where Azure CLI can be downloaded from and also to identify the package dependencies on a new SUSE VM.
+
+```bash
+sudo rpm --import https://packages.microsoft.com/keys/microsoft.asc
+sudo zypper addrepo --name 'Azure CLI' --check https://packages.microsoft.com/yumrepos/azure-cli azure-cli
+sudo zypper info --requires azure-cli
+```
+
+2. Check the output after **Requires** which lists the following packages:
+
+```output
+Requires : [98]
+ /usr/bin/python3
+ python(abi) = 3.6
+ azure-cli-command-modules-nspkg >= 2.0
+ azure-cli-nspkg >= 3.0.3
+ python3-azure-loganalytics >= 0.1.0
+ python3-azure-mgmt-apimanagement >= 0.2.0
+ python3-azure-mgmt-authorization >= 0.61.0
+ python3-azure-mgmt-batch >= 9.0.0
+ python3-azure-mgmt-cognitiveservices >= 6.3.0
+ python3-azure-mgmt-containerservice >= 9.4.0
+ python3-azure-mgmt-cosmosdb >= 1.0.0
+ python3-azure-mgmt-datalake-store >= 0.5.0
+ python3-azure-mgmt-deploymentmanager >= 0.2.0
+ python3-azure-mgmt-imagebuilder >= 0.4.0
+ python3-azure-mgmt-iothubprovisioningservices >= 0.2.0
+ python3-azure-mgmt-maps >= 0.1.0
+ python3-azure-mgmt-media >= 2.1.0
+<truncated>
+...
+<truncated>
+ python3-vsts-cd-manager >= 1.0.2
+ python3-websocket-client >= 0.56.0
+ python3-xmltodict >= 0.12
+ python3-azure-mgmt-keyvault >= 8.0.0
+ python3-azure-mgmt-storage >= 16.0.0
+ python3-azure-mgmt-billing >= 1.0.0
+ python3-azure-mgmt-cdn >= 5.2.0
+ python3-azure-mgmt-hdinsight >= 2.0.0
+ python3-azure-mgmt-netapp >= 0.14.0
+ python3-azure-mgmt-synapse >= 0.5.0
+ azure-cli-core = 2.17.1
+ python3-azure-batch >= 10.0
+ python3-azure-mgmt-compute >= 18.0
+ python3-azure-mgmt-containerregistry >= 3.0.0rc16
+ python3-azure-mgmt-databoxedge >= 0.2.0
+ python3-azure-mgmt-network >= 17.0.0
+ python3-azure-mgmt-security >= 0.6.0
+```
+
+3. Download each of these files using `sudo zypper install -f --download-only <package_name>` and create a tar compressed archive with all files.
-Installer executables typically launch a user interface (UI) and require someone to select through the UI. If the installer supports a silent mode parameter, it should be included in your installation string.
+```bash
+mkdir /tmp/azurecli
+cd /tmp/azurecli
+for i in $(sudo zypper info --requires azure-cli | sed -n -e '/Requires*/,$p' | grep -v "Requires" | awk -F '[>=]' '{print $1}') ; do sudo zypper --non-interactive --pkg-cache-dir /tmp/azurecli install -f --download-only $i; done
+for i in $(sudo find /tmp/azurecli -name "*.rpm") ; do sudo cp $i /tmp/azurecli; done
+sudo tar -cvzf azurecli.tar.gz *.rpm
+```
-Cmd.exe also expects executable files to have the extension .exe, so you need to rename the file to have the .exe extension.
+4. This tar archive is the application package file.
-If I wanted to create a VM application package for myApp.exe, which ships as an executable, my VM Application is called 'myApp', so I write the command assuming that the application package is in the current directory:
+- The install command in this case is:
+```bash
+sudo tar -xvzf azurecli.tar.gz && sudo zypper --no-refresh --no-remote --non-interactive install *.rpm
+```
+
+- And the remove command is:
+
+```bash
+sudo zypper remove azure-cli
```+++
+## Tips for creating VM Applications on Windows
+
+Most third party applications in Windows are available as .exe or .msi installers. Some are also available as extract and run zip files. Let us look at the best practices for each of them.
+
+### .exe installer
+
+Installer executables typically launch a user interface (UI) and require someone to select through the UI. If the installer supports a silent mode parameter, it should be included in your installation string.
+
+Cmd.exe also expects executable files to have the extension `.exe`, so you need to rename the file to have the `.exe` extension.
+
+If I want to create a VM application package for `myApp.exe`, which ships as an executable, my VM Application is called 'myApp', so I write the command assuming the application package is in the current directory:
+
+```terminal
"move .\\myApp .\\myApp.exe & myApp.exe /S -config myApp_config" ```
-
+ If the installer executable file doesn't support an uninstall parameter, you can sometimes look up the registry on a test machine to know where the uninstaller is located. In the registry, the uninstall string is stored in `Computer\HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall\<installed application name>\UninstallString` so I would use the contents as my remove command:
-```
+```terminal
'\"C:\\Program Files\\myApp\\uninstall\\helper.exe\" /S' ```
-### .msi installer
+### .msi installer
For command line execution of `.msi` installers, the commands to install or remove an application should use `msiexec`. Typically, `msiexec` runs as its own separate process and `cmd` doesn't wait for it to complete, which can lead to problems when installing more than one VM application. The `start` command can be used with `msiexec` to ensure that the installation completes before the command returns. For example:
-```
+```terminal
start /wait %windir%\\system32\\msiexec.exe /i myapp /quiet /forcerestart /log myapp_install.log ``` Example remove command:
-```
+```terminal
start /wait %windir%\\system32\\msiexec.exe /x $appname /quiet /forcerestart /log ${appname}_uninstall.log ```
-### Zipped files
+### Zipped files
For .zip or other zipped files, rename and unzip the contents of the application package to the desired destination. Example install command:
-```
+```terminal
rename myapp myapp.zip && mkdir C:\myapp && powershell.exe -Command "Expand-Archive -path myapp.zip -destinationpath C:\myapp" ``` Example remove command:
-```
+```terminal
rmdir /S /Q C:\\myapp
```

## Treat failure as deployment failure
-The VM application extension always returns a *success* regardless of whether any VM app failed while being installed/updated/removed. The VM Application extension will only report the extension status as failure when there's a problem with the extension or the underlying infrastructure. This is triggered by the "treat failure as deployment failure" flag which is set to `$false` by default and can be changed to `$true`. The failure flag can be configured in [PowerShell](/powershell/module/az.compute/add-azvmgalleryapplication#parameters) or [CLI](/cli/azure/vm/application#az-vm-application-set).
+The VM application extension always returns a *success* regardless of whether any VM app failed while being installed/updated/removed. The VM Application extension only reports the extension status as failure when there's a problem with the extension or the underlying infrastructure. This behavior is triggered by the "treat failure as deployment failure" flag, which is set to `$false` by default and can be changed to `$true`. The failure flag can be configured in [PowerShell](/powershell/module/az.compute/add-azvmgalleryapplication#parameters) or [CLI](/cli/azure/vm/application#az-vm-application-set).
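For example, here's a minimal Azure CLI sketch of enabling the flag when adding a VM application. The resource names and gallery application ID are placeholders; verify the exact flag name for your CLI version with `az vm application set --help`:

```bash
# Hypothetical resource names; the --treat-deployment-as-failure flag takes one value per application ID.
az vm application set \
  --resource-group myResourceGroup \
  --vm-name myVM \
  --app-version-ids "/subscriptions/<subscription-id>/resourceGroups/myResourceGroup/providers/Microsoft.Compute/galleries/myGallery/applications/myApp/versions/1.0.0" \
  --treat-deployment-as-failure true
```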
## Troubleshooting VM Applications
$resultSummary | convertto-json -depth 5
| More than one VM Application was specified with the same packageReferenceId. | The same application was specified more than once. |
| Subscription not authorized to access this image. | The subscription doesn't have access to this application version. |
| Storage account in the arguments doesn't exist. | There are no applications for this subscription. |
-| The platform image {image} isn't available. Verify that all fields in the storage profile are correct. For more details about storage profile information, please refer to https://aka.ms/storageprofile. | The application doesn't exist. |
-| The gallery image {image} is not available in {region} region. Please contact image owner to replicate to this region, or change your requested region. | The gallery application version exists, but it was not replicated to this region. |
-| The SAS is not valid for source uri {uri}. | A `Forbidden` error was received from storage when attempting to retrieve information about the url (either mediaLink or defaultConfigurationLink). |
+| The platform image {image} isn't available. Verify that all fields in the storage profile are correct. For more details about storage profile information, see https://aka.ms/storageprofile. | The application doesn't exist. |
+| The gallery image {image} isn't available in {region} region. Contact image owner to replicate to this region, or change your requested region. | The gallery application version exists, but it wasn't replicated to this region. |
+| The SAS isn't valid for source uri {uri}. | A `Forbidden` error was received from storage when attempting to retrieve information about the url (either mediaLink or defaultConfigurationLink). |
| The blob referenced by source uri {uri} doesn't exist. | The blob provided for the mediaLink or defaultConfigurationLink properties doesn't exist. |
-| The gallery application version url {url} cannot be accessed due to the following error: remote name not found. Ensure that the blob exists and that it's either publicly accessible or is a SAS url with read privileges. | The most likely case is that a SAS uri with read privileges was not provided. |
-| The gallery application version url {url} cannot be accessed due to the following error: {error description}. Ensure that the blob exists and that it's either publicly accessible or is a SAS url with read privileges. | There was an issue with the storage blob provided. The error description will provide more information. |
-| Operation {operationName} is not allowed on {application} since it is marked for deletion. You can only retry the Delete operation (or wait for an ongoing one to complete). | Attempt to update an application thatΓÇÖs currently being deleted. |
-| The value {value} of parameter 'galleryApplicationVersion.properties.publishingProfile.replicaCount' is out of range. The value must be between 1 and 3, inclusive. | Only between 1 and 3 replicas are allowed for VM Application versions. |
-| Changing property 'galleryApplicationVersion.properties.publishingProfile.manageActions.install' is not allowed. (or update, delete) | It is not possible to change any of the manage actions on an existing VmApplication. A new VmApplication version must be created. |
-| Changing property ' galleryApplicationVersion.properties.publishingProfile.settings.packageFileName ' is not allowed. (or configFileName) | It is not possible to change any of the settings, such as the package file name or config file name. A new VmApplication version must be created. |
-| The blob referenced by source uri {uri} is too big: size = {size}. The maximum blob size allowed is '1 GB'. | The maximum size for a blob referred to by mediaLink or defaultConfigurationLink is currently 1 GB. |
+| The gallery application version url {url} can't be accessed due to the following error: remote name not found. Ensure that the blob exists and that it's either publicly accessible or is a SAS url with read privileges. | The most likely case is that a SAS uri with read privileges wasn't provided. |
+| The gallery application version url {url} can't be accessed due to the following error: {error description}. Ensure that the blob exists and that it's either publicly accessible or is a SAS url with read privileges. | There was an issue with the storage blob provided. The error description provides more information. |
+| Operation {operationName} isn't allowed on {application} since it's marked for deletion. You can only retry the Delete operation (or wait for an ongoing one to complete). | Attempt to update an application that's currently being deleted. |
+| The value {value} of parameter 'galleryApplicationVersion.properties.publishingProfile.replicaCount' is out of range. The value must be between one and three, inclusive. | Only between one and three replicas are allowed for VM Application versions. |
+| Changing property 'galleryApplicationVersion.properties.publishingProfile.manageActions.install' isn't allowed. (or update, delete) | It isn't possible to change any of the manage actions on an existing VmApplication. A new VmApplication version must be created. |
+| Changing property ' galleryApplicationVersion.properties.publishingProfile.settings.packageFileName ' isn't allowed. (or configFileName) | It isn't possible to change any of the settings, such as the package file name or config file name. A new VmApplication version must be created. |
+| The blob referenced by source uri {uri} is too large: size = {size}. The maximum blob size allowed is '1 GB'. | The maximum size for a blob referred to by mediaLink or defaultConfigurationLink is currently 1 GB. |
| The blob referenced by source uri {uri} is empty. | An empty blob was referenced. |
-| {type} blob type is not supported for {operation} operation. Only page blobs and block blobs are supported. | VmApplications only supports page blobs and block blobs. |
-| The SAS is not valid for source uri {uri}. | The SAS uri supplied for mediaLink or defaultConfigurationLink is not a valid SAS uri. |
-| Cannot specify {region} in target regions because the subscription is missing required feature {featureName}. Either register your subscription with the required feature or remove the region from the target region list. | To use VmApplications in certain restricted regions, one must have the feature flag registered for that subscription. |
+| {type} blob type isn't supported for {operation} operation. Only page blobs and block blobs are supported. | VmApplications only supports page blobs and block blobs. |
+| The SAS isn't valid for source uri {uri}. | The SAS uri supplied for mediaLink or defaultConfigurationLink isn't a valid SAS uri. |
+| Can't specify {region} in target regions because the subscription is missing required feature {featureName}. Either register your subscription with the required feature or remove the region from the target region list. | To use VmApplications in certain restricted regions, one must have the feature flag registered for that subscription. |
| Gallery image version publishing profile regions {regions} must contain the location of image version {location}. | The list of regions for replication must contain the location where the application version is. |
-| Duplicate regions are not allowed in target publishing regions. | The publishing regions may not have duplicates. |
-| Gallery application version resources currently do not support encryption. | The encryption property for target regions is not supported for VM Applications |
+| Duplicate regions aren't allowed in target publishing regions. | The publishing regions may not have duplicates. |
+| Gallery application version resources currently don't support encryption. | The encryption property for target regions isn't supported for VM Applications |
| Entity name doesn't match the name in the request URL. | The gallery application version specified in the request url doesn't match the one specified in the request body. |
-| The gallery application version name is invalid. The application version name should follow Major(int32).Minor(int32).Patch(int32) format, where int is between 0 and 2,147,483,647 (inclusive). e.g. 1.0.0, 2018.12.1 etc. | The gallery application version must follow the format specified. |
--
+| The gallery application version name is invalid. The application version name should follow the Major(int32).Minor(int32).Patch(int32) format, where each `int` is between 0 and 2,147,483,647 (inclusive). For example, 1.0.0 or 2018.12.1. | The gallery application version must follow the format specified. |
## Next steps

-- Learn how to [create and deploy VM application packages](vm-applications-how-to.md).
+- Learn how to [create and deploy VM application packages](vm-applications-how-to.md).
virtual-machines Windows In Place Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows-in-place-upgrade.md
# In-place upgrade for VMs running Windows Server in Azure
-An in-place upgrade allows you to go from an older operating system to a newer one while keeping your settings, server roles, and data intact. This article will teach you how to move your Azure VMs to a later version of Windows Server using an in-place upgrade. Currently, upgrading to Windows Server 2019 and Windows Server 2022 is supported.
+An in-place upgrade allows you to go from an older operating system to a newer one while keeping your settings, server roles, and data intact. This article teaches you how to move your Azure VMs to a later version of Windows Server using an in-place upgrade. Currently, upgrading to Windows Server 2016, Windows Server 2019, and Windows Server 2022 is supported.
Before you begin an in-place upgrade:

- Review the upgrade requirements for the target operating system:
+ - Upgrade options for Windows Server 2016 from Windows Server 2012 or Windows Server 2012 R2
+
- Upgrade options for Windows Server 2019 from Windows Server 2012 R2 or Windows Server 2016
- Upgrade options for Windows Server 2022 from Windows Server 2016 or Windows Server 2019
Before you begin an in-place upgrade:
## Windows versions not yet supported for in-place upgrade

For the following versions, consider using the [workaround](#workaround) later in this article:

-- Windows Server 2012 Datacenter
-- Windows Server 2012 Standard
- Windows Server 2008 R2 Datacenter
- Windows Server 2008 R2 Standard
The in-place upgrade process requires the use of Managed Disks on the VM to be upgraded.
## Create snapshot of the operating system disk
-We recommend that you create a snapshot of your operating system disk and any data disks before starting the in-place upgrade process. This will enable you to revert to the previous state of the VM if anything fails during the in-place upgrade process. To create a snapshot on each disk, follow these steps to [create a snapshot of a disk](./snapshot-copy-managed-disk.md).
+We recommend that you create a snapshot of your operating system disk and any data disks before starting the in-place upgrade process. This enables you to revert to the previous state of the VM if anything fails during the in-place upgrade process. To create a snapshot on each disk, follow these steps to [create a snapshot of a disk](./snapshot-copy-managed-disk.md).
## Create upgrade media disk
To start an in-place upgrade, the upgrade media must be attached to the VM as a Managed Disk.
| Parameter | Definition | |||
-| resourceGroup | Name of the resource group where the upgrade media Managed Disk will be created. The named resource group will be created if it doesn't exist. |
-| location | Azure region where the upgrade media Managed Disk will be created. This must be the same region as the VM to be upgraded. |
+| resourceGroup | Name of the resource group where the upgrade media Managed Disk will be created. The named resource group is created if it doesn't exist. |
+| location | Azure region where the upgrade media Managed Disk is created. This must be the same region as the VM to be upgraded. |
| zone | Azure zone in the selected region where the upgrade media Managed Disk will be created. This must be the same zone as the VM to be upgraded. For regional VMs (non-zonal) the zone parameter should be "". |
| diskName | Name of the Managed Disk that will contain the upgrade media |
-| sku | Windows Server upgrade media version. This must be either: `server2022Upgrade` or `server2019Upgrade` |
+| sku | Windows Server upgrade media version. This must be either: `server2016Upgrade` or `server2019Upgrade` or `server2022Upgrade` |
### PowerShell script
To initiate the in-place upgrade the VM must be in the `Running` state. Once the
.\setup.exe /auto upgrade /dynamicupdate disable
```
-1. Select the correct "Upgrade to" image based on the current version and configuration of the VM using the following table:
-
-| Upgrade from | Upgrade to |
-|||
-| Windows Server 2012 R2 (Core) | Windows Server 2019 |
-| Windows Server 2012 R2 | Windows Server 2019 (Desktop Experience) |
-| Windows Server 2016 (Core) | Windows Server 2019 -or- Windows Server 2022 |
-| Windows Server 2016 (Desktop Experience) | Windows Server 2019 (Desktop Experience) -or- Windows Server 2022 (Desktop Experience) |
-| Windows Server 2019 (Core) | Windows Server 2022 |
-| Windows Server 2019 (Desktop Experience) | Windows Server 2022 (Desktop Experience) |
--
-
+1. Select the correct "Upgrade to" image based on the current version and configuration of the VM using the [Windows Server upgrade matrix](/windows-server/get-started/upgrade-overview).
During the upgrade process the VM will automatically disconnect from the RDP session. After the VM is disconnected from the RDP session the progress of the upgrade can be monitored through the [screenshot functionality available in the Azure portal](/troubleshoot/azure/virtual-machines/boot-diagnostics#enable-boot-diagnostics-on-existing-virtual-machine).
virtual-machines N Series Amd Driver Setup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/n-series-amd-driver-setup.md
Previously updated : 02/27/2023 Last updated : 04/13/2023
For basic specs, storage capacities, and disk details, see [GPU Windows VM sizes
| OS | Driver |
| -- | - |
-| Windows 11 64-bit 21H2<br/><br/>Windows 10 64-bit 21H1, 21H2, 20H2 (RSX not supported on Win10 20H2)<br/><br/>Windows 11 EMS 64-bit 21H2<br/><br/> Windows 10 EMS 64-bit 20H2, 21H2, 21H1(RSX not supported on EMS)<br/><br/>Windows Server 2016<br/><br/>Windows Server 2019 | [22.Q2-1]( https://download.microsoft.com/download/4/1/2/412559d0-4de5-4fb1-aa27-eaa3873e1f81/AMD-Azure-NVv4-Driver-22Q2.exe) (.exe) |
+| Windows 11 64-bit 21H2<br/><br/>Windows 10 64-bit 21H1, 21H2, 20H2 (RSX not supported on Win10 20H2)<br/><br/>Windows 11 EMS 64-bit 21H2<br/><br/> Windows 10 EMS 64-bit 20H2, 21H2, 21H1(RSX not supported on EMS)<br/><br/>Windows Server 2016<br/><br/>Windows Server 2019 | [22.Q2-2]( https://download.microsoft.com/download/4/1/2/412559d0-4de5-4fb1-aa27-eaa3873e1f81/AMD-Azure-NVv4-Driver-22Q2.exe) (.exe) |
Previous supported driver versions for Windows builds up to 1909 are [20.Q4-1](https://download.microsoft.com/download/0/e/6/0e611412-093f-40b8-8bf9-794a1623b2be/AMD-Azure-NVv4-Driver-20Q4-1.exe) (.exe) and [21.Q2-1](https://download.microsoft.com/download/4/e/-Azure-NVv4-Driver-21Q2-1.exe) (.exe)
virtual-machines Scheduled Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/scheduled-events.md
The Scheduled Events service is versioned. Versions are mandatory; the current v
> Previous preview releases of Scheduled Events supported {latest} as the api-version. This format is no longer supported and will be deprecated in the future.

### Enabling and disabling Scheduled Events
-Scheduled Events is enabled for your service the first time you make a request for events. You should expect a delayed response in your first call of up to two minutes.
-
-Scheduled Events is disabled for your service if it doesn't make a request for 24 hours.
+Scheduled Events is enabled for your service the first time you make a request for events. You should expect a delayed response in your first call of up to two minutes. Scheduled Events is disabled for your service if it doesn't make a request to the endpoint for 24 hours.
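As a quick check from inside the VM, you can poll the events endpoint with `curl`; a minimal sketch, noting that the api-version shown is an assumption and you should use the version your tooling targets:

```bash
# Query the Scheduled Events endpoint; the Metadata:true header is required.
curl -s -H "Metadata:true" "http://169.254.169.254/metadata/scheduledevents?api-version=2020-07-01"
```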
### User-initiated maintenance

User-initiated VM maintenance via the Azure portal, API, CLI, or PowerShell results in a scheduled event. You can then test the maintenance preparation logic in your application, and your application can prepare for user-initiated maintenance. If you restart a VM, an event with the type `Reboot` is scheduled. If you redeploy a VM, an event with the type `Redeploy` is scheduled. Typically, events with a user event source can be immediately approved to avoid a delay on user-initiated actions. We advise having a primary and secondary VM communicating and approving user-generated scheduled events in case the primary VM becomes unresponsive. This approach prevents delays in recovering your application back to a good state.
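To approve an event so the action proceeds immediately, the application posts the event ID back to the same endpoint; a minimal sketch with a placeholder event ID and an assumed api-version:

```bash
# Expedite a scheduled event by acknowledging its EventId.
curl -s -H "Metadata:true" \
  -d '{"StartRequests": [{"EventId": "<event-id-from-the-query-response>"}]}' \
  "http://169.254.169.254/metadata/scheduledevents?api-version=2020-07-01"
```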
+
+Scheduled events are disabled by default for [VMSS Guest OS upgrades or reimages](../../virtual-machine-scale-sets/virtual-machine-scale-sets-automatic-upgrade.md). To enable scheduled events for these operations, enable them using [OSImageNotificationProfile](https://learn.microsoft.com/rest/api/compute/virtual-machine-scale-sets/create-or-update?tabs=HTTP#osimagenotificationprofile).
## Use the API
Each event is scheduled a minimum amount of time in the future based on the event type.
| Freeze | 15 minutes |
| Reboot | 15 minutes |
| Redeploy | 10 minutes |
-| Preempt | 30 seconds |
| Terminate | [User Configurable](../../virtual-machine-scale-sets/virtual-machine-scale-sets-terminate-notification.md#enable-terminate-notifications): 5 to 15 minutes |

Once an event is scheduled, it moves into the `Started` state after it's been approved or the `NotBefore` time passes. However, in rare cases, Azure cancels the operation before it starts. In that case, the event is removed from the Events array, and the impact doesn't occur as previously scheduled.
virtual-machines Tutorial Automate Vm Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/tutorial-automate-vm-deployment.md
# Tutorial - Deploy applications to a Windows virtual machine in Azure with the Custom Script Extension
-**Applies to:** :heavy_check_mark: Window :heavy_check_mark: Flexible scale sets :heavy_check_mark: Uniform scale sets
+**Applies to:** :heavy_check_mark: Windows :heavy_check_mark: Flexible scale sets :heavy_check_mark: Uniform scale sets
To configure virtual machines (VMs) in a quick and consistent manner, you can use the [Custom Script Extension for Windows](../extensions/custom-script-windows.md). In this tutorial you learn how to:
virtual-machines Oracle Design https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/oracle/oracle-design.md
# Design and implement an Oracle database in Azure
-**Applies to:** :heavy_check_mark: Linux VMs
+**Applies to:** :heavy_check_mark: Linux VMs
-Azure is home for all Oracle workloads, including those which need to continue to run optimally in Azure with Oracle. If you have the [Diagnostic Pack](https://www.oracle.com/technetwork/database/enterprise-edition/overview/diagnostic-pack-11g-datasheet-1-129197.pdf) or the [Automatic Workload Repository](https://docs.oracle.com/en-us/iaas/operations-insights/doc/analyze-automatic-workload-repository-awr-performance-data.html) you can use this data to assess the Oracle workload, size the resource needs, and migrate it to Azure. The various metrics provided by Oracle in these reports can provide a baseline understanding of application performance and platform utilization.
+Azure is home for all Oracle workloads, including workloads that need to continue to run optimally in Azure with Oracle. If you have the [Oracle Diagnostic Pack](https://www.oracle.com/technetwork/database/enterprise-edition/overview/diagnostic-pack-11g-datasheet-1-129197.pdf) or the [Automatic Workload Repository](https://docs.oracle.com/en-us/iaas/operations-insights/doc/analyze-automatic-workload-repository-awr-performance-data.html) (AWR), you can gather data about your workloads. Use this data to assess the Oracle workload, size the resource needs, and migrate the workload to Azure. The various metrics provided by Oracle in these reports can provide an understanding of application performance and platform usage.
-This article will help you to understand how to size out an Oracle workload to run in Azure and explore the best architecture solutions to provide the most optimal cloud performance. The data provided by Oracle in the Statspack and even more so in its descendent, the AWR, will assist you in developing clear expectations about the limits of physical tuning through architecture, the advantages of logical tuning of database code, and the overall database design.
+This article helps you to prepare an Oracle workload to run in Azure and explore the best architecture solutions to provide optimal cloud performance. The data provided by Oracle in the Statspack and even more so in its descendent, the AWR, assists you in developing clear expectations. These expectations include the limits of physical tuning through architecture, the advantages of logical tuning of database code, and the overall database design.
-## Differences between the two environments
+## Differences between the two environments
-When you're migrating on-premises applications to Azure, keep in mind a few important differences between the two environments.
+When you're migrating on-premises applications to Azure, keep in mind a few important differences between the two environments.
-One important difference is that in an Azure implementation, resources such as VMs, disks, and virtual networks are shared among other clients. In addition, resources can be throttled based on the requirements. Instead of focusing on avoiding failing (sometimes referred to as *mean time between failures*, or MTBF), Azure is more focused on surviving the failure (sometimes referred to as *mean time to recovery*, or MTTR).
+One important difference is that in an Azure implementation, resources such as VMs, disks, and virtual networks are shared among other clients. In addition, resources can be throttled based on the requirements. Instead of focusing on avoiding failing, Azure focuses more on surviving the failure. The first approach tries to increase *mean time between failures* (MTBF) and the second tries to decrease *mean time to recovery* (MTTR).
The following table lists some of the differences between an on-premises implementation and an Azure implementation of an Oracle database.

| | On-premises implementation | Azure implementation |
-| | | |
-| **Networking** |LAN/WAN |SDN (software-defined networking)|
-| **Security group** |IP/port restriction tools |[Network security group (NSG)](https://azure.microsoft.com/blog/network-security-groups) |
-| **Resilience** |MTBF |MTTR |
-| **Planned maintenance** |Patching/upgrades|[Availability sets](/previous-versions/azure/virtual-machines/windows/infrastructure-example) (patching/upgrades managed by Azure) |
-| **Resource** |Dedicated |Shared with other clients|
-| **Regions** |Datacenters |[Region pairs](../../regions.md#region-pairs)|
-| **Storage** |SAN/physical disks |[Azure-managed storage](https://azure.microsoft.com/pricing/details/managed-disks/?v=17.23h)|
-| **Scale** |Vertical scale |Horizontal scale|
+|:--- |:--- |:--- |
+| **Networking** |LAN/WAN | Software-defined networking (SDN) |
+| **Security group** | IP/port restriction tools | [Network security group (NSG)](https://azure.microsoft.com/blog/network-security-groups) |
+| **Resilience** | MTBF | MTTR |
+| **Planned maintenance** | Patching/upgrades| [Availability sets](/previous-versions/azure/virtual-machines/windows/infrastructure-example) with patching/upgrades managed by Azure |
+| **Resource** | Dedicated | Shared with other clients |
+| **Regions** | Datacenters | [Region pairs](../../regions.md#region-pairs) |
+| **Storage** | SAN/physical disks | [Azure-managed storage](../../managed-disks-overview.md) |
+| **Scale** | Vertical scale | Horizontal scale |
### Requirements
-It's a good idea to consider the following requirements before you start your migration:
+Consider the following requirements before you start your migration:
-- Determine the real CPU usage. Oracle is licensed by core, which means that sizing the vCPU needs can be an essential exercise to help you reduce costs.
+- Determine the real CPU usage. Oracle licenses by core, which means that sizing your vCPU needs can be essential to help you reduce costs.
- Determine the database size, backup storage, and growth rate.
-- Determine the I/O requirements, which you can estimate based on Oracle Statspack and Automatic Workload Repository (AWR) reports. You can also estimate the requirements from storage monitoring tools available from the operating system.
+- Determine the I/O requirements, which you can estimate based on Oracle Statspack and the AWR reports. You can also estimate the requirements from storage monitoring tools available from the operating system.
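For example, one way to sample disk I/O from the operating system on Linux is `iostat` from the sysstat package; a minimal sketch in which the interval and count are arbitrary:

```bash
# Extended device statistics in MB/s, sampled every 60 seconds, 60 times (one hour).
iostat -xm 60 60
```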
## Configuration options
-It's a good idea to generate an AWR report and obtain some metrics from it that help you make decisions about configuration. Then, there are four potential areas that you can tune to improve performance in an Azure environment:
+It's a good idea to generate an AWR report and obtain some metrics from it to help you make decisions about configuration. Then, there are four potential areas that you can tune to improve performance in an Azure environment:
- Virtual machine size
- Network throughput
It's a good idea to generate an AWR report and obtain some metrics from it that
### Generate an AWR report
-If you have an existing an Oracle Enterprise Edition database and are planning to migrate to Azure, you have several options. If you have the [Diagnostics Pack](https://www.oracle.com/technetwork/oem/pdf/511880.pdf) for your Oracle instances, you can run the Oracle AWR report to get the metrics (such as IOPS, Mbps, and GiBs). For those databases without the Diagnostics Pack license, or for an Oracle Standard Edition database, you can collect the same important metrics with a Statspack report after manual snapshots have been collected. The main differences between these two reporting methods are that AWR is automatically collected, and that it provides more information about the database than does Statspack.
+If you have an existing an Oracle Enterprise Edition database and are planning to migrate to Azure, you have several options. If you have the [Diagnostics Pack](https://www.oracle.com/technetwork/oem/pdf/511880.pdf) for your Oracle instances, you can run the Oracle AWR report to get the metrics, such as IOPS, Mbps, and GiBs. For those databases without the Diagnostics Pack license, or for an Oracle Standard Edition database, you can collect the same important metrics with a Statspack report after you collect manual snapshots. The main differences between these two reporting methods are that AWR is automatically collected, and that it provides more information about the database than does Statspack.
-You might consider running your AWR report during both regular and peak workloads, so you can compare. To collect the more accurate workload, consider an extended window report of one week, as opposed to one day. AWR does provide averages as part of its calculations in the report.
+Consider running your AWR report during both regular and peak workloads, so you can compare. To collect the more accurate workload, consider an extended window report of one week, as opposed to one day. AWR provides averages as part of its calculations in the report. By default, the AWR repository retains eight days of data and takes snapshots at hourly intervals.
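If the default window is too short for your assessment, the snapshot retention and interval can be adjusted with the `DBMS_WORKLOAD_REPOSITORY` package; a minimal sketch, where 30 days of retention at hourly snapshots is only an example:

```bash
sqlplus / as sysdba <<'EOF'
-- Retention and interval are in minutes: 43200 = 30 days, 60 = hourly snapshots.
EXEC DBMS_WORKLOAD_REPOSITORY.MODIFY_SNAPSHOT_SETTINGS(retention => 43200, interval => 60);
EXIT;
EOF
```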
-For a datacenter migration, it's a good idea to gather reports for sizing on the production systems. Estimate remaining database copies used for user testing, test, and development by percentages (for example, 50 percent of production sizing).
+For a datacenter migration, you should gather reports for sizing on the production systems. Estimate remaining database copies used for user testing, test, and development by percentages. For example, estimate 50 percent of production sizing.
-By default, the AWR repository retains 8 days of data and takes snapshots at hourly intervals. To run an AWR report from the command line, use the following command:
+To run an AWR report from the command line, use the following command:
```bash
-$ sqlplus / as sysdba
-SQL> @$ORACLE_HOME/rdbms/admin/awrrpt.sql;
+sqlplus / as sysdba
+@$ORACLE_HOME/rdbms/admin/awrrpt.sql;
```

### Key metrics
The report prompts you for the following information:
- The number of days of snapshots to display. For example, for one-hour intervals, a one-week report produces 168 snapshot IDs.
- The beginning `SnapshotID` for the report window.
- The ending `SnapshotID` for the report window.
-- The name of the report to be created by the AWR script.
+- The name of the report that the AWR script creates.
-If you're running the AWR report on a Real Application Cluster (RAC), the command-line report is the *awrgrpt.sql* file, instead of *awrrpt.sql*. The `g` report creates a report for all nodes in the RAC database, in a single report. This report eliminates the need to run one report on each RAC node.
+If you're running the AWR report on a Real Application Cluster (RAC), the command-line report is the *awrgrpt.sql* file, instead of *awrrpt.sql*. The `g` report creates a report for all nodes in the RAC database in a single report. This report eliminates the need to run one report on each RAC node.
You can obtain the following metrics from the AWR report:

- Database name, instance name, and host name
-- Database version (supportability by Oracle)
+- Database version for supportability by Oracle
- CPU/Cores
-- SGA/PGA (and advisors to let you know if undersized)
+- SGA/PGA, and advisors to let you know if undersized
- Total memory in GB
- CPU percentage busy
- DB CPUs
You can obtain the following metrics from the AWR report:
- MBPs (read/write)
- Network throughput
- Network latency rate (low/high)
-- Top wait events
+- Top wait events
- Parameter settings for database
-- Is the database RAC, Exadata, or using advanced features or configurations
+- Whether the database is RAC, Exadata, or using advanced features or configurations
### Virtual machine size
Here are some steps you can take to configure virtual machine size for optimal p
#### Estimate VM size based on CPU, memory, and I/O usage from the AWR report
-Look at the top five timed foreground events that indicate where the system bottlenecks are. For example, in the following diagram, the log file sync is at the top. It indicates the number of waits that are required before the log writer writes the log buffer to the redo log file. These results indicate that better performing storage or disks are required. In addition, the diagram also shows the number of CPU (cores) and the amount of memory.
+Look at the top five timed foreground events that indicate where the system bottlenecks are. For example, in the following diagram, the log file sync is at the top. It indicates the number of waits that are required before the log writer writes the log buffer to the redo log file. These results indicate that better performing storage or disks are required. In addition, the diagram also shows the number of CPU cores and the amount of memory.
-![Screenshot that shows the log file sync at the top of the table.](./media/oracle-design/cpu_memory_info.png)
The following diagram shows the total I/O of read and write. There were 59 GB read and 247.3 GB written during the time of the report.
-![Screenshot that shows the total I/O of read and write.](./media/oracle-design/io_info.png)
#### Choose a VM
-Based on the information that you collected from the AWR report, the next step is to choose a VM of a similar size that meets your requirements. You can find a list of available VMs in [Memory optimized](../../sizes-memory.md).
+Based on the information that you collected from the AWR report, the next step is to choose a VM of a similar size that meets your requirements. For more information about available VMs, see [Memory optimized virtual machine sizes](../../sizes-memory.md).
#### Fine-tune the VM sizing with a similar VM series based on the ACU
-After you've chosen the VM, pay attention to the Azure compute unit (ACU) for the VM. You might choose a different VM based on the ACU value that better suits your requirements. For more information, see [Azure compute unit](../../acu.md).
+After you choose the VM, pay attention to the Azure compute unit (ACU) for the VM. You might choose a different VM based on the ACU value that better suits your requirements. For more information, see [Azure compute unit](../../acu.md).
-![Screenshot of the ACU units page.](./media/oracle-design/acu_units.png)
### Network throughput

The following diagram shows the relation between throughput and IOPS:
-![Diagram that shows the relationship between throughput and IOPS.](./media/oracle-design/throughput.png)
The total network throughput is estimated based on the following information:

- SQL*Net traffic
-- MBps x the number of servers (outbound stream, such as Oracle Data Guard)
+- MBps times the number of servers (outbound stream, such as Oracle Data Guard)
- Other factors, such as application replication
-![Screenshot of the SQL*Net throughput.](./media/oracle-design/sqlnet_info.png)
-Based on your network bandwidth requirements, there are various gateway types for you to choose from. These include basic, VpnGw, and Azure ExpressRoute. For more information, see the [VPN gateway pricing page](https://azure.microsoft.com/pricing/details/vpn-gateway/?v=17.23h).
+Based on your network bandwidth requirements, there are various gateway types for you to choose from. These types include basic, VpnGw, and Azure ExpressRoute. For more information, see [VPN Gateway pricing](https://azure.microsoft.com/pricing/details/vpn-gateway/?v=17.23h).
#### Recommendations

- Network latency is higher compared to an on-premises deployment. Reducing network round trips can greatly improve performance.
-- To reduce round-trips, consolidate applications that have high transactions or "chatty" apps on the same virtual machine.
+- To reduce round-trips, consolidate applications that have high transactions or *chatty* apps on the same virtual machine.
- Use virtual machines with [accelerated networking](../../../virtual-network/create-vm-accelerated-networking-cli.md) for better network performance.
- For certain Linux distributions, consider enabling [TRIM/UNMAP support](/previous-versions/azure/virtual-machines/linux/configure-lvm#trimunmap-support).
- Install [Oracle Enterprise Manager](https://www.oracle.com/technetwork/oem/enterprise-manager/overview/index.html) on a separate virtual machine.
-- Huge pages are not enabled on Linux by default. Consider enabling huge pages, and set `use_large_pages = ONLY` on the Oracle DB. This might help increase performance. For more information, see [USE_LARGE_PAGES](https://docs.oracle.com/en/database/oracle/oracle-database/12.2/refrn/USE_LARGE_PAGES.html#GUID-1B0F4D27-8222-439E-A01D-E50758C88390).
+- Huge pages aren't enabled on Linux by default. Consider enabling huge pages, and set `use_large_pages = ONLY` on the Oracle DB. This approach might help increase performance. For more information, see [USE_LARGE_PAGES](https://docs.oracle.com/en/database/oracle/oracle-database/12.2/refrn/USE_LARGE_PAGES.html#GUID-1B0F4D27-8222-439E-A01D-E50758C88390).
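A minimal sketch of setting that parameter follows. It assumes huge pages are already reserved at the OS level, and the instance must be restarted for the change to take effect:

```bash
sqlplus / as sysdba <<'EOF'
-- Takes effect after the next instance restart.
ALTER SYSTEM SET use_large_pages=ONLY SCOPE=SPFILE;
EXIT;
EOF
```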
### Disk types and configurations
-Here are some tips for you as you consider disks.
+Here are some tips as you consider disks.
- **Default OS disks:** These disk types offer persistent data and caching. They're optimized for operating system access at startup, and aren't designed for either transactional or data warehouse (analytical) workloads.

-- **Managed disks:** Azure manages the storage accounts that you use for your VM disks. You specify the disk type (most often, this is premium SSD for Oracle workloads), and the size of the disk that you need. Azure creates and manages the disk for you. A premium SSD-managed disk is only available for memory-optimized and specifically designed VM series. After you choose a particular VM size, the menu shows only the available premium storage SKUs that are based on that VM size.
+- **Managed disks:** Azure manages the storage accounts that you use for your VM disks. You specify the disk type and the size of the disk that you need. The type is most often Premium (SSD) for Oracle workloads. Azure creates and manages the disk for you. A premium SSD-managed disk is only available for memory-optimized and designed VM series. After you choose a particular VM size, the menu shows only the available premium storage SKUs that are based on that VM size.
- ![Screenshot of the managed disk page.](./media/oracle-design/premium_disk01.png)
+ :::image type="content" source="./media/oracle-design/premium_disk01.png" alt-text="Screenshot of the managed disk page." lightbox="./media/oracle-design/premium_disk01.png":::
-After you configure your storage on a VM, you might want to load test the disks before creating a database. Knowing the I/O rate in terms of both latency and throughput can help you determine if the VMs support the expected throughput with latency targets. There are a number of tools for application load testing, such as Oracle Orion, Sysbench, SLOB, and Fio.
+After you configure your storage on a VM, you might want to load test the disks before you create a database. Knowing the I/O rate in terms of both latency and throughput can help you determine if the VMs support the expected throughput with latency targets. There are several tools for application load testing, such as Oracle Orion, Sysbench, SLOB, and Fio.
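For example, here's a minimal `fio` sketch for an 8-KiB random-read test; the file path, size, and run time are placeholders, so adjust them to your disk layout:

```bash
# 8 KiB random reads with direct I/O for 60 seconds; reports aggregate IOPS and latency.
sudo fio --name=oracle-read-test --filename=/u02/fio-testfile --size=10G \
  --rw=randread --bs=8k --iodepth=32 --numjobs=4 --direct=1 \
  --runtime=60 --time_based --group_reporting
```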
-Run the load test again after you've deployed an Oracle database. Start your regular and peak workloads, and the results show you the baseline of your environment. Be realistic in the workload test. It doesn't make sense to run a workload that is nothing like what you will run on the VM in reality.
+Run the load test again after you deploy an Oracle database. Start your regular and peak workloads, and the results show you the baseline of your environment. Be realistic in the workload test. It doesn't make sense to run a workload that is nothing like what you run on the VM in reality.
-Because Oracle can be an I/O intensive database, it's quite important to size the storage based on the IOPS rate rather than the storage size. For example, if the required IOPS is 5,000, but you only need 200 GB, you might still get the P30 class premium disk even though it comes with more than 200 GB of storage.
+Because Oracle can be an I/O intensive database, it's important to size the storage based on the IOPS rate rather than the storage size. For example, if the required IOPS value is 5,000, but you only need 200 GB, you might still get the P30 class premium disk even though it comes with more than 200 GB of storage.
-You can get the IOPS rate from the AWR report. It's determined by the redo log, physical reads, and writes rate. Always verify that the VM series you choose has the ability to handle the I/O demand of the workload, too. If the VM has a lower I/O limit than the storage, the limit maximum will be set by the VM.
+You can get the IOPS rate from the AWR report. The redo log, physical reads, and writes rate determine the IOPS rate. Always verify that the VM series you choose has the ability to handle the I/O demand of the workload. If the VM has a lower I/O limit than the storage, the VM sets the limit maximum.
-![Screenshot of the AWR report page.](./media/oracle-design/awr_report.png)
For example, the redo size is 12,200,000 bytes per second, which is equal to 11.63 MBPs.
-The IOPS is 12,200,000 / 2,358 = 5,174.
+The IOPS value is 12,200,000 / 2,358 = 5,174.
After you have a clear picture of the I/O requirements, you can choose a combination of drives that are best suited to meet those requirements.
-#### Recommendations
+#### Disk type recommendations
-- For data tablespace, spread the I/O workload across a number of disks by using managed storage or Oracle ASM.
-- Use Oracle advanced compression to reduce I/O (for both data and indexes).
+- For data tablespace, spread the I/O workload across several disks by using managed storage or Oracle Automatic Storage Management (ASM).
+- Use Oracle advanced compression to reduce I/O for both data and indexes.
- Separate redo logs, temp, and undo tablespaces on separate data disks.
- Don't put any application files on default operating system disks. These disks aren't optimized for fast VM boot times, and they might not provide good performance for your application.
- When you're using M-Series VMs on premium storage, enable [write accelerator](../../how-to-enable-write-accelerator.md) on the redo logs disk.
After you have a clear picture of the I/O requirements, you can choose a combina
Although you have three options for host caching, only read-only caching is recommended for a database workload on an Oracle database. Read/write can introduce significant vulnerabilities to a data file, because the goal of a database write is to record it to the data file, not to cache the information. With read-only, all requests are cached for future reads. All writes continue to be written to disk.
-#### Recommendations
+#### Disk cache recommendations
To maximize throughput, start with read-only for host caching whenever possible. For premium storage, keep in mind that you must disable the barriers when you mount the file system with the read-only options. Update the */etc/fstab* file with the universally unique identifier to the disks.
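A minimal sketch of such an entry follows; the UUID, mount point, and file system are placeholders, and the barrier option depends on the file system, for example `barrier=0` for ext4 or `nobarrier` on older XFS kernels:

```bash
# Look up the UUID of the data disk partition, then reference it in /etc/fstab.
sudo blkid /dev/sdc1
# Example /etc/fstab line (placeholder UUID), mounting an ext4 data volume with barriers disabled:
# UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /u02/oradata  ext4  defaults,barrier=0  0  2
```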
-![Screenshot of the managed disk page that shows the read-only option.](./media/oracle-design/premium_disk02.png)
- For operating system disks, use premium SSD with read-write host caching.
- For data disks that contain the following, use premium SSD with read-only host caching: Oracle data files, temp files, control files, block change tracking files, BFILEs, files for external tables, and flashback logs.
-- For data disks that contain Oracle online redo log files, use premium SSD or UltraDisk with no host caching (the **None** option). Oracle redo log files that are archived, and Oracle Recovery Manager backup sets, can also reside with the online redo log files. Note that host caching is limited to 4095 GiB, so don't allocate a premium SSD larger than P50 with host caching. If you need more than 4 TiB of storage, stripe several premium SSDs with RAID-0, using Linux LVM2 or by using Oracle Automatic Storage Management.
+- For data disks that contain Oracle online redo log files, use premium SSD or UltraDisk with no host caching, the **None** option. Oracle redo log files that are archived and Oracle Recovery Manager backup sets, can also reside with the online redo log files. Host caching is limited to 4095 GiB, so don't allocate a premium SSD larger than P50 with host caching. If you need more than 4 TiB of storage, stripe several premium SSDs with RAID-0. Use Linux LVM2 or Oracle Automatic Storage Management.
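A minimal LVM2 sketch for striping three data disks into one volume, where the device names, stripe size, and volume names are placeholders:

```bash
# Stripe three premium SSDs (RAID-0) into a single logical volume for Oracle data files.
sudo pvcreate /dev/sdc /dev/sdd /dev/sde
sudo vgcreate vg_oradata /dev/sdc /dev/sdd /dev/sde
sudo lvcreate --type striped -i 3 -I 64 -l 100%FREE -n lv_oradata vg_oradata
sudo mkfs.xfs /dev/vg_oradata/lv_oradata
```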
If workloads vary greatly between the day and evening, and the I/O workload can support it, P1-P20 premium SSD with bursting might provide the performance required during night-time batch loads or limited I/O demands.

## Security
-After you have set up and configured your Azure environment, you need to secure your network. Here are some recommendations:
+After you set up and configure your Azure environment, you need to secure your network. Here are some recommendations:
- **NSG policy:** You can define your NSG by a subnet or a network interface card. It's simpler to control access at the subnet level, both for security and for force-routing application firewalls.
- **Jumpbox:** For more secure access, administrators shouldn't directly connect to the application service or database. Use a jumpbox between the administrator machine and Azure resources.
-![Diagram that shows the jumpbox topology.](./media/oracle-design/jumpbox.png)
- The administrator machine should only offer IP-restricted access to the jumpbox. The jumpbox should have access to the application and database.
+ :::image type="content" source="./media/oracle-design/jumpbox.png" alt-text="Diagram that shows the jumpbox topology." lightbox="./media/oracle-design/jumpbox.png":::
-- **Private network (subnets):** It's a good idea to have the application service and database on separate subnets, so that NSG policy can set better control.
+ The administrator machine should only offer IP-restricted access to the jumpbox. The jumpbox should have access to the application and database.
+- **Private network (subnets):** It's a good idea to have the application service and database on separate subnets, so that NSG policy can set better control.
-## Additional reading
+## Resources
- [Configure Oracle ASM](configure-oracle-asm.md)
- [Configure Oracle Data Guard](configure-oracle-dataguard.md)
-- [Configure Oracle Golden Gate](configure-oracle-golden-gate.md)
+- [Configure Oracle GoldenGate](configure-oracle-golden-gate.md)
- [Oracle backup and recovery](./oracle-overview.md)

## Next steps

-- [Tutorial: Create highly available VMs](../../linux/create-cli-complete.md)
+- [Create a complete Linux virtual machine with the Azure CLI](../../linux/create-cli-complete.md)
- [Explore VM deployment Azure CLI samples](https://github.com/Azure-Samples/azure-cli-samples/tree/master/virtual-machine)
virtual-machines Oracle Reference Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/oracle/oracle-reference-architecture.md
Title: Reference architectures for Oracle databases on Azure | Microsoft Docs
-description: References architectures for running Oracle Database Enterprise Edition databases on Microsoft Azure Virtual Machines.
+description: Learn about reference architectures for running Oracle Database Enterprise Edition databases on Microsoft Azure Virtual Machines.
Previously updated : 12/13/2019 Last updated : 04/10/2023

# Reference architectures for Oracle Database Enterprise Edition on Azure
-**Applies to:** :heavy_check_mark: Linux VMs
+**Applies to:** :heavy_check_mark: Linux VMs
-This guide details information on deploying a highly available Oracle database on Azure. In addition, this guide dives into disaster recovery considerations. These architectures have been created based on customer deployments. This guide only applies to Oracle Database Enterprise Edition.
+This article includes information on deploying a highly available Oracle database on Azure. In addition, this guide dives into disaster recovery considerations. These architectures have been created based on customer deployments. This guide only applies to Oracle Database Enterprise Edition.
-If you're interested in learning more about maximizing the performance of your Oracle database, see [Architect an Oracle DB](oracle-design.md).
+If you're interested in learning more about maximizing the performance of your Oracle database, see [Design and implement an Oracle database in Azure](oracle-design.md).
-## Assumptions
+## Prerequisites
-- You have an understanding of the different concepts of Azure such as [availability zones](../../../availability-zones/az-overview.md)
-- You're running Oracle Database Enterprise Edition 12c or later
-- You're aware of and acknowledge the licensing implications when using the solutions in this article
+- An understanding of the different concepts of Azure such as [availability zones](../../../availability-zones/az-overview.md)
+- Oracle Database Enterprise Edition 12c or later
+- Awareness of the licensing implications when using the solutions in this article
## High availability for Oracle databases
-Achieving high availability in the cloud is an important part of every organization's planning and design. Microsoft Azure offers [availability zones](../../../availability-zones/az-overview.md) and availability sets (to be used in regions where availability zones are unavailable). Read more about [managing availability of your virtual machines](../../availability.md) to design for the cloud.
+Achieving high availability in the cloud is an important part of every organization's planning and design. Azure offers [availability zones](../../../availability-zones/az-overview.md) and *availability sets* to be used in regions where availability zones are unavailable. For more information about how to design for the cloud, see [Availability options for Azure Virtual Machines](../../availability.md).
-In addition to cloud-native tools and offerings, Oracle provides solutions for high availability such as [Oracle Data Guard](https://docs.oracle.com/en/database/oracle/oracle-database/18/sbydb/introduction-to-oracle-data-guard-concepts.html#GUID-5E73667D-4A56-445E-911F-1E99092DD8D7), [Data Guard with FSFO](https://docs.oracle.com/en/database/oracle/oracle-database/12.2/dgbkr/https://docsupdatetracker.net/index.html), [Sharding](https://docs.oracle.com/en/database/oracle/oracle-database/12.2/admin/sharding-overview.html), and [GoldenGate](https://www.oracle.com/middleware/technologies/goldengate.html) that can be set up on Azure. This guide covers reference architectures for each of these solutions.
+In addition to cloud-native tools and offerings, Oracle provides solutions for high availability that can be set up on Azure:
-Finally, when migrating or creating applications for the cloud, it's important to tweak your application code to add cloud-native patterns such as [retry pattern](/azure/architecture/patterns/retry) and [circuit breaker pattern](/azure/architecture/patterns/circuit-breaker). Additional patterns defined in the [Cloud Design Patterns guide](/azure/architecture/patterns/) could help your application be more resilient.
+- [Oracle Data Guard](https://docs.oracle.com/en/database/oracle/oracle-database/18/sbydb/introduction-to-oracle-data-guard-concepts.html#GUID-5E73667D-4A56-445E-911F-1E99092DD8D7)
+- [Data Guard with FSFO](https://docs.oracle.com/en/database/oracle/oracle-database/12.2/dgbkr/index.html)
+- [Sharding](https://docs.oracle.com/en/database/oracle/oracle-database/12.2/admin/sharding-overview.html)
+- [GoldenGate](https://www.oracle.com/middleware/technologies/goldengate.html)
+
+This guide covers reference architectures for each of these solutions.
+
+When you migrate or create applications for the cloud, we recommend using cloud-native patterns such as [retry pattern](/azure/architecture/patterns/retry) and [circuit breaker pattern](/azure/architecture/patterns/circuit-breaker). For other patterns that could help make your application more resilient, see [Cloud Design Patterns guide](/azure/architecture/patterns/).
### Oracle RAC in the cloud
-Oracle Real Application Cluster (RAC) is a solution by Oracle to help customers achieve high throughputs by having many instances accessing one database storage (Shared-all architecture pattern). While Oracle RAC can also be used for high availability on-premises, Oracle RAC alone cannot be used for high availability in the cloud as it only protects against instance level failures and not against Rack-level or Data center-level failures. For this reason, Oracle recommends using Oracle Data Guard with your database (whether single instance or RAC) for high availability. Customers generally require a high SLA for running their mission critical applications. Oracle RAC is currently not certified or supported by Oracle on Azure. However, Azure offers features such as Availability Zones and planned maintenance windows to help protect against instance-level failures. In addition to this, customers can use technologies such as Oracle Data Guard, Oracle GoldenGate and Oracle Sharding for high performance and resiliency by protecting their databases from rack-level as well as datacenter-level and geo-political failures.
+Oracle Real Application Cluster (RAC) is a solution by Oracle to help customers achieve high throughputs by having many instances accessing one database storage. This pattern is a shared-all architecture. While Oracle RAC can be used for high availability on-premises, Oracle RAC alone can't be used for high availability in the cloud. Oracle RAC only protects against instance level failures and not against rack-level or datacenter-level failures. For this reason, Oracle recommends using Oracle Data Guard with your database, whether single instance or RAC, for high availability.
-When running Oracle Databases across multiple [availability zones](../../../availability-zones/az-overview.md) in conjunction with Oracle Data Guard or GoldenGate, customers are able to get an uptime SLA of 99.99%. In Azure regions where Availability zones are not yet present, customers can use [Availability Sets](../../availability-set-overview.md) and achieve an uptime SLA of 99.95%.
+Customers generally require a high SLA to run mission critical applications. Oracle doesn't currently certify or support Oracle RAC on Azure. However, Azure offers features such as availability zones and planned maintenance windows to help protect against instance-level failures. In addition to these offerings, you can use Oracle Data Guard, Oracle GoldenGate, and Oracle Sharding for high performance and resiliency. These technologies can help protect your databases from rack-level, datacenter-level, and geo-political failures.
->NOTE: You can have a uptime target that is much higher than the uptime SLA provided by Microsoft.
+When you run Oracle Databases on multiple [availability zones](../../../availability-zones/az-overview.md) with Oracle Data Guard or GoldenGate, you can get an uptime SLA of 99.99%. In Azure regions where availability zones aren't yet present, you can use [availability sets](../../availability-set-overview.md) and achieve an uptime SLA of 99.95%.
+
+> [!NOTE]
+> You can have an uptime target that is much higher than the uptime SLA provided by Microsoft.
## Disaster recovery for Oracle databases When hosting your mission-critical applications in the cloud, it's important to design for high availability and disaster recovery.
-For Oracle Database Enterprise Edition, Oracle Data Guard is a useful feature for disaster recovery. You can set up a standby database instance in a [paired Azure region](../../../availability-zones/cross-region-replication-azure.md) and set up Data Guard failover for disaster recovery. For zero data loss, it's recommended that you deploy an Oracle Data Guard Far Sync instance in addition to Active Data Guard.
+For Oracle Database Enterprise Edition, Oracle Data Guard is a useful feature for disaster recovery. You can set up a standby database instance in a [paired Azure region](../../../availability-zones/cross-region-replication-azure.md) and set up Data Guard failover for disaster recovery. For zero data loss, we recommend that you deploy an Oracle Data Guard Far Sync instance in addition to Active Data Guard.
-Consider setting up the Data Guard Far Sync instance in a different availability zone than your Oracle primary database if your application permits the latency (thorough testing is required). Use a **Maximum Availability** mode to set up synchronous transport of your redo files to the Far Sync instance. These files are then transferred asynchronously to the standby database.
+If your application permits the latency, consider setting up the Data Guard Far Sync instance in a different availability zone than your Oracle primary database. Test the configuration thoroughly. Use a *Maximum Availability* mode to set up synchronous transport of your redo files to the Far Sync instance. These files are then transferred asynchronously to the standby database.
-If your application doesn't allow for the performance loss when setting up Far Sync instance in another availability zone in **Maximum Availability** mode (synchronous), you may set up a Far Sync instance in the same availability zone as your primary database. For added availability, consider setting up multiple Far Sync instances close to your primary database and at least one instance close to your standby database (if the role transitions). Read more about Oracle Data Guard Far Sync in this [Oracle Active Data Guard Far Sync whitepaper](https://www.oracle.com/technetwork/database/availability/farsync-2267608.pdf).
+Your application might not allow for the performance loss of setting up a Far Sync instance in another availability zone in *Maximum Availability* mode (synchronous). In that case, you might set up a Far Sync instance in the same availability zone as your primary database. For added availability, consider setting up multiple Far Sync instances close to your primary database and at least one instance close to your standby database, if the role transitions. For more information, see [Oracle Active Data Guard Far Sync](https://www.oracle.com/technetwork/database/availability/farsync-2267608.pdf).
-When using Oracle Standard Edition databases, there are ISV solutions such as DBVisit Standby that allow you to set up high availability and disaster recovery.
+When you use Oracle Standard Edition databases, there are ISV solutions that allow you to set up high availability and disaster recovery, such as DBVisit Standby.
## Reference architectures

### Oracle Data Guard
-Oracle Data Guard ensures high availability, data protection, and disaster recovery for enterprise data. Data Guard maintains standby databases as transactionally consistent copies of the primary database. Depending on the distance between the primary and secondary databases and the application tolerance for latency, you can set up synchronous or asynchronous replication. Then, if the primary database is unavailable because of a planned or an unplanned outage, Data Guard can switch any standby database to the primary role, minimizing downtime.
+Oracle Data Guard ensures high availability, data protection, and disaster recovery for enterprise data. Data Guard maintains standby databases as transactionally consistent copies of the primary database. Depending on the distance between the primary and secondary databases and the application tolerance for latency, you can set up synchronous or asynchronous replication. If the primary database is unavailable because of a planned or an unplanned outage, Data Guard can switch any standby database to the primary role, minimizing downtime.
-When using Oracle Data Guard, you may also open your secondary database for read-only purposes. This configuration is called Active Data Guard. Oracle Database 12c introduced a feature called Data Guard Far Sync Instance. This instance allows you to set up a zero data loss configuration of your Oracle database without having to compromise on performance.
+When using Oracle Data Guard, you might also open your secondary database for read-only purposes. This configuration is called Active Data Guard. Oracle Database 12c introduced a feature called Data Guard Far Sync Instance. This instance allows you to set up a zero data loss configuration of your Oracle database without having to compromise on performance.
> [!NOTE]
-> Active Data Guard requires additional licensing. This license is also required to use the Far Sync feature. Please connect with your Oracle representative to discuss the licensing implications.
+> Active Data Guard requires additional licensing. This license is also required to use the Far Sync feature. Contact your Oracle representative to discuss the licensing implications.
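To verify that a secondary you open for read-only access is behaving as an Active Data Guard standby, you can query the database role and open mode. The following is a minimal sketch using the cx_Oracle Python driver; the host, service name, and credentials are placeholders, not values from this article.

```python
# Minimal sketch: confirm a database is an Active Data Guard (read-only) standby.
# Assumes the cx_Oracle driver is installed; connection details are placeholders.
import cx_Oracle

dsn = cx_Oracle.makedsn("standby-host", 1521, service_name="orclpdb1")

with cx_Oracle.connect(user="system", password="your_password", dsn=dsn) as conn:
    with conn.cursor() as cur:
        cur.execute("SELECT database_role, open_mode FROM v$database")
        role, open_mode = cur.fetchone()
        # An Active Data Guard standby typically reports:
        #   database_role = 'PHYSICAL STANDBY'
        #   open_mode     = 'READ ONLY WITH APPLY'
        print(f"role={role}, open_mode={open_mode}")
```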
+
+#### Oracle Data Guard with Fast-Start Failover
-#### Oracle Data Guard with FSFO
-Oracle Data Guard with Fast-Start Failover (FSFO) can provide additional resiliency by setting up the broker on a separate machine. The Data Guard broker and the secondary database both run the observer and observe the primary database for downtime. This allows for redundancy in your Data Guard observer setup as well.
+Oracle Data Guard with Fast-Start Failover (FSFO) can provide more resiliency by setting up the broker on a separate machine. The Data Guard broker and the secondary database both run the observer and observe the primary database for downtime. This approach allows for redundancy in your Data Guard observer setup as well.
-With Oracle Database version 12.2 and above, it is also possible to configure multiple observers with a single Oracle Data Guard broker configuration. This setup provides additional availability, in case one observer and the secondary database experience downtime. Data Guard Broker is lightweight and can be hosted on a relatively small virtual machine. To learn more about Data Guard Broker and its advantages, visit the [Oracle documentation](https://docs.oracle.com/en/database/oracle/oracle-database/12.2/dgbkr/oracle-data-guard-broker-concepts.html) on this topic.
+With Oracle Database version 12.2 and above, it's also possible to configure multiple observers with a single Oracle Data Guard broker configuration. This setup provides extra availability, in case one observer and the secondary database experience downtime. Data Guard Broker is lightweight and can be hosted on a relatively small virtual machine. For more information about Data Guard Broker and its advantages, see [Oracle Data Guard Broker Concepts](https://docs.oracle.com/en/database/oracle/oracle-database/12.2/dgbkr/oracle-data-guard-broker-concepts.html).
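As an illustration of checking the broker and observer state, the following sketch shells out to the DGMGRL utility from Python. It assumes `dgmgrl` is installed on the machine running the script and that the logon string is a placeholder to replace with your own values; it isn't a configuration taken from this article.

```python
# Minimal sketch: ask the Data Guard broker for the fast-start failover status,
# which lists the target standby and the registered observers.
# Assumes dgmgrl is on PATH; the logon string is a placeholder.
import subprocess

result = subprocess.run(
    ["dgmgrl", "-silent", "sys/your_password@primary_tns", "SHOW FAST_START FAILOVER;"],
    capture_output=True,
    text=True,
    check=True,
)
print(result.stdout)
```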
The following diagram is a recommended architecture for using Oracle Data Guard on Azure with availability zones. This architecture allows you to get a VM uptime SLA of 99.99%.
-![Diagram that shows a recommended architecture for using Oracle Data Guard on Azure with availability zones.](./media/oracle-reference-architecture/oracledb_dg_fsfo_az.png)
-In the preceding diagram, the client system accesses a custom application with Oracle backend via the web. The web frontend is configured in a load balancer. The web frontend makes a call to the appropriate application server to handle the work. The application server queries the primary Oracle database. The Oracle database has been configured using a hyperthreaded [memory optimized virtual machine](../../sizes-memory.md) with [constrained core vCPUs](../../../virtual-machines/constrained-vcpu.md) to save on licensing costs and maximize performance. Multiple premium or ultra disks (Managed Disks) are used for performance and high availability.
+In the preceding diagram, the client system accesses a custom application with an Oracle backend over the web. The web frontend is configured in a load balancer. The web frontend makes a call to the appropriate application server to handle the work. The application server queries the primary Oracle database. The Oracle database has been configured using a hyperthreaded [memory optimized virtual machine](../../sizes-memory.md) with [constrained core vCPUs](../../../virtual-machines/constrained-vcpu.md) to save on licensing costs and maximize performance. Multiple premium or ultra disks (Managed Disks) are used for performance and high availability.
-The Oracle databases are placed in multiple availability zones for high availability. Each zone is made up of one or more data centers equipped with independent power, cooling, and networking. To ensure resiliency, a minimum of three separate zones are set up in all enabled regions. The physical separation of availability zones within a region protects the data from data center failures. Additionally, two FSFO observers are set up across two availability zones to initiate and fail over the database to the secondary when an outage occurs.
+The Oracle databases are placed in multiple availability zones for high availability. Each zone is made up of one or more data centers equipped with independent power, cooling, and networking. To ensure resiliency, a minimum of three separate zones are set up in all enabled regions. The physical separation of availability zones within a region protects the data from data center failures. Additionally, two FSFO observers are set up across two availability zones to initiate and fail over the database to the secondary when an outage occurs.
-You may set up additional observers and/or standby databases in a different availability zone (AZ 1, in this case) than the zone shown in the preceding architecture. Finally, Oracle databases are monitored for uptime and performance by Oracle Enterprise Manager (OEM). OEM also allows you to generate various performance and usage reports.
+You might set up more observers or standby databases in a different availability zone, in this case AZ 1, than the zone shown in the preceding architecture. Finally, Oracle Enterprise Manager (OEM) monitors Oracle databases for uptime and performance. OEM also allows you to generate various performance and usage reports.
-In regions where availability zones aren't supported, you may use availability sets to deploy your Oracle Database in a highly available manner. Availability sets allow you to achieve a VM uptime of 99.95%. The following diagram is a reference architecture of this use:
+In regions where availability zones aren't supported, you might use availability sets to deploy your Oracle Database in a highly available manner. Availability sets allow you to achieve a VM uptime of 99.95%. The following diagram is a reference architecture of this use:
-![Oracle Database using availability sets with Data Guard Broker - FSFO](./media/oracle-reference-architecture/oracledb_dg_fsfo_as.png)
> [!NOTE]
-> * The Oracle Enterprise Manager VM need not be placed in an availability set as there is only one instances of OEM being deployed.
-> * Ultra disks aren't currently supported in an availability set configuration.
+>
+> - Because there is only one instance of OEM being deployed, you don't have to place the Oracle Enterprise Manager VM in an availability set.
+> - Ultra disks aren't currently supported in an availability set configuration.
#### Oracle Data Guard Far Sync
-Oracle Data Guard Far Sync provides zero data loss protection capability for Oracle Databases. This capability allows you to protect against data loss if your database machine fails. Oracle Data Guard Far Sync needs to be installed on a separate VM. Far Sync is a lightweight Oracle instance that only has a control file, password file, spfile, and standby logs. There are no data files or redo log files.
+Oracle Data Guard Far Sync provides zero data loss protection capability for Oracle Databases. This capability allows you to protect against data loss if your database machine fails. Oracle Data Guard Far Sync needs to be installed on a separate VM. Far Sync is a lightweight Oracle instance that only has a control file, password file, spfile, and standby logs. There are no data files or redo log files.
-For zero data loss protection, there must be synchronous communication between your primary database and the Far Sync instance. The Far Sync instance receives redo from the primary in a synchronous manner and forwards it immediately to all the standby databases in an asynchronous manner. This setup also reduces the overhead on the primary database, because it only has to send the redo to the Far Sync instance rather than all the standby databases. If a Far Sync instance fails, Data Guard automatically uses asynchronous transport to the secondary database from the primary database to maintain near-zero data loss protection. For added resiliency, customers may deploy multiple Far Sync instances per each database instance (primary and secondaries).
+For zero data loss protection, there must be synchronous communication between your primary database and the Far Sync instance. The Far Sync instance receives redo from the primary in a synchronous manner and forwards it immediately to all the standby databases in an asynchronous manner. This setup also reduces the overhead on the primary database, because it only has to send the redo to the Far Sync instance rather than all the standby databases. If a Far Sync instance fails, Data Guard automatically uses asynchronous transport to the secondary database from the primary database to maintain near-zero data loss protection. For added resiliency, customers might deploy multiple Far Sync instances per each database instance, including primary and secondaries.
The following diagram is a high availability architecture using Oracle Data Guard Far Sync:
-![Oracle database using availability zones with Data Guard Far Sync & Broker - FSFO](./media/oracle-reference-architecture/oracledb_dg_fs_az.png)
-In the preceding architecture, there is a Far Sync instance deployed in the same availability zone as the database instance to reduce the latency between the two. In cases where the application is latency sensitive, consider deploying your database and Far Sync instance or instances in a [proximity placement group](../../../virtual-machines/linux/proximity-placement-groups.md).
+In the preceding architecture, there's a Far Sync instance deployed in the same availability zone as the database instance to reduce the latency between the two. In cases where the application is latency sensitive, consider deploying your database and Far Sync instance or instances in a [proximity placement group](../../../virtual-machines/linux/proximity-placement-groups.md).
-The following diagram is an architecture utilizing Oracle Data Guard FSFO and Far Sync to achieve high availability and disaster recovery:
+The following diagram is an architecture that uses Oracle Data Guard FSFO and Far Sync to achieve high availability and disaster recovery:
-![Oracle Database using availability zones for disaster recovery with Data Guard Far Sync & Broker - FSFO](./media/oracle-reference-architecture/oracledb_dg_fs_az_dr.png)
### Oracle GoldenGate
-GoldenGate enables the exchange and manipulation of data at the transaction level among multiple, heterogeneous platforms across the enterprise. It moves committed transactions with transaction integrity and minimal overhead on your existing infrastructure. Its modular architecture gives you the flexibility to extract and replicate selected data records, transactional changes, and changes to DDL (data definition language) across a variety of topologies.
+GoldenGate enables the exchange and manipulation of data at the transaction level among multiple, heterogeneous platforms across the enterprise. It moves committed transactions with transaction integrity and minimal overhead on your existing infrastructure. Its modular architecture gives you the flexibility to extract and replicate selected data records, transactional changes, and changes to data definition language (DDL) across various topologies.
-Oracle GoldenGate allows you to configure your database for high availability by providing bidirectional replication. This allows you to set up a **multi-master** or **active-active configuration**. The following diagram is a recommended architecture for Oracle GoldenGate active-active setup on Azure. In the following architecture, the Oracle database has been configured using a hyperthreaded [memory optimized virtual machine](../../sizes-memory.md) with [constrained core vCPUs](../../../virtual-machines/constrained-vcpu.md) to save on licensing costs and maximize performance. Multiple premium or ultra disks (managed disks) are used for performance and availability.
+Oracle GoldenGate allows you to configure your database for high availability by providing bidirectional replication. This approach allows you to set up a *multi-master* or *active-active configuration*. The following diagram is a recommended architecture for Oracle GoldenGate active-active setup on Azure. In the following architecture, the Oracle database has been configured using a hyperthreaded [memory optimized virtual machine](../../sizes-memory.md) with [constrained core vCPUs](../../../virtual-machines/constrained-vcpu.md) to save on licensing costs and maximize performance. The architecture uses multiple premium or ultra disks (managed disks) for performance and availability.
-![Oracle Database using availability zones with Data Guard Broker - FSFO](./media/oracle-reference-architecture/oracledb_gg_az.png)
> [!NOTE]
> A similar architecture can be set up using availability sets in regions where availability zones aren't currently available.
-Oracle GoldenGate has processes such as Extract, Pump, and Replicat that help you asynchronously replicate your data from one Oracle database server to another. These processes allow you to set up a bidirectional replication to ensure high availability of your database if there is availability zone-level downtime. In the preceding diagram, the Extract process runs on the same server as your Oracle database, whereas the Data Pump and Replicat processes run on a separate server in the same availability zone. The Replicat process is used to receive data from the database in the other availability zone and commit the data to the Oracle database in its availability zone. Similarly, the Data Pump process sends data that has been extracted by the Extract process to the Replicat process in the other availability zone.
+Oracle GoldenGate has processes such as *Extract*, *Pump*, and *Replicat* that help you asynchronously replicate your data from one Oracle database server to another. These processes allow you to set up a bidirectional replication to ensure high availability of your database if there's availability zone-level downtime.
-While the preceding architecture diagram shows the Data Pump and Replicat process configured on a separate server, you may set up all the Oracle GoldenGate processes on the same server, based on the capacity and usage of your server. Always consult your AWR report and the metrics in Azure to understand the usage pattern of your server.
+In the preceding diagram, the Extract process runs on the same server as your Oracle database. The Data Pump and Replicat processes run on a separate server in the same availability zone. The Replicat process is used to receive data from the database in the other availability zone and commit the data to the Oracle database in its availability zone. Similarly, the Data Pump process sends data that the Extract process extracts to the Replicat process in the other availability zone.
-When setting up Oracle GoldenGate bidirectional replication in different availability zones or different regions, it's important to ensure that the latency between the different components is acceptable for your application. The latency between availability zones and regions could vary and depends on multiple factors. It's recommended that you set up performance tests between your application tier and your database tier in different availability zones and/or regions to confirm that it meets your application performance requirements.
+While the preceding architecture diagram shows the Data Pump and Replicat processes configured on a separate server, you might set up all the Oracle GoldenGate processes on the same server, based on the capacity and usage of your server. Always consult your AWR report and the metrics in Azure to understand the usage pattern of your server.
-The application tier can be set up in its own subnet and the database tier can be separated into its own subnet. When possible, consider using [Azure Application Gateway](../../../application-gateway/overview.md) to load-balance traffic between your application servers. Azure Application Gateway is a robust web traffic load balancer. It provides cookie-based session affinity that keeps a user session on the same server, thus minimizing the conflicts on the database. Alternatives to Application Gateway are [Azure Load Balancer](../../../load-balancer/load-balancer-overview.md) and [Azure Traffic Manager](../../../traffic-manager/traffic-manager-overview.md).
+When setting up Oracle GoldenGate bidirectional replication in different availability zones or different regions, it's important to ensure that the latency between the different components is acceptable for your application. The latency between availability zones and regions can vary. Latency depends on multiple factors. We recommend that you set up performance tests between your application tier and your database tier in different availability zones or regions. The tests can confirm that the configuration meets your application performance requirements.
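Before running full performance tests, a quick first-order check is to time TCP connections from the application tier to the database listeners in each zone or region. The following sketch is a minimal example; the host names and port are placeholders for your environment.

```python
# Minimal sketch: measure TCP connect latency from the application tier to the
# database listeners. Host names and port below are placeholders.
import socket
import statistics
import time

targets = ["oracle-db-az1.example.com", "oracle-db-az2.example.com"]
port = 1521
samples = 20

for host in targets:
    timings = []
    for _ in range(samples):
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=5):
            pass  # only the connection handshake is timed
        timings.append((time.perf_counter() - start) * 1000)
    print(f"{host}: median {statistics.median(timings):.2f} ms, "
          f"max {max(timings):.2f} ms over {samples} connects")
```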
+
+The application tier can be set up in its own subnet and the database tier can be separated into its own subnet. When possible, consider using [Azure Application Gateway](../../../application-gateway/overview.md) to load-balance traffic between your application servers. Application Gateway is a robust web traffic load balancer. It provides cookie-based session affinity that keeps a user session on the same server, minimizing the conflicts on the database. Alternatives to Application Gateway are [Azure Load Balancer](../../../load-balancer/load-balancer-overview.md) and [Azure Traffic Manager](../../../traffic-manager/traffic-manager-overview.md).
### Oracle Sharding
-Sharding is a data tier pattern that was introduced in Oracle 12.2. It allows you to horizontally partition and scale your data across independent databases. It is a share-nothing architecture where each database is hosted on a dedicated virtual machine, which enables high read and write throughput in addition to resiliency and increased availability. This pattern eliminates single points of failure, provides fault isolation, and enables rolling upgrades without downtime. The downtime of one shard or a data center-level failure does not affect the performance or availability of the other shards in other data centers.
+Sharding is a data tier pattern that was introduced in Oracle 12.2. It allows you to horizontally partition and scale your data across independent databases. It's a share-nothing architecture where each database is hosted on a dedicated virtual machine. This pattern enables high read and write throughput in addition to resiliency and increased availability.
+
+This pattern eliminates single points of failure, provides fault isolation, and enables rolling upgrades without downtime. The downtime of one shard or a data center-level failure doesn't affect the performance or availability of the other shards in other data centers.
-Sharding is suitable for high throughput OLTP applications that can't afford any downtime. All rows with the same sharding key are always guaranteed to be on the same shard, thus increasing performance providing the high consistency. Applications that use sharding must have a well-defined data model and data distribution strategy (consistent hash, range, list, or composite) that primarily accesses data using a sharding key (for example, *customerId* or *accountNum*). Sharding also allows you to store particular sets of data closer to the end customers, thus helping you meet your performance and compliance requirements.
+Sharding is suitable for high throughput OLTP applications that can't afford any downtime. All rows with the same sharding key are always guaranteed to be on the same shard. This fact increases performance, providing high consistency. Applications that use sharding must have a well-defined data model and data distribution strategy, such as consistent hash, range, list, or composite. The strategy primarily accesses data using a sharding key, for example, *customerId* or *accountNum*. Sharding also allows you to store particular sets of data closer to the end customers, thus helping meet your performance and compliance requirements.
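To make the idea of a hash-based data distribution strategy concrete, the following sketch maps sharding keys to shards with a simple hash. This is a conceptual illustration only; Oracle system-managed sharding distributes data by chunks using its own algorithm, and the key values and shard count here are invented.

```python
# Conceptual illustration of hash-based sharding: map a sharding key such as
# customerId to one of N shards. Not how Oracle implements chunk distribution.
import hashlib

NUM_SHARDS = 4

def shard_for_key(sharding_key: str) -> int:
    """Return the shard number a given sharding key maps to."""
    digest = hashlib.sha256(sharding_key.encode("utf-8")).digest()
    return int.from_bytes(digest[:8], "big") % NUM_SHARDS

for customer_id in ["C1001", "C1002", "C1003", "C1004"]:
    print(customer_id, "->", f"shard{shard_for_key(customer_id)}")
```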
-It is recommended that you replicate your shards for high availability and disaster recovery. This setup can be done using Oracle technologies such as Oracle Data Guard or Oracle GoldenGate. A unit of replication can be a shard, a part of a shard, or a group of shards. The availability of a sharded database is not affected by an outage or slowdown of one or more shards. For high availability, the standby shards can be placed in the same availability zone where the primary shards are placed. For disaster recovery, the standby shards can be located in another region. You may also deploy shards in multiple regions to serve traffic in those regions. Read more about configuring high availability and replication of your sharded database in [Oracle Sharding documentation](https://docs.oracle.com/en/database/oracle/oracle-database/18/shard/sharding-high-availability.html).
+We recommend that you replicate your shards for high availability and disaster recovery. This setup can be done using Oracle technologies such as Oracle Data Guard or Oracle GoldenGate. A unit of replication can be a shard, a part of a shard, or a group of shards. An outage or slowdown of one or more shards doesn't affect the availability of a sharded database.
-Oracle Sharding primarily consists of the following components. More information about these components can be found in [Oracle Sharding documentation](https://docs.oracle.com/en/database/oracle/oracle-database/19/shard/sharding-overview.html):
+For high availability, the standby shards can be placed in the same availability zone where the primary shards are placed. For disaster recovery, the standby shards can be located in another region. You might also deploy shards in multiple regions to serve traffic in those regions. To learn more about configuring high availability and replication of your sharded database, see [Shard-Level High Availability](https://docs.oracle.com/en/database/oracle/oracle-database/18/shard/sharding-high-availability.html).
-- **Shard catalog** - Special-purpose Oracle database that is a persistent store for all Shard database configuration data. All configuration changes such as adding or removing shards, mapping of the data, and DDLs in a shard database are initiated on the shard catalog. The shard catalog also contains the master copy of all duplicated tables in an SDB.
+Oracle Sharding primarily consists of the following components. For more information, see [Oracle Sharding Overview](https://docs.oracle.com/en/database/oracle/oracle-database/19/shard/sharding-overview.html):
- The shard catalog uses materialized views to automatically replicate changes to duplicated tables in all shards. The shard catalog database also acts as a query coordinator used to process multi-shard queries and queries that do not specify a sharding key.
+- **Shard catalog**. Special-purpose Oracle database that is a persistent store for all Shard database configuration data. All configuration changes such as adding or removing shards, mapping of the data, and DDLs in a shard database are initiated on the shard catalog. The shard catalog also contains the master copy of all duplicated tables in a sharded database (SDB).
+
+ The shard catalog uses materialized views to automatically replicate changes to duplicated tables in all shards. The shard catalog database also acts as a query coordinator used to process multi-shard queries and queries that don't specify a sharding key.
- Using Oracle Data Guard in conjunction with availability zones or availability sets for shard catalog high availability is a recommended best practice. The availability of the shard catalog has no impact on the availability of the sharded database. A downtime in the shard catalog only affects maintenance operations and multishard queries during the brief period that the Data Guard failover completes. Online transactions continue to be routed and executed by the SDB and are unaffected by a catalog outage.
+ We recommend using Oracle Data Guard with availability zones or availability sets for shard catalog high availability as a best practice. The availability of the shard catalog has no effect on the availability of the sharded database. A downtime in the shard catalog only affects maintenance operations and multishard queries during the brief period that the Data Guard failover completes. The SDB continues to route and run online transactions. A catalog outage doesn't affect them.
-- **Shard directors** - Lightweight services that need to be deployed in each region/availability zone that your shards reside in. Shard Directors are Global Service Managers deployed in the context of Oracle Sharding. For high availability, it is recommended that you deploy at least one shard director in each availability zone that your shards exist in.
+- **Shard directors**. Lightweight services that need to be deployed in each region/availability zone that your shards reside in. Shard Directors are Global Service Managers deployed in the context of Oracle Sharding. For high availability, we recommend that you deploy at least one shard director in each availability zone that your shards exist in.
- When connecting to the database initially, the routing information is set up by a shard director and is cached for subsequent requests, bypassing the shard director. Once the session is established with a shard, all SQL queries and DMLs are supported and executed in the scope of the given shard. This routing is fast and is used for all OLTP workloads that perform intra-shard transactions. It's recommended to use direct routing for all OLTP workloads that require the highest performance and availability. The routing cache automatically refreshes when a shard becomes unavailable or changes occur to the sharding topology.
+  When connecting to the database initially, the shard director sets up the routing information and caches the information for subsequent requests, which bypass the shard director. Once the session is established with a shard, all SQL queries and DMLs are supported and executed in the scope of the given shard. This routing is fast and is used for all OLTP workloads that perform intra-shard transactions. We recommend that you use direct routing for all OLTP workloads that require the highest performance and availability. The routing cache automatically refreshes when a shard becomes unavailable or changes occur to the sharding topology.
- For high-performance, data-dependent routing, Oracle recommends using a connection pool when accessing data in the sharded database. Oracle connection pools, language-specific libraries, and drivers support Oracle Sharding. Refer to [Oracle Sharding documentation](https://docs.oracle.com/en/database/oracle/oracle-database/19/shard/sharding-overview.html#GUID-3D41F762-BE04-486D-8018-C7A210D809F9) for more details.
+ For high-performance, data-dependent routing, Oracle recommends using a connection pool when accessing data in the sharded database. Oracle connection pools, language-specific libraries, and drivers support Oracle Sharding. For more information, see [Oracle Sharding Overview](https://docs.oracle.com/en/database/oracle/oracle-database/19/shard/sharding-overview.html#GUID-3D41F762-BE04-486D-8018-C7A210D809F9).
-- **Global service** - Global service is similar to the regular database service. In addition to all the properties of a database service, a global service has properties for sharded databases such as region affinity between clients and shard and replication lag tolerance. Only one Global service needs to be created to read/write data to/from a sharded database. When using Active Data Guard and setting up read-only replicas of the shards, you can create another gGobal service for read-only workloads. The client can use these Global services to connect to the database.
+- **Global service**. Global service is similar to the regular database service. In addition to all the properties of a database service, a global service has properties for sharded databases. These properties include region affinity between clients and shard and replication lag tolerance. Only one global service needs to be created to read/write data to and from a sharded database. When using Active Data Guard and setting up read-only replicas of the shards, you can create another global service for read-only workloads. The client can use these global services to connect to the database.
-- **Shard databases** - Shard databases are your Oracle databases. Each database is replicated using Oracle Data Guard in a Broker configuration with Fast-Start Failover (FSFO) enabled. You don't need to set up Data Guard failover and replication on each shard. This is automatically configured and deployed when the shared database is created. If a particular shard fails, Oracle Sharing automatically fails over database connections from the primary to the standby.
+- **Shard databases**. Shard databases are your Oracle databases. Each database is replicated using Oracle Data Guard in a Broker configuration with FSFO enabled. You don't need to set up Data Guard failover and replication on each shard. This aspect is automatically configured and deployed when the sharded database is created. If a particular shard fails, Oracle Sharding fails over database connections from the primary to the standby.
-You can deploy and manage Oracle sharded databases with two interfaces: Oracle Enterprise Manager Cloud Control GUI and/or the `GDSCTL` command-line utility. You can even monitor the different shards for availability and performance using Cloud control. The `GDSCTL DEPLOY` command automatically creates the shards and their respective listeners. In addition, this command automatically deploys the replication configuration used for shard-level high availability specified by the administrator.
+You can deploy and manage Oracle sharded databases with two interfaces: Oracle Enterprise Manager Cloud Control GUI and the `GDSCTL` command-line utility. You can even monitor the different shards for availability and performance using Cloud Control. The `GDSCTL DEPLOY` command automatically creates the shards and their respective listeners. In addition, this command automatically deploys the replication configuration used for shard-level high availability specified by the administrator.
There are different ways to shard a database:
-* System-managed sharding - Automatically distributes across shards using partitioning
-* User-defined sharding - Allows you to specify the mapping of the data to the shards, which works well when there are regulatory or data-localization requirements)
-* Composite sharding - A combination of system-managed and user-defined sharding for different _shardspaces_
-* Table subpartitions - Similar to a regular partitioned table.
+- System-managed sharding: Automatically distributes across shards using partitioning
+- User-defined sharding: Allows you to specify the mapping of the data to the shards, which works well when there are regulatory or data-localization requirements
+- Composite sharding: A combination of system-managed and user-defined sharding for different _shardspaces_
+- Table subpartitions: Similar to a regular partitioned table
-Read more about the different [sharding methods](https://docs.oracle.com/en/database/oracle/oracle-database/18/shard/sharding-methods.html) in Oracle's documentation.
+For more information, see [Sharding Methods](https://docs.oracle.com/en/database/oracle/oracle-database/18/shard/sharding-methods.html).
-While a sharded database may look like a single database to applications and developers, when migrating from a non-sharded database onto a sharded database, careful planning is required to determine which tables will be duplicated versus sharded.
+A sharded database looks like a single database to applications and developers. When you migrate to a sharded database, plan carefully to understand which tables are duplicated versus sharded.
-Duplicated tables are stored on all shards, whereas sharded tables are distributed across different shards. The recommendation is to duplicate small and dimensional tables and distribute/shard the fact tables. Data can be loaded into your sharded database using either the shard catalog as the central coordinator or by running Data Pump on each shard. Read more about [migrating data to a sharded database](https://docs.oracle.com/en/database/oracle/oracle-database/18/shard/sharding-loading-data.html) in Oracle's documentation.
+Duplicated tables are stored on all shards, whereas sharded tables are distributed across different shards. We recommend that you duplicate small and dimensional tables and distribute/shard the fact tables. Data can be loaded into your sharded database using either the shard catalog as the central coordinator or by running Data Pump on each shard. For more information, see [Migrating Data to a Sharded Database](https://docs.oracle.com/en/database/oracle/oracle-database/18/shard/sharding-loading-data.html).
#### Oracle Sharding with Data Guard

Oracle Data Guard can be used for sharding with system-managed, user-defined, and composite sharding methods.
-The following diagram is a reference architecture for Oracle Sharding with Oracle Data Guard used for high availability of each shard. The architecture diagram shows a _composite sharding method_. The architecture diagram will likely differ for applications with different requirements for data locality, load balancing, high availability, disaster recovery, etc. and may use different method for sharding. Oracle Sharding allows you to meet these requirements and scale horizontally and efficiently by providing these options. A similar architecture can even be deployed using Oracle GoldenGate.
+The following diagram is a reference architecture for Oracle Sharding with Oracle Data Guard used for high availability of each shard. The architecture diagram shows a _composite sharding method_. The architecture diagram likely differs for applications with different requirements for data locality, load balancing, high availability, and disaster recovery. Applications might use a different method for sharding. Oracle Sharding allows you to meet these requirements and scale horizontally and efficiently by providing these options. A similar architecture can even be deployed using Oracle GoldenGate.
+
-![Oracle Database Sharding using availability zones with Data Guard Broker - FSFO](./media/oracle-reference-architecture/oracledb_dg_sh_az.png)
+System-managed sharding is the easiest to configure and manage. User-defined sharding or composite sharding is well suited for scenarios where your data and application are geo-distributed or in scenarios where you need to have control over the replication of each shard.
-While system-managed sharding is the easiest to configure and manage, user-defined sharding or composite sharding is well suited for scenarios where your data and application are geo-distributed or in scenarios where you need to have control over the replication of each shard.
+In the preceding architecture, composite sharding is used to geodistribute the data and horizontally scale out your application tiers. Composite sharding is a combination of system-managed and user-defined sharding and thus provides the benefit of both methods. In the preceding scenario, data is first sharded across multiple shardspaces separated by region. Then, the data is further partitioned by using consistent hash across multiple shards in the shardspace.
-In the preceding architecture, composite sharding is used to geo-distribute the data and horizontally scale-out your application tiers. Composite sharding is a combination of system-managed and user-defined sharding and thus provides the benefit of both methods. In the preceding scenario, data is first sharded across multiple shardspaces separated by region. Then, the data is further partitioned by consistent hash across multiple shards in the shardspace. Each shardspace contains multiple shardgroups. Each shardgroup has multiple shards and is a "unit" of replication, in this case. Each shardgroup contains all the data in the shardspace. Shardgroups A1 and B1 are primary shardgroups, while shardgroups A2 and B2 are standbys. You may choose to have individual shards be the unit of replication, rather than a shardgroup.
+Each shardspace contains multiple shardgroups. Each shardgroup has multiple shards and is a unit of replication. Each shardgroup contains all the data in the shardspace. Shardgroups A1 and B1 are primary shardgroups, while shardgroups A2 and B2 are standbys. You might choose to have individual shards be the unit of replication, rather than a shardgroup.
-In the preceding architecture, a GSM/shard director is deployed in every availability zone for high availability. The recommendation is to deploy at least one GSM/shard director per data center/region. Additionally, an instance of the application server is deployed in every availability zone that contains a shardgroup. This setup allows the application to keep the latency between the application server and the database/shardgroup low. If a database fails, the application server in the same zone as the standby database can handle requests once the database role transition happens. Azure Application Gateway and the shard director keep track of the request and response latency and route requests accordingly.
+In the preceding architecture, a Global Service Manager (GSM)/shard director is deployed in every availability zone for high availability. We recommend that you deploy at least one GSM/shard director per data center/region. Additionally, an instance of the application server is deployed in every availability zone that contains a shardgroup. This setup allows the application to keep the latency between the application server and the database/shardgroup low. If a database fails, the application server in the same zone as the standby database can handle requests once the database role transition happens. Azure Application Gateway and the shard director keep track of the request and response latency and route requests accordingly.
-From an application standpoint, the client system makes a request to Azure Application Gateway (or other load-balancing technologies in Azure) which redirects the request to the region closest to the client. Azure Application Gateway also supports sticky sessions, so any requests coming from the same client are routed to the same application server. The application server uses connection pooling in data access drivers. This feature is available in drivers such as JDBC, ODP.NET, OCI, etc. The drivers can recognize sharding keys specified as part of the request. [Oracle Universal Connection Pool (UCP)](https://docs.oracle.com/en/database/oracle/oracle-database/12.2/jjucp/ucp-database-sharding-overview.html) for JDBC clients can enable non-Oracle application clients such as Apache Tomcat and IIS to work with Oracle Sharding.
+From an application standpoint, the client system makes a request to Azure Application Gateway or other load-balancing technologies in Azure, which redirects the request to the region closest to the client. Azure Application Gateway also supports sticky sessions, so any requests coming from the same client are routed to the same application server. The application server uses connection pooling in data access drivers. This feature is available in drivers such as JDBC, ODP.NET, and OCI. The drivers can recognize sharding keys specified as part of the request. Oracle Universal Connection Pool (UCP) for JDBC clients can enable non-Oracle application clients such as Apache Tomcat and IIS to work with Oracle Sharding. For more information, see [Overview of UCP Shared Pool for Database Sharding](https://docs.oracle.com/en/database/oracle/oracle-database/12.2/jjucp/ucp-database-sharding-overview.html).
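For application clients that aren't Java-based, the Oracle drivers expose the same sharding-key concept through their connection pools. The following sketch shows one possible shape of this using the cx_Oracle Python driver; the credentials, global service DSN, table, and key value are placeholders, and sharding support requires the Oracle Client libraries.

```python
# Minimal sketch: acquire a pooled connection routed by a sharding key.
# Assumes cx_Oracle plus Oracle Client libraries; all names are placeholders.
import cx_Oracle

pool = cx_Oracle.SessionPool(
    user="app_user",
    password="your_password",
    dsn="shard-director-host:1522/oltp_rw_svc.cust_sdb.oradbcloud",
    min=2,
    max=10,
    increment=1,
)

# Passing the sharding key lets the driver route the session directly to the
# shard that owns the key instead of going back to the shard director.
conn = pool.acquire(shardingkey=["C1001"])
with conn.cursor() as cur:
    cur.execute(
        "SELECT order_id, status FROM orders WHERE customer_id = :cid",
        cid="C1001",
    )
    for row in cur:
        print(row)
pool.release(conn)
```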
During the initial request, the application server connects to the shard director in its region to get routing information for the shard that the request needs to be routed to. Based on the sharding key passed, the director routes the application server to the respective shard. The application server caches this information by building a map, and for subsequent requests, bypasses the shard director and routes requests straight to the shard.

#### Oracle Sharding with GoldenGate
-The following diagram is a reference architecture for Oracle Sharding with Oracle GoldenGate for in-region high availability of each shard. As opposed to the preceding architecture, this architecture only portrays high availability within a single Azure region (multiple availability zones). One could deploy a multi-region high availability sharded database (similar to the preceding example) using Oracle GoldenGate.
+The following diagram is a reference architecture for Oracle Sharding with Oracle GoldenGate for in-region high availability of each shard. As opposed to the preceding architecture, this architecture only portrays high availability within a single Azure region, with multiple availability zones. You can deploy a multi-region high availability sharded database, similar to the preceding example, by using Oracle GoldenGate.
-![Oracle Database Sharding using availability zones with GoldenGate](./media/oracle-reference-architecture/oracledb_gg_sh_az.png)
-The preceding reference architecture uses the _system-managed_ sharding method to shard the data. Since Oracle GoldenGate replication is done at a chunk level, half the data replicated to one shard can be replicated to another shard. The other half can be replicated to a different shard.
+The preceding reference architecture uses the _system-managed_ sharding method to shard the data. Since Oracle GoldenGate replication is done at a chunk level, half the data replicated to one shard can be replicated to another shard. The other half can be replicated to a different shard.
-The way the data gets replicated depends on the replication factor. With a replication factor of 2, you will have two copies of each chunk of data across your three shards in the shardgroup. Similarly, with a replication factor of 3 and three shards in your shardgroup, all the data in each shard will be replicated to every other shard in the shardgroup. Each shard in the shardgroup can have a different replication factor. This setup helps you define your high availability and disaster recovery design efficiently within a shardgroup and across multiple shardgroups.
+The way the data gets replicated depends on the replication factor. With a replication factor of two, you have two copies of each chunk of data across your three shards in the shardgroup. Similarly, with a replication factor of three and three shards in your shardgroup, all the data in each shard is replicated to every other shard in the shardgroup. Each shard in the shardgroup can have a different replication factor. This setup helps you define your high availability and disaster recovery design efficiently within a shardgroup and across multiple shardgroups.
-In the preceding architecture, shardgroup A and shardgroup B both contain the same data but reside in different availability zones. If both shardgroup A and shardgroup B have the same replication factor of 3, each row/chunk of your sharded table will be replicated 6 times across the two shardgroups. If shardgroup A has a replication factor of 3 and shardgroup B has a replication factor of 2, each row/chunk will be replicated 5 times across the two shardgroups.
+In the preceding architecture, shardgroup A and shardgroup B both contain the same data but reside in different availability zones. If both shardgroup A and shardgroup B have the same replication factor of three, each row/chunk of your sharded table is replicated six times across the two shardgroups. If shardgroup A has a replication factor of three and shardgroup B has a replication factor of two, each row/chunk is replicated five times across the two shardgroups.
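Put another way, the number of copies of each chunk is the sum of the replication factors of the shardgroups that hold that chunk, as this small sketch illustrates.

```python
# Copies of each chunk across shardgroups = sum of their replication factors.
# Values mirror the examples in the text above.
def total_copies(replication_factors):
    return sum(replication_factors)

print(total_copies([3, 3]))  # shardgroups A and B both at 3 -> 6 copies
print(total_copies([3, 2]))  # shardgroup A at 3, B at 2 -> 5 copies
```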
-This setup prevents data loss if an instance-level or availability zone-level failure occurs. The application layer is able to read from and write to each shard. To minimize conflicts, Oracle Sharding designates a "master chunk" for each range of hash values. This feature ensures that writes requests for a particular chunk are directed to the corresponding chunk. In addition, Oracle GoldenGate provides automatic conflict detection and resolution to handle any conflicts that may arise. For more information and limitations of implementing GoldenGate with Oracle Sharding, see Oracle's documentation on using [Oracle GoldenGate with a sharded database](https://docs.oracle.com/en/database/oracle/oracle-database/18/shard/sharding-high-availability.html#GUID-4FC0AC46-0B8B-4670-BBE4-052228492C72).
+This setup prevents data loss if an instance-level or availability zone-level failure occurs. The application layer is able to read from and write to each shard. To minimize conflicts, Oracle Sharding designates a *master chunk* for each range of hash values. This feature ensures that write requests for a particular chunk are directed to the corresponding chunk. In addition, Oracle GoldenGate provides automatic conflict detection and resolution to handle any conflicts that might arise. For more information and limitations of implementing GoldenGate with Oracle Sharding, see [Using Oracle GoldenGate with a Sharded Database](https://docs.oracle.com/en/database/oracle/oracle-database/18/shard/sharding-high-availability.html#GUID-4FC0AC46-0B8B-4670-BBE4-052228492C72).
-In the preceding architecture, a GSM/shard director is deployed in every availability zone for high availability. The recommendation is to deploy at least one GSM/shard director per data center or region. Additionally, an instance of the application server is deployed in every availability zone that contains a shardgroup. This setup allows the application to keep the latency between the application server and the database/shardgroup low. If a database fails, the application server in the same zone as the standby database can handle requests once the database role transitions. Azure Application Gateway and the shard director keep track of the request and response latency and route requests accordingly.
+In the preceding architecture, a GSM/shard director is deployed in every availability zone for high availability. We recommend that you deploy at least one GSM/shard director per data center or region. An instance of the application server is deployed in every availability zone that contains a shardgroup. This setup allows the application to keep the latency between the application server and the database/shardgroup low. If a database fails, the application server in the same zone as the standby database can handle requests once the database role transitions. Azure Application Gateway and the shard director keep track of the request and response latency and route requests accordingly.
-From an application standpoint, the client system makes a request to Azure Application Gateway (or other load-balancing technologies in Azure) which redirects the request to the region closest to the client. Azure Application Gateway also supports sticky sessions, so any requests coming from the same client are routed to the same application server. The application server uses connection pooling in data access drivers. This feature is available in drivers such as JDBC, ODP.NET, OCI, etc. The drivers can recognize sharding keys specified as part of the request. [Oracle Universal Connection Pool (UCP)](https://docs.oracle.com/en/database/oracle/oracle-database/12.2/jjucp/ucp-database-sharding-overview.html) for JDBC clients can enable non-Oracle application clients such as Apache Tomcat and IIS to work with Oracle Sharding.
+From an application standpoint, the client system makes a request to Azure Application Gateway or other load-balancing technologies in Azure, which redirects the request to the region closest to the client. Azure Application Gateway also supports sticky sessions, so any requests coming from the same client are routed to the same application server. The application server uses connection pooling in data access drivers. This feature is available in drivers such as JDBC, ODP.NET, and OCI. The drivers can recognize sharding keys specified as part of the request. [Oracle Universal Connection Pool (UCP)](https://docs.oracle.com/en/database/oracle/oracle-database/12.2/jjucp/ucp-database-sharding-overview.html) for JDBC clients can enable non-Oracle application clients such as Apache Tomcat and IIS to work with Oracle Sharding.
During the initial request, the application server connects to the shard director in its region to get routing information for the shard that the request needs to be routed to. Based on the sharding key passed, the director routes the application server to the respective shard. The application server caches this information by building a map, and for subsequent requests, bypasses the shard director and routes requests straight to the shard.

## Patching and maintenance
-When deploying your Oracle workloads to Azure, Microsoft takes care of all host OS-level patching. Any planned OS-level maintenance is communicated to customers in advance to allow the customer for this planned maintenance. Two servers from two different Availability Zones are never patched simultaneously. See [Manage the availability of virtual machines](../../availability.md) for more details on VM maintenance and patching.
+When you deploy your Oracle workloads to Azure, Microsoft takes care of all host operating system level patching. Microsoft communicates any planned operating system level maintenance to customers in advance. Two servers from two different availability zones are never patched simultaneously. For more information on VM maintenance and patching, see [Availability options for Azure Virtual Machines](../../availability.md).
-Patching your virtual machine operating system can be automated using [Azure Automation Update Management](../../../automation/update-management/overview.md). Patching and maintaining your Oracle database can be automated and scheduled using [Azure Pipelines](/azure/devops/pipelines/get-started/what-is-azure-pipelines) or [Azure Automation Update Management](../../../automation/update-management/overview.md) to minimize downtime. See [Continuous Delivery and Blue/Green Deployments](/devops/deliver/what-is-continuous-delivery) to understand how it can be used in the context of your Oracle databases.
+Patching your virtual machine operating system can be automated using [Azure Automation Update Management](../../../automation/update-management/overview.md). Patching and maintaining your Oracle database can be automated and scheduled using [Azure Pipelines](/azure/devops/pipelines/get-started/what-is-azure-pipelines) or [Azure Automation Update Management](../../../automation/update-management/overview.md) to minimize downtime. For more information about continuous delivery and blue/green deployments, see [Progressive exposure techniques](/devops/deliver/what-is-continuous-delivery#progressive-exposure-techniques).
## Architecture and design considerations

- Consider using hyperthreaded [memory optimized virtual machine](../../sizes-memory.md) with [constrained core vCPUs](../../../virtual-machines/constrained-vcpu.md) for your Oracle Database VM to save on licensing costs and maximize performance. Use multiple premium or ultra disks (managed disks) for performance and availability.
-- When using managed disks, the disk/device name may change on reboots. It's recommended that you use the device UUID instead of the name to ensure your mounts persist across reboots. For more information, see [Configure software RAID on a Linux VM](/previous-versions/azure/virtual-machines/linux/configure-raid#add-the-new-file-system-to-etcfstab).
+- When you use managed disks, the disk/device name might change on restart. We recommend that you use the device UUID instead of the name to ensure your mounts persist across restarts; see the sketch after this list. For more information, see [Add the new file system to /etc/fstab](/previous-versions/azure/virtual-machines/linux/configure-raid#add-the-new-file-system-to-etcfstab).
- Use availability zones to achieve high availability in-region.-- Consider using ultra disks (when available) or premium disks for your Oracle database.
+- Consider using ultra disks, when available, or premium disks for your Oracle database.
- Consider setting up a standby Oracle database in another Azure region using Oracle Data Guard. - Consider using [proximity placement groups](../../co-location.md#proximity-placement-groups) to reduce the latency between your application and database tier. - Set up [Oracle Enterprise Manager](https://docs.oracle.com/en/enterprise-manager/) for management, monitoring, and logging.-- Consider using Oracle Automatic Storage Management (ASM) for streamlined storage management for your database.
+- Consider using Oracle Automatic Storage Management for streamlined storage management for your database.
- Use [Azure Pipelines](/azure/devops/pipelines/get-started/what-is-azure-pipelines) to manage patching and updates to your database without any downtime.-- Tweak your application code to add cloud-native patterns such as [retry pattern](/azure/architecture/patterns/retry), [circuit breaker pattern](/azure/architecture/patterns/circuit-breaker), and other patterns defined in the [Cloud Design Patterns guide](/azure/architecture/patterns/) that may help your application be more resilient.
+- Tweak your application code to add cloud-native patterns that might help your application be more resilient. Consider patterns such as [retry pattern](/azure/architecture/patterns/retry), [circuit breaker pattern](/azure/architecture/patterns/circuit-breaker), and others defined in the [Cloud Design Patterns guide](/azure/architecture/patterns/).
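As a minimal sketch of the UUID guidance in the list above, the following /etc/fstab entry mounts by UUID rather than by device name; the UUID, mount point, and file system type are placeholders, and you can find the real UUID of a device with the `blkid` command.

```config
# Hypothetical /etc/fstab entry: mounting by UUID keeps the mount valid even if the device name changes after a restart
UUID=aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee  /u01/oradata  xfs  defaults,nofail  0  2
```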
## Next steps
Review the following Oracle reference articles that apply to your scenario.
- [Introduction to Oracle Data Guard](https://docs.oracle.com/en/database/oracle/oracle-database/18/sbydb/introduction-to-oracle-data-guard-concepts.html#GUID-5E73667D-4A56-445E-911F-1E99092DD8D7) - [Oracle Data Guard Broker Concepts](https://docs.oracle.com/en/database/oracle/oracle-database/12.2/dgbkr/oracle-data-guard-broker-concepts.html) - [Configuring Oracle GoldenGate for Active-Active High Availability](https://docs.oracle.com/goldengate/1212/gg-winux/GWUAD/wu_bidirectional.htm#GWUAD282)-- [Overview of Oracle Sharding](https://docs.oracle.com/en/database/oracle/oracle-database/19/shard/sharding-overview.html)
+- [Oracle Sharding Overview](https://docs.oracle.com/en/database/oracle/oracle-database/19/shard/sharding-overview.html)
- [Oracle Active Data Guard Far Sync Zero Data Loss at Any Distance](https://www.oracle.com/technetwork/database/availability/farsync-2267608.pdf)
virtual-machines Oracle Vm Solutions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/oracle/oracle-vm-solutions.md
Previously updated : 05/12/2020 Last updated : 04/11/2023 # Oracle VM images and their deployment on Microsoft Azure
-**Applies to:** :heavy_check_mark: Linux VMs
+**Applies to:** :heavy_check_mark: Linux VMs
-This article covers information about Oracle solutions based on virtual machine images published by Oracle in the Azure Marketplace. If you are interested in cross-cloud application solutions with Oracle Cloud Infrastructure, see [Oracle application solutions integrating Microsoft Azure and Oracle Cloud Infrastructure](oracle-oci-overview.md).
+This article covers information about Oracle solutions based on virtual machine (VM) images published by Oracle in the Azure Marketplace. If you're interested in cross-cloud application solutions with Oracle Cloud Infrastructure, see [Oracle application solutions integrating Microsoft Azure and Oracle Cloud Infrastructure](oracle-oci-overview.md).
To get a list of currently available images, run the following command: ```azurecli-interactive
-az vm image list --publisher oracle -o table --all
+az vm image list --publisher oracle --output table --all
```
-As of April 2023 the following images are available:
+As of April 2023, the following images are available:
```output Architecture Offer Publisher Sku Urn Version
x64 weblogic-141100-jdk8-rhel76 Oracle owls-141100-jdk8-rhel
x64 weblogic-141100-jdk8-rhel76 Oracle owls-141100-jdk8-rhel76 Oracle:weblogic-141100-jdk8-rhel76:owls-141100-jdk8-rhel76:1.1.3 1.1.3 ```
-These images are considered "Bring Your Own License" and as such you will only be charged for compute, storage, and networking costs incurred by running a VM. It is assumed you are properly licensed to use Oracle software and that you have a current support agreement in place with Oracle. Oracle has guaranteed license mobility from on-premises to Azure. See the published [Oracle and Microsoft](https://www.oracle.com/technetwork/topics/cloud/faq-1963009.html) note for details on license mobility.
+These images are bring-your-own-license. You're charged only for compute, storage, and networking costs incurred by running a VM. You must be properly licensed to use Oracle software and have a current support agreement in place with Oracle. Oracle has guaranteed license mobility from on-premises to Azure. For more information about license mobility, see [Oracle and Microsoft Strategic Partnership FAQ](https://www.oracle.com/technetwork/topics/cloud/faq-1963009.html).
-Individuals can also choose to base their solutions on a custom image they create from scratch in Azure or upload a custom image from their on premises environment.
+You can also choose to base your solutions on a custom image that you create from scratch in Azure or upload a custom image from your on-premises environment.
## Oracle database VM images
-Oracle supports running Oracle Database 12.1 and higher Standard and Enterprise editions in Azure on virtual machine images based on Oracle Linux. For the best performance for production workloads of Oracle Database on Azure, be sure to properly size the VM image and use Premium SSD or Ultra SSD Managed Disks. For instructions on how to quickly get an Oracle Database up and running in Azure using the Oracle published VM image, [try the Oracle Database Quickstart walkthrough](oracle-database-quick-create.md).
+Oracle supports running Oracle Database 12.1 and higher Standard and Enterprise editions in Azure on VM images based on Oracle Linux. For the best performance for production workloads of Oracle Database on Azure, be sure to properly size the VM image and use Premium SSD or Ultra SSD Managed Disks. For instructions on how to quickly get an Oracle Database up and running in Azure using the Oracle published VM image, see [Create an Oracle Database in an Azure VM](oracle-database-quick-create.md).
### Attached disk configuration options
-Attached disks rely on the Azure Blob storage service. Each standard disk is capable of a theoretical maximum of approximately 500 input/output operations per second (IOPS). Our premium disk offering is preferred for high-performance database workloads and can achieve up to 5000 IOps per disk. You can use a single disk if that meets your performance needs. However, you can improve the effective IOPS performance if you use multiple attached disks, spread database data across them, and then use Oracle Automatic Storage Management (ASM). See [Oracle Automatic Storage overview](https://www.oracle.com/technetwork/database/index-100339.html) for more Oracle ASM specific information. For an example of how to install and configure Oracle ASM on a Linux Azure VM, see the [Installing and Configuring Oracle Automated Storage Management](configure-oracle-asm.md) tutorial.
+Attached disks rely on the Azure Blob storage service. Each standard disk is capable of a theoretical maximum of approximately 500 input/output operations per second (IOPS). Our premium disk offering is preferred for high-performance database workloads and can achieve up to 5000 IOPS per disk.
+
+You can use a single disk if that meets your performance needs. However, you can improve the effective IOPS performance if you use multiple attached disks, spread database data across them, and then use Oracle Automatic Storage Management (ASM). For more information, see [The Foundation for Oracle Storage Management](https://www.oracle.com/technetwork/database/index-100339.html). For an example of how to install and configure Oracle ASM on a Linux Azure VM, see [Set up Oracle ASM on an Azure Linux virtual machine](configure-oracle-asm.md).
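As a rough sketch of this layout, the following Azure CLI commands create two premium managed disks and attach them to an existing database VM so that Oracle ASM can stripe data across them. The resource group, VM name, disk names, and sizes are placeholder values, not values taken from this article.

```azurecli
# Create two premium SSD managed disks (names and sizes are examples)
az disk create --resource-group myResourceGroup --name oradata-disk-01 --size-gb 1024 --sku Premium_LRS
az disk create --resource-group myResourceGroup --name oradata-disk-02 --size-gb 1024 --sku Premium_LRS

# Attach both disks to the existing Oracle Database VM; add them to an ASM disk group inside the guest afterward
az vm disk attach --resource-group myResourceGroup --vm-name myOracleVm --name oradata-disk-01
az vm disk attach --resource-group myResourceGroup --vm-name myOracleVm --name oradata-disk-02
```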
### Shared storage configuration options
-Azure NetApp Files was designed to meet the core requirements of running high-performance workloads like databases in the cloud, and provides;
+Azure NetApp Files was designed to run high-performance workloads like databases in the cloud. The service provides the following advantages:
- Azure native shared NFS storage service for running Oracle workloads either through VM native NFS client, or Oracle dNFS - Scalable performance tiers that reflect the real-world range of IOPS demands - Low latency-- High availability, high durability and manageability at scale, typically demanded by mission critical enterprise workloads (like SAP and Oracle)
+- High availability, high durability, and manageability at scale, typically demanded by mission critical enterprise workloads, like SAP and Oracle
- Fast and efficient backup and recovery, to achieve the most aggressive RTO and RPO SLAs
-These capabilities are possible because Azure NetApp Files is based on NetApp® ONTAP® all-flash systems running within Azure data center environment – as an Azure Native service. The result is an ideal database storage technology that can be provisioned and consumed just like other Azure storage options. See [Azure NetApp Files documentation](../../../azure-netapp-files/index.yml) for more information on how to deploy and access Azure NetApp Files NFS volumes. See [Oracle on Azure Deployment Best Practice Guide Using Azure NetApp Files](https://www.netapp.com/us/media/tr-4780.pdf) for best practice recommendations for operating an Oracle database on Azure NetApp Files.
+These capabilities are possible because Azure NetApp Files is based on NetApp® ONTAP® all-flash systems that run in the Azure data center environment as an Azure native service. The result is an ideal database storage technology that can be provisioned and consumed just like other Azure storage options. For more information on how to deploy and access Azure NetApp Files NFS volumes, see [Azure NetApp Files](../../../azure-netapp-files/index.yml). For best practice recommendations for operating an Oracle database on Azure NetApp Files, see [Oracle Databases on Microsoft Azure](https://www.netapp.com/us/media/tr-4780.pdf).
+
+## Licensing Oracle Database and software on Azure
-## Licensing Oracle Database & software on Azure
+Microsoft Azure is an authorized cloud environment for running Oracle Database. The Oracle Core Factor table isn't applicable when licensing Oracle databases in the cloud. Instead, when using VMs with Hyper-Threading Technology enabled for Enterprise Edition databases, count two vCPUs as equivalent to one Oracle Processor license, as stated in the policy document. The policy details can be found at [Licensing Oracle Software in the Cloud Computing Environment](http://www.oracle.com/us/corporate/pricing/cloud-licensing-070579.pdf).
-Microsoft Azure is an authorized cloud environment for running Oracle Database. The Oracle Core Factor table is not applicable when licensing Oracle databases in the cloud. Instead, when using VMs with Hyper-Threading Technology enabled for Enterprise Edition databases, count two vCPUs as equivalent to one Oracle Processor license if hyperthreading is enabled (as stated in the policy document). The policy details can be found at [Licensing Oracle Software in the Cloud Computing Environment](http://www.oracle.com/us/corporate/pricing/cloud-licensing-070579.pdf).
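For example, under this counting rule a hyperthreaded VM with eight vCPUs would require four Oracle Processor licenses for an Enterprise Edition database. Confirm the details against the current version of the Oracle policy document before sizing your deployment.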
-Oracle databases generally require higher memory and IO. For this reason, [Memory Optimized VMs](../../sizes-memory.md) are recommended for these workloads. To optimize your workloads further, [Constrained Core vCPUs](../../constrained-vcpu.md) are recommended for Oracle Database workloads that require high memory, storage, and I/O bandwidth, but not a high core count.
+Oracle databases generally require higher memory and I/O. For this reason, we recommend [Memory Optimized VMs](../../sizes-memory.md) for these workloads. To optimize your workloads further, we recommend [Constrained Core vCPUs](../../constrained-vcpu.md) for Oracle Database workloads that require high memory, storage, and I/O bandwidth, but not a high core count.
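As an illustrative sketch only, the following command creates a database VM with a constrained-core size. The size **Standard_E8-4ds_v5** (four active vCPUs with the memory and I/O of the eight-vCPU parent size), the resource names, and the image placeholder are assumptions; check the constrained vCPU sizes article for the sizes currently offered.

```azurecli
# Create a VM that uses a constrained-core, memory optimized size (example size shown)
az vm create \
  --resource-group myResourceGroup \
  --name myOracleVm \
  --image <Oracle Database image URN> \
  --size Standard_E8-4ds_v5 \
  --admin-username azureuser \
  --generate-ssh-keys
```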
-When migrating Oracle software and workloads from on-premises to Microsoft Azure, Oracle provides license mobility as stated in the [Oracle on Azure FAQ](https://www.oracle.com/cloud/technologies/oracle-azure-faq.html)
+When you migrate Oracle software and workloads from on-premises to Microsoft Azure, Oracle provides license mobility as stated in [Oracle and Microsoft Strategic Partnership FAQ](https://www.oracle.com/cloud/technologies/oracle-azure-faq.html).
## High availability and disaster recovery considerations
-When using Oracle databases in Azure, you are responsible for implementing a high availability and disaster recovery solution to avoid any downtime.
+When using Oracle databases in Azure, you're responsible for implementing a high availability and disaster recovery solution to avoid any downtime.
-High availability and disaster recovery for Oracle Database Enterprise Edition (without relying on Oracle RAC) can be achieved on Azure using [Data Guard, Active Data Guard](https://www.oracle.com/database/technologies/high-availability/dataguard.html), or [Oracle GoldenGate](https://www.oracle.com/technetwork/middleware/goldengate), with two databases on two separate virtual machines. Both virtual machines should be in the same [virtual network](../../../virtual-network/index.yml) to ensure they can access each other over the private persistent IP address. Additionally, we recommend placing the virtual machines in the same availability set to allow Azure to place them into separate fault domains and upgrade domains. Should you want to have geo-redundancy, set up the two databases to replicate between two different regions and connect the two instances with a VPN Gateway.
+You can implement high availability and disaster recovery for Oracle Database Enterprise Edition by using [Data Guard, Active Data Guard](https://www.oracle.com/database/technologies/high-availability/dataguard.html), or [Oracle GoldenGate](https://www.oracle.com/technetwork/middleware/goldengate). The approach requires two databases on two separate VMs, which should be in the same [virtual network](../../../virtual-network/index.yml) to ensure they can access each other over the private persistent IP address.
-The tutorial [Implement Oracle Data Guard on Azure](configure-oracle-dataguard.md) walks you through the basic setup procedure on Azure.
+We recommend placing the VMs in the same availability set to allow Azure to place them into separate fault domains and upgrade domains. If you want to have geo-redundancy, set up the two databases to replicate between two different regions and connect the two instances with a VPN Gateway. To walk through the basic setup procedure on Azure, see [Implement Oracle Data Guard on an Azure Linux virtual machine](configure-oracle-dataguard.md).
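The following Azure CLI sketch shows one way to apply that placement guidance by creating an availability set and placing two database VMs in it. The names, virtual network, subnet, and image placeholder are illustrative assumptions.

```azurecli
# Create an availability set so Azure spreads the VMs across fault domains and update domains
az vm availability-set create --resource-group myResourceGroup --name oracle-av-set

# Create the primary and standby database VMs in the same availability set and virtual network
az vm create --resource-group myResourceGroup --name oracle-primary --availability-set oracle-av-set \
  --vnet-name myVnet --subnet default --image <Oracle Database image URN> --admin-username azureuser --generate-ssh-keys
az vm create --resource-group myResourceGroup --name oracle-standby --availability-set oracle-av-set \
  --vnet-name myVnet --subnet default --image <Oracle Database image URN> --admin-username azureuser --generate-ssh-keys
```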
-With Oracle Data Guard, high availability can be achieved with a primary database in one virtual machine, a secondary (standby) database in another virtual machine, and one-way replication set up between them. The result is read access to the copy of the database. With Oracle GoldenGate, you can configure bi-directional replication between the two databases. To learn how to set up a high-availability solution for your databases using these tools, see [Active Data Guard](https://www.oracle.com/database/technologies/high-availability/dataguard.html) and [GoldenGate](https://docs.oracle.com/goldengate/1212/gg-winux/https://docsupdatetracker.net/index.html) documentation at the Oracle website. If you need read-write access to the copy of the database, you can use [Oracle Active Data Guard](https://www.oracle.com/uk/products/database/options/active-data-guard/overview/https://docsupdatetracker.net/index.html).
+With Oracle Data Guard, you can achieve high availability with a primary database in one VM, a secondary (standby) database in another VM, and one-way replication set up between them. The result is read access to the copy of the database. With Oracle GoldenGate, you can configure bi-directional replication between the two databases. To learn how to set up a high-availability solution for your databases using these tools, see [Active Data Guard](https://www.oracle.com/database/technologies/high-availability/dataguard.html) and [GoldenGate](https://docs.oracle.com/goldengate/1212/gg-winux/https://docsupdatetracker.net/index.html). If you need read-write access to the copy of the database, you can use [Oracle Active Data Guard](https://www.oracle.com/uk/products/database/options/active-data-guard/overview/https://docsupdatetracker.net/index.html).
-The tutorial [Implement Oracle GoldenGate on Azure](configure-oracle-golden-gate.md) walks you through the basic setup procedure on Azure.
+To walk through the basic setup procedure on Azure, see [Implement Oracle Golden Gate on an Azure Linux VM](configure-oracle-golden-gate.md).
-In addition to having an HA and DR solution architected in Azure, you should have a backup strategy in place to restore your database. The tutorial [Backup and recover an Oracle Database](./oracle-overview.md) walks you through the basic procedure for establishing a consistent backup.
+In addition to having a high availability and disaster recovery solution architected in Azure, you should have a backup strategy in place to restore your database. To walk through the basic procedure for establishing a consistent backup, see [Overview of Oracle Applications and solutions on Azure](./oracle-overview.md).
## Support for JD Edwards
-According to Oracle Support note [Doc ID 2178595.1](https://support.oracle.com/knowledge/JD%20Edwards%20EnterpriseOne/2178595_1.html), JD Edwards EnterpriseOne versions 9.2 and above are supported on **any public cloud offering** that meets their specific `Minimum Technical Requirements` (MTR). You need to create custom images that meet their MTR specifications for OS and software application compatibility.
+According to Oracle Support, JD Edwards EnterpriseOne versions 9.2 and above are supported on *any public cloud offering* that meets their specific Minimum Technical Requirements (MTR). You need to create custom images that meet their MTR specifications for operating system and software application compatibility. For more information, see [Doc ID 2178595.1](https://support.oracle.com/knowledge/JD%20Edwards%20EnterpriseOne/2178595_1.html).
-## Oracle WebLogic Server virtual machine offers
+## Oracle WebLogic Server VM offers
-Oracle and Microsoft are collaborating to bring WebLogic Server to the Azure Marketplace in the form of a collection of Azure Application offers. These offers are described in the article [Oracle WebLogic Server Azure Applications](oracle-weblogic.md).
+Oracle and Microsoft are collaborating to bring WebLogic Server to the Azure Marketplace in the form of Azure Application offers. For more information about these offers, see [What are solutions for running Oracle WebLogic Server](oracle-weblogic.md).
-### Oracle WebLogic Server virtual machine images
+### Oracle WebLogic Server VM images
-- **Clustering is supported on Enterprise Edition only.** You are licensed to use WebLogic clustering only when using the Enterprise Edition of Oracle WebLogic Server. Do not use clustering with Oracle WebLogic Server Standard Edition.
+- **Clustering is supported on Enterprise Edition only.** You're licensed to use WebLogic clustering only when you use the Enterprise Edition of Oracle WebLogic Server. Don't use clustering with Oracle WebLogic Server Standard Edition.
- **UDP multicast is not supported.** Azure supports UDP unicasting, but not multicasting or broadcasting. Oracle WebLogic Server is able to rely on Azure UDP unicast capabilities. For best results relying on UDP unicast, we recommend that the WebLogic cluster size is kept static, or kept with no more than 10 managed servers.-- **Oracle WebLogic Server expects public and private ports to be the same for T3 access (for example, when using Enterprise JavaBeans).** Consider a multi-tier scenario where a service layer (EJB) application is running on an Oracle WebLogic Server cluster consisting of two or more VMs, in a virtual network named *SLWLS*. The client tier is in a different subnet in the same virtual network, running a simple Java program trying to call EJB in the service layer. Because it is necessary to load balance the service layer, a public load-balanced endpoint needs to be created for the virtual machines in the Oracle WebLogic Server cluster. If the private port that you specify is different from the public port (for example, 7006:7008), an error such as the following occurs:
+- **Oracle WebLogic Server expects public and private ports to be the same for T3 access, for example, when using Enterprise JavaBeans.** Consider a multi-tier scenario where a service layer (EJB) application is running on an Oracle WebLogic Server cluster consisting of two or more VMs, in a virtual network named *SLWLS*. The client tier is in a different subnet in the same virtual network, running a simple Java program trying to call EJB in the service layer. Because you must load balance the service layer, a public load-balanced endpoint needs to be created for the VMs in the Oracle WebLogic Server cluster. If the private port that you specify is different from the public port, for example, 7006:7008, an error such as the following occurs:
-```output
- [java] javax.naming.CommunicationException [Root exception is java.net.ConnectException: t3://example.cloudapp.net:7006:
+ ```output
+ [java] javax.naming.CommunicationException [Root exception is java.net.ConnectException: t3://example.cloudapp.net:7006:
- Bootstrap to: example.cloudapp.net/138.91.142.178:7006' over: 't3' got an error or timed out]
-```
+ Bootstrap to: example.cloudapp.net/138.91.142.178:7006' over: 't3' got an error or timed out]
+ ```
- This is because for any remote T3 access, Oracle WebLogic Server expects the load balancer port and the WebLogic managed server port to be the same. In the preceding case, the client is accessing port 7006 (the load balancer port) and the managed server is listening on 7008 (the private port). This restriction is applicable only for T3 access, not HTTP.
+ This error occurs because for any remote T3 access, Oracle WebLogic Server expects the load balancer port and the WebLogic managed server port to be the same. In the preceding case, the client is accessing port 7006, which is the load balancer port, and the managed server is listening on 7008, which is the private port. This restriction is applicable only for T3 access, not HTTP.
- To avoid this issue, use one of the following workarounds:
+ To avoid this issue, use one of the following workarounds:
-- Use the same private and public port numbers for load balanced endpoints dedicated to T3 access.-- Include the following JVM parameter when starting Oracle WebLogic Server:
+ - Use the same private and public port numbers for load balanced endpoints dedicated to T3 access.
+ - Include the following JVM parameter when starting Oracle WebLogic Server:
-```config
- -Dweblogic.rjvm.enableprotocolswitch=true
-```
+ ```config
+ -Dweblogic.rjvm.enableprotocolswitch=true
+ ```
+
+- **Dynamic clustering and load balancing limitations.** Suppose you want to use a dynamic cluster in Oracle WebLogic Server and expose it through a single, public load-balanced endpoint in Azure. This approach can be done as long as you use a fixed port number for each of the managed servers, not dynamically assigned from a range, and don't start more managed servers than there are machines the administrator is tracking. That is, there's no more than one managed server per VM.
+
+ If your configuration results in more Oracle WebLogic Server instances being started than there are VMs, only one of those instances can bind to a given port number on each VM; the other instances on that VM fail.
-For related information, see KB article **860340.1** at [support.oracle.com](https://support.oracle.com).
+ If you configure the admin server to automatically assign unique port numbers to its managed servers, then load balancing isn't possible because Azure doesn't support mapping from a single public port to multiple private ports, as would be required for this configuration.
-- **Dynamic clustering and load balancing limitations.** Suppose you want to use a dynamic cluster in Oracle WebLogic Server and expose it through a single, public load-balanced endpoint in Azure. This can be done as long as you use a fixed port number for each of the managed servers (not dynamically assigned from a range) and do not start more managed servers than there are machines the administrator is tracking. That is, there is no more than one managed server per virtual machine). If your configuration results in more Oracle WebLogic Servers being started than there are virtual machines (that is, where multiple Oracle WebLogic Server instances share the same virtual machine), then it is not possible for more than one of those instances of Oracle WebLogic Servers to bind to a given port number. The others on that virtual machine fail.
+- **Multiple instances of Oracle WebLogic Server on a VM.** Depending on your deployment requirements, you might consider running multiple instances of Oracle WebLogic Server on the same VM, if the VM is large enough. For example, on a midsize VM, which contains two cores, you could choose to run two instances of Oracle WebLogic Server. However, we still recommend that you avoid introducing single points of failure into your architecture. Running multiple instances of Oracle WebLogic Server on just one VM would be such a single point of failure.
- If you configure the admin server to automatically assign unique port numbers to its managed servers, then load balancing is not possible because Azure does not support mapping from a single public port to multiple private ports, as would be required for this configuration.
-- **Multiple instances of Oracle WebLogic Server on a virtual machine.** Depending on your deploymentΓÇÖs requirements, you might consider running multiple instances of Oracle WebLogic Server on the same virtual machine, if the virtual machine is large enough. For example, on a midsize virtual machine, which contains two cores, you could choose to run two instances of Oracle WebLogic Server. However, we still recommend that you avoid introducing single points of failure into your architecture, which would be the case if you used just one virtual machine that is running multiple instances of Oracle WebLogic Server. Using at least two virtual machines could be a better approach, and each virtual machine could then run multiple instances of Oracle WebLogic Server. Each instance of Oracle WebLogic Server could still be part of the same cluster. However, it is currently not possible to use Azure to load-balance endpoints that are exposed by such Oracle WebLogic Server deployments within the same virtual machine, because Azure load balancer requires the load-balanced servers to be distributed among unique virtual machines.
+ Using at least two VMs could be a better approach. Each VM can run multiple instances of Oracle WebLogic Server. Each instance of Oracle WebLogic Server could still be part of the same cluster. However, it's currently not possible to use Azure to load-balance endpoints that are exposed by such Oracle WebLogic Server deployments within the same VM. Azure Load Balancer requires the load-balanced servers to be distributed among unique VMs.
## Next steps
-You now have an overview of current Oracle solutions based on virtual machine images in Microsoft Azure. Your next step is to deploy your first Oracle database on Azure.
+You now have an overview of current Oracle solutions based on VM images in Microsoft Azure. Your next step is to deploy your first Oracle database on Azure.
> [!div class="nextstepaction"] > [Create an Oracle database on Azure](oracle-database-quick-create.md)
virtual-machines Byos https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/redhat/byos.md
The following instructions walk you through the initial deployment process for a
``` 1. Accept the image terms.-
+
+ Option 1
```azurecli az vm image terms accept --publisher redhat --offer rhel-byos --plan <SKU value here> -o=jsonc-
- # Example:
- az vm image terms accept --publisher redhat --offer rhel-byos --plan rhel-lvm75 -o=jsonc
-
- OR
-
- az vm image terms accept --urn redhat:rhel-byos:rhel-lvm8:8.0.20190620
+ ```
+ Example
+ ```azurecli
+ az vm image terms accept --publisher redhat --offer rhel-byos --plan rhel-lvm87 -o=jsonc
+ ```
+ Option 2
+ ```azurecli
+ az vm image terms accept --urn <image URN here>
+ ```
+ Example
+ ```azurecli
+ az vm image terms accept --urn RedHat:rhel-byos:rhel-lvm87:8.7.2023021503
``` >[!NOTE]
The following instructions walk you through the initial deployment process for a
```azurecli az vm create -n <VM name> -g <resource group name> --image <image urn> --validate-
- # Example:
+ ```
+ Example:
+ ```azurecli
az vm create -n rhel-byos-vm -g rhel-byos-group --image redhat:rhel-byos:rhel-lvm8:latest --validate ```
The following instructions walk you through the initial deployment process for a
```azurecli az vm create -n <VM name> -g <resource group name> --image <image urn>-
- # Example:
+ ```
+ Example:
+ ```azurecli
az vm create -n rhel-byos-vm -g rhel-byos-group --image redhat:rhel-byos:rhel-lvm8:latest ```
The following script is an example. Replace the resource group, location, VM nam
# Define user name and blank password $securePassword = ConvertTo-SecureString 'TestPassword1!' -AsPlainText -Force $cred = New-Object System.Management.Automation.PSCredential("azureuser",$securePassword)
- Get-AzureRmMarketplaceTerms -Publisher redhat -Product rhel-byos -Name rhel-lvm75 | SetAzureRmMarketplaceTerms -Accept
+ Get-AzureRmMarketplaceTerms -Publisher redhat -Product rhel-byos -Name rhel-lvm87 | SetAzureRmMarketplaceTerms -Accept
# Create a resource group New-AzureRmResourceGroup -Name $resourceGroup -Location $location
The following script is an example. Replace the resource group, location, VM nam
# Create a virtual machine configuration $vmConfig = New-AzureRmVMConfig -VMName $vmName -VMSize Standard_D3_v2 | Set-AzureRmVMOperatingSystem -Linux -ComputerName $vmName -Credential $cred |
- Set-AzureRmVMSourceImage -PublisherName redhat -Offer rhel-byos -Skus rhel-lvm75 -Version latest | Add- AzureRmVMNetworkInterface -Id $nic.Id
- Set-AzureRmVMPlan -VM $vmConfig -Publisher redhat -Product rhel-byos -Name "rhel-lvm75"
+ Set-AzureRmVMSourceImage -PublisherName redhat -Offer rhel-byos -Skus rhel-lvm87 -Version latest | Add- AzureRmVMNetworkInterface -Id $nic.Id
+ Set-AzureRmVMPlan -VM $vmConfig -Publisher redhat -Product rhel-byos -Name "rhel-lvm87"
# Configure SSH Keys $sshPublicKey = Get-Content "$env:USERPROFILE\.ssh\id_rsa.pub"
For steps to apply Azure Disk Encryption, see [Azure Disk Encryption scenarios o
- If you attempt to provision a VM on a subscription that isn't enabled for this offer, you get the following message: ```
- "Offer with PublisherId: redhat, OfferId: rhel-byos, PlanId: rhel-lvm75 is private and can not be purchased by subscriptionId: GUID"
+ "Offer with PublisherId: redhat, OfferId: rhel-byos, PlanId: rhel-lvm87 is private and can not be purchased by subscriptionId: GUID"
``` In this case, contact Microsoft or Red Hat to enable your subscription.
For steps to apply Azure Disk Encryption, see [Azure Disk Encryption scenarios o
az vm create --image \ "/subscriptions/GUID/resourceGroups/GroupName/providers/Microsoft.Compute/galleries/GalleryName/images/ImageName/versions/1.0.0" \ -g AnotherGroupName --location EastUS2 -n VMName \
- --plan-publisher redhat --plan-product rhel-byos --plan-name rhel-lvm75
+ --plan-publisher redhat --plan-product rhel-byos --plan-name rhel-lvm87
``` Note the plan parameters in the final line.
virtual-network-manager Create Virtual Network Manager Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network-manager/create-virtual-network-manager-powershell.md
Previously updated : 03/15/2023- Last updated : 04/12/2023+ # Quickstart: Create a mesh network with Azure Virtual Network Manager using Azure PowerShell
In this quickstart, you deploy three virtual networks and use Azure Virtual Netw
## Prerequisites * An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-* Run `Connect-AzAccount` to create a local connection with Azure.
> [!IMPORTANT]
-> Perform this quickstart using Powershell locally, not through Azure Cloud Shell. The version of `Az.Network` in Azure Cloud Shell does not currently support the Azure Virtual Network Manager cmdlets.
+> Perform this quickstart using PowerShell locally, not through Azure Cloud Shell. The version of `Az.Network` in Azure Cloud Shell does not currently support the Azure Virtual Network Manager cmdlets.
+## Sign in to your Azure account and select your subscription
+
+To begin your configuration, sign in to your Azure account. Use the following examples to help you connect:
+
+Sign in to Azure
+
+```azurepowershell
+Connect-AzAccount
+```
+
+Connect to your subscription
+
+```azurepowershell
+Set-AzContext -Subscription <subscription name or id>
+```
## Install Azure PowerShell module Install the latest *Az.Network* Azure PowerShell module using this command:
-```azurepowershell-interactive
+```azurepowershell
Install-Module -Name Az.Network -RequiredVersion 5.3.0 ```- ## Create a resource group
-Before you can create an Azure Virtual Network Manager, you have to create a resource group to host the Network Manager. Create a resource group with [New-AzResourceGroup](/powershell/module/az.Resources/New-azResourceGroup). This example creates a resource group named **myAVNMResourceGroup** in the **WestUS** location.
+Before you can create an Azure Virtual Network Manager, you have to create a resource group to host the Network Manager. Create a resource group with [New-AzResourceGroup](/powershell/module/az.Resources/New-azResourceGroup). This example creates a resource group named **rg-learn-eastus-001** in the **East US** location.
-```azurepowershell-interactive
+```azurepowershell
-$location = "West US"
+$location = "East US"
$rg = @{
- Name = 'myAVNMResourceGroup'
+ Name = 'rg-learn-eastus-001'
Location = $location } New-AzResourceGroup @rg ```
-## Create Virtual Network Manager
+## Define the scope and access type
-1. Define the scope and access type this Azure Virtual Network Manager instance have. You can choose to create the scope with subscriptions group or management group or a combination of both. Create the scope by using New-AzNetworkManagerScope.
+Define the scope and access type for the Azure Virtual Network Manager instance with [New-AzNetworkManagerScope](/powershell/module/az.network/new-aznetworkmanagerscope). This example defines a scope with a single subscription and sets the access type to **Connectivity**. Replace **<subscription_id>** with the ID of the subscription you want to manage with Azure Virtual Network Manager.
- ```azurepowershell-interactive
-
- Import-Module -Name Az.Network -RequiredVersion "4.15.1"
-
- [System.Collections.Generic.List[string]]$subGroup = @()
- $subGroup.Add("/subscriptions/abcdef12-3456-7890-abcd-ef1234567890")
- [System.Collections.Generic.List[string]]$mgGroup = @()
- $mgGroup.Add("/providers/Microsoft.Management/managementGroups/abcdef12-3456-7890-abcd-ef1234567890")
-
- [System.Collections.Generic.List[String]]$access = @()
- $access.Add("Connectivity");
- $access.Add("SecurityAdmin");
-
- $scope = New-AzNetworkManagerScope -Subscription $subGroup -ManagementGroup $mgGroup
-
- ```
+```azurepowershell
+
+Import-Module -Name Az.Network -RequiredVersion "5.3.0"
+
+[System.Collections.Generic.List[string]]$subGroup = @()
+$subGroup.Add("/subscriptions/<subscription_id>")
+
+[System.Collections.Generic.List[String]]$access = @()
+$access.Add("Connectivity");
+
+$scope = New-AzNetworkManagerScope -Subscription $subGroup
+
+```
+## Create Virtual Network Manager
-1. Create the Virtual Network Manager with New-AzNetworkManager. This example creates an Azure Virtual Network Manager named **myAVNM** in the West US location.
+Create the Virtual Network Manager with [New-AzNetworkManager](/powershell/module/az.network/new-aznetworkmanager). This example creates an Azure Virtual Network Manager named **vnm-learn-eastus-001** in the **East US** location.
- ```azurepowershell-interactive
- $avnm = @{
- Name = 'myAVNM'
- ResourceGroupName = $rg.Name
- NetworkManagerScope = $scope
- NetworkManagerScopeAccess = $access
- Location = $location
- }
- $networkmanager = New-AzNetworkManager @avnm
- ```
+```azurepowershell
+$avnm = @{
+ Name = 'vnm-learn-eastus-001'
+ ResourceGroupName = $rg.Name
+ NetworkManagerScope = $scope
+ NetworkManagerScopeAccess = $access
+ Location = $location
+}
+$networkmanager = New-AzNetworkManager @avnm
+```
## Create three virtual networks
-Create three virtual networks with [New-AzVirtualNetwork](/powershell/module/az.network/new-azvirtualnetwork). This example creates virtual networks named **VNetA**, **VNetB** and **VNetC** in the **West US** location. If you already have virtual networks you want create a mesh network with, you can skip to the next section.
+Create three virtual networks with [New-AzVirtualNetwork](/powershell/module/az.network/new-azvirtualnetwork). This example creates virtual networks named **vnet-learn-prod-eastus-001**, **vnet-learn-prod-eastus-002**, and **vnet-learn-test-eastus-003** in the **East US** location. If you already have virtual networks you want to create a mesh network with, you can skip to the next section.
-```azurepowershell-interactive
-$vnetA = @{
- Name = 'VNetA'
- ResourceGroupName = 'myAVNMResourceGroup'
+```azurepowershell
+$vnet001 = @{
+ Name = 'vnet-learn-prod-eastus-001'
+ ResourceGroupName = $rg.Name
Location = $location AddressPrefix = '10.0.0.0/16' }
-$virtualNetworkA = New-AzVirtualNetwork @vnetA
+$vnet_learn_prod_eastus_001 = New-AzVirtualNetwork @vnet001
-$vnetB = @{
- Name = 'VNetB'
- ResourceGroupName = 'myAVNMResourceGroup'
+$vnet002 = @{
+ Name = 'vnet-learn-prod-eastus-002'
+ ResourceGroupName = $rg.Name
Location = $location AddressPrefix = '10.1.0.0/16' }
-$virtualNetworkB = New-AzVirtualNetwork @vnetB
+$vnet_learn_prod_eastus_002 = New-AzVirtualNetwork @vnet002
-$vnetC = @{
- Name = 'VNetC'
- ResourceGroupName = 'myAVNMResourceGroup'
+$vnet003 = @{
+ Name = 'vnet-learn-test-eastus-003'
+ ResourceGroupName = $rg.Name
Location = $location AddressPrefix = '10.2.0.0/16' }
-$virtualNetworkC = New-AzVirtualNetwork @vnetC
+$vnet_learn_test_eastus_003 = New-AzVirtualNetwork @vnet003
``` ### Add a subnet to each virtual network
-To complete the configuration of the virtual networks, add a /24 subnet to each one. Create a subnet configuration named **default** with [Add-AzVirtualNetworkSubnetConfig](/powershell/module/az.network/add-azvirtualnetworksubnetconfig).
+To complete the configuration of the virtual networks, create a subnet configuration named **default** with [Add-AzVirtualNetworkSubnetConfig](/powershell/module/az.network/add-azvirtualnetworksubnetconfig) with a subnet address prefix of **/24**. Then, use [Set-AzVirtualNetwork](/powershell/module/az.network/set-azvirtualnetwork) to apply the subnet configuration to the virtual network.
-```azurepowershell-interactive
-$subnetA = @{
+```azurepowershell
+$subnet_vnet001 = @{
Name = 'default'
- VirtualNetwork = $virtualNetworkA
+ VirtualNetwork = $vnet_learn_prod_eastus_001
AddressPrefix = '10.0.0.0/24' }
-$subnetConfigA = Add-AzVirtualNetworkSubnetConfig @subnetA
-$virtualnetworkA | Set-AzVirtualNetwork
+$subnetConfig_vnet001 = Add-AzVirtualNetworkSubnetConfig @subnet_vnet001
+$vnet_learn_prod_eastus_001 | Set-AzVirtualNetwork
-$subnetB = @{
+$subnet_vnet002 = @{
Name = 'default'
- VirtualNetwork = $virtualNetworkB
+ VirtualNetwork = $vnet_learn_prod_eastus_002
AddressPrefix = '10.1.0.0/24' }
-$subnetConfigC = Add-AzVirtualNetworkSubnetConfig @subnetB
-$virtualnetworkB | Set-AzVirtualNetwork
+$subnetConfig_vnet002 = Add-AzVirtualNetworkSubnetConfig @subnet_vnet002
+$vnet_learn_prod_eastus_002 | Set-AzVirtualNetwork
-$subnetC = @{
+$subnet_vnet003 = @{
Name = 'default'
- VirtualNetwork = $virtualNetworkC
+ VirtualNetwork = $vnet_learn_test_eastus_003
AddressPrefix = '10.2.0.0/24' }
-$subnetConfigC = Add-AzVirtualNetworkSubnetConfig @subnetC
-$virtualnetworkC | Set-AzVirtualNetwork
+$subnetConfig_vnet003 = Add-AzVirtualNetworkSubnetConfig @subnet_vnet003
+$vnet_learn_test_eastus_003 | Set-AzVirtualNetwork
``` ## Create a network group
+Virtual Network Manager applies configurations to groups of VNets by placing them in network groups. Create a network group with [New-AzNetworkManagerGroup](/powershell/module/az.network/new-aznetworkmanagergroup). This example creates a network group named **ng-learn-prod-eastus-001**.
+
+```azurepowershell
+$ng = @{
+ Name = 'ng-learn-prod-eastus-001'
+ ResourceGroupName = $rg.Name
+ NetworkManagerName = $networkManager.Name
+ }
+ $ng = New-AzNetworkManagerGroup @ng
+```
-1. Create a network group to add virtual networks to.
+## Define membership for a mesh configuration
- ```azurepowershell-interactive
- $ng = @{
- Name = 'myNetworkGroup'
- ResourceGroupName = $rg.Name
- NetworkManagerName = $networkManager.Name
- }
- $networkgroup = New-AzNetworkManagerGroup @ng
+Once your network group is created, you define a network group's membership by adding virtual networks. Choose one of the options to define network group membership:
+
+- Add membership manually
+- Create a policy for dynamic membership
+# [Manual membership](#tab/manualmembership)
+
+### Add membership manually
+
+In this task, you add the static members **vnet-learn-prod-eastus-001** and **vnet-learn-prod-eastus-002** to the network group **ng-learn-prod-eastus-001** using [New-AzNetworkManagerStaticMember](/powershell/module/az.network/new-aznetworkmanagerstaticmember).
+
+> [!NOTE]
+> Static members must have a name that's unique within the network group. We recommend using a consistent hash of the virtual network ID. The following function is one approach, based on the ARM template uniqueString() implementation.
+
+```azurepowershell
+ function Get-UniqueString ([string]$id, $length=13)
+ {
+ $hashArray = (new-object System.Security.Cryptography.SHA512Managed).ComputeHash($id.ToCharArray())
+ -join ($hashArray[1..$length] | ForEach-Object { [char]($_ % 26 + [byte][char]'a') })
+ }
+```
+
+```azurepowershell
+$sm_vnet001 = @{
+ Name = Get-UniqueString $vnet_learn_prod_eastus_001.Id
+ ResourceGroupName = $rg.Name
+ NetworkGroupName = $ng.Name
+ NetworkManagerName = $networkManager.Name
+ ResourceId = $vnet_learn_prod_eastus_001.Id
+ }
+ $sm_vnet001 = New-AzNetworkManagerStaticMember @sm_vnet001
+```
+
+```azurepowershell
+$sm_vnet002 = @{
+ Name = Get-UniqueString $vnet_learn_prod_eastus_002.Id
+ ResourceGroupName = $rg.Name
+ NetworkGroupName = $ng.Name
+ NetworkManagerName = $networkManager.Name
+ ResourceId = $vnet_learn_prod_eastus_002.Id
+ }
+ $sm_vnet002 = New-AzNetworkManagerStaticMember @sm_vnet002
+```
+
+# [Azure Policy](#tab/azurepolicy)
+
+### Create a policy for dynamic membership
+
+Using [Azure Policy](concept-azure-policy-integration.md), you define a condition that dynamically adds the two virtual networks whose names contain **prod** to your network group.
+
+> [!NOTE]
+> It is recommended to scope all of your conditionals to only scan for type `Microsoft.Network/virtualNetworks` for efficiency.
+
+1. Define the conditional statement and store it in a variable.
+
+ ```azurepowershell
+ $conditionalMembership = '{
+ "if": {
+ "allOf": [
+ {
+ "field": "type",
+ "equals": "Microsoft.Network/virtualNetworks"
+ },
+ {
+ "field": "name",
+ "contains": "prod"
+ }
+ ]
+ },
+ "then": {
+ "effect": "addToNetworkGroup",
+ "details": {
+ "networkGroupId": "/subscriptions/<subscription_id>/resourceGroups/rg-learn-eastus-001/providers/Microsoft.Network/networkManagers/vnm-learn-eastus-001/networkGroups/ng-learn-prod-eastus-001"}
+ },
+ }'
+
```
-### Option 1: Static membership
-
-1. Add the static member to the network group with the following commands:
- 1. Static members must have a network group scoped unique name. It's recommended to use a consistent hash of the virtual network ID. This is an approach using the ARM Templates uniqueString() implementation.
-
- ```azurepowershell-interactive
- function Get-UniqueString ([string]$id, $length=13)
+1. Create the Azure Policy definition with [New-AzPolicyDefinition](/powershell/module/az.resources/new-azpolicydefinition), using the conditional statement defined in the previous step. In this example, the policy definition name is prefixed with **poldef-learn-prod-** and suffixed with a unique string generated from a consistent hash of the network group ID. Policy resources must have a name that's unique within their scope.
+
+ ```azurepowershell
+ function Get-UniqueString ([string]$id, $length=13)
{ $hashArray = (new-object System.Security.Cryptography.SHA512Managed).ComputeHash($id.ToCharArray()) -join ($hashArray[1..$length] | ForEach-Object { [char]($_ % 26 + [byte][char]'a') }) }
- ```
-
- ```azurepowershell-interactive
- $smA = @{
- Name = Get-UniqueString $virtualNetworkA.Id
- ResourceGroupName = $rg.Name
- NetworkGroupName = $networkGroup.Name
- NetworkManagerName = $networkManager.Name
- ResourceId = $virtualNetworkA.Id
- }
- $statimemberA = New-AzNetworkManagerStaticMember @sm
- ```
-
- ```azurepowershell-interactive
- $smB = @{
- Name = Get-UniqueString $virtualNetworkB.Id
- ResourceGroupName = $rg.Name
- NetworkGroupName = $networkGroup.Name
- NetworkManagerName = $networkManager.Name
- ResourceId = $virtualNetworkB.Id
- }
- $statimemberB = New-AzNetworkManagerStaticMember @sm
- ```
- ```azurepowershell-interactive
- $smC = @{
- Name = Get-UniqueString $virtualNetworkC.Id
- ResourceGroupName = $rg.Name
- NetworkGroupName = $networkGroup.Name
- NetworkManagerName = $networkManager.Name
- ResourceId = $virtualNetworkC.Id
- }
- $statimemberC = New-AzNetworkManagerStaticMember @sm
- ```
+ $UniqueString = Get-UniqueString $ng.Id
+ ```
-### Option 2: Dynamic membership
-
-1. Define the conditional statement and store it in a variable.
-> [!NOTE]
-> It is recommended to scope all of your conditionals to only scan for type `Microsoft.Network/virtualNetwork` for efficiency.
-
- ```azurepowershell-interactive
- $conditionalMembership = '{
- "allof":[
- {
- "field": "type",
- "equals": "Microsoft.Network/virtualNetwork"
- }
- {
- "field": "name",
- "contains": "VNet"
- }
- ]
- }'
-```
-
-1. Create the Azure Policy definition using the conditional statement defined in the last step using New-AzPolicyDefinition.
-
-> [!IMPORTANT]
-> Policy resources must have a scope unique name. It is recommended to use a consistent hash of the network group. This is an approach using the ARM Templates uniqueString() implementation.
-
- ```azurepowershell-interactive
- function Get-UniqueString ([string]$id, $length=13)
- {
- $hashArray = (new-object System.Security.Cryptography.SHA512Managed).ComputeHash($id.ToCharArray())
- -join ($hashArray[1..$length] | ForEach-Object { [char]($_ % 26 + [byte][char]'a') })
- }
- ```
-
- ```azurepowershell-interactive
- $defn = @{
- Name = Get-UniqueString $networkgroup.Id
- Mode = 'Microsoft.Network.Data'
- Policy = $conditionalMembership
- }
-
- $policyDefinition = New-AzPolicyDefinition @defn
- ```
+ ```azurepowershell
+ $polDef = @{
+ Name = "poldef-learn-prod-"+$UniqueString
+ Mode = 'Microsoft.Network.Data'
+ Policy = $conditionalMembership
+ }
+
+ $policyDefinition = New-AzPolicyDefinition @polDef
+ ```
1. Assign the policy definition at a scope within your network manager's scope for it to begin taking effect.
- ```azurepowershell-interactive
- $assgn = @{
- Name = Get-UniqueString $networkgroup.Id
+ ```azurepowershell
+ $polAssign = @{
+ Name = "polassign-learn-prod-"+$UniqueString
PolicyDefinition = $policyDefinition }
- $policyAssignment = New-AzPolicyAssignment @assgn
+ $policyAssignment = New-AzPolicyAssignment @polAssign
```
-
-## Create a configuration
+
+## Create a connectivity configuration
+In this task, you create a connectivity configuration with the network group **ng-learn-prod-eastus-001** using [New-AzNetworkManagerConnectivityConfiguration](/powershell/module/az.network/new-aznetworkmanagerconnectivityconfiguration) and [New-AzNetworkManagerConnectivityGroupItem](/powershell/module/az.network/new-aznetworkmanagerconnectivitygroupitem).
-1. Create a connectivity group item to add a network group to with New-AzNetworkManagerConnectivityGroupItem.
- ```azurepowershell-interactive
+1. Create a connectivity group item.
+
+ ```azurepowershell
$gi = @{
- NetworkGroupId = $networkgroup.Id
+ NetworkGroupId = $ng.Id
} $groupItem = New-AzNetworkManagerConnectivityGroupItem @gi ```
-1. Create a configuration group and add the group item from the previous step.
+1. Create a configuration group and add the connectivity group item to it.
- ```azurepowershell-interactive
- [System.Collections.Generic.List[Microsoft.Azure.Commands.Network.Models.PSNetworkManagerConnectivityGroupItem]]$configGroup = @()
+ ```azurepowershell
+ [System.Collections.Generic.List[Microsoft.Azure.Commands.Network.Models.NetworkManager.PSNetworkManagerConnectivityGroupItem]]$configGroup = @()
$configGroup.Add($groupItem) ```
-1. Create the connectivity configuration with New-AzNetworkManagerConnectivityConfiguration.
+1. Create the connectivity configuration with the configuration group.
- ```azurepowershell-interactive
+ ```azurepowershell
$config = @{
- Name = 'connectivityconfig'
+ Name = 'cc-learn-prod-eastus-001'
ResourceGroupName = $rg.Name NetworkManagerName = $networkManager.Name ConnectivityTopology = 'Mesh'
$virtualnetworkC | Set-AzVirtualNetwork
$connectivityconfig = New-AzNetworkManagerConnectivityConfiguration @config ```
-## Commit deployment
+### Commit deployment
Commit the configuration to the target regions with Deploy-AzNetworkManagerCommit. This triggers your configuration to begin taking effect.
-```azurepowershell-interactive
+```azurepowershell
[System.Collections.Generic.List[string]]$configIds = @() $configIds.add($connectivityconfig.id) [System.Collections.Generic.List[string]]$target = @()
If you no longer need the Azure Virtual Network Manager, you need to make sure a
1. Remove the connectivity deployment by deploying an empty configuration with Deploy-AzNetworkManagerCommit.
- ```azurepowershell-interactive
+ ```azurepowershell
[System.Collections.Generic.List[string]]$configIds = @() [System.Collections.Generic.List[string]]$target = @() $target.Add("eastus") $removedeployment = @{
- Name = 'myAVNM'
- ResourceGroupName = 'myAVNMResourceGroup'
+ Name = 'vnm-learn-eastus-001'
+ ResourceGroupName = $rg.Name
ConfigurationId = $configIds Target = $target CommitType = 'Connectivity'
If you no longer need the Azure Virtual Network Manager, you need to make sure a
1. Remove the connectivity configuration with Remove-AzNetworkManagerConnectivityConfiguration
- ```azurepowershell-interactive
+ ```azurepowershell
- Remove-AzNetworkManagerConnectivityConfiguration @connectivityconfig.Id
+ Remove-AzNetworkManagerConnectivityConfiguration -Name $connectivityconfig.Name -ResourceGroupName $rg.Name -NetworkManagerName $networkManager.Name
``` 2. Remove the policy resources with Remove-AzPolicy*
- ```azurepowershell-interactive
+ ```azurepowershell
- Remove-AzPolicyAssignment $policyAssignment.Id
- Remove-AzPolicyAssignment $policyDefinition.Id
+ Remove-AzPolicyAssignment -Name $policyAssignment.Name
+ Remove-AzPolicyDefinition -Name $policyDefinition.Name
``` 3. Remove the network group with Remove-AzNetworkManagerGroup.
- ```azurepowershell-interactive
- Remove-AzNetworkManagerGroup $networkGroup.Id
+ ```azurepowershell
+ Remove-AzNetworkManagerGroup -Name $ng.Name -ResourceGroupName $rg.Name -NetworkManagerName $networkManager.Name
``` 4. Delete the network manager instance with Remove-AzNetworkManager.
- ```azurepowershell-interactive
- Remove-AzNetworkManager $networkManager.Id
+ ```azurepowershell
+ Remove-AzNetworkManager -name $networkManager.Name -ResourceGroupName $rg.Name
``` 5. If you no longer need the resource created, delete the resource group with [Remove-AzResourceGroup](/powershell/module/az.resources/remove-azresourcegroup).
- ```azurepowershell-interactive
- Remove-AzResourceGroup -Name 'myAVNMResourceGroup'
+ ```azurepowershell
+ Remove-AzResourceGroup -Name $rg.Name -Force
``` ## Next steps
virtual-network-manager How To Block High Risk Ports https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network-manager/how-to-block-high-risk-ports.md
While this article focuses on a single port, SSH, you can protect any high-risk
> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). ## Prerequisites
-* You understand how to an [Azure Virtual Network Manager](./create-virtual-network-manager-portal.md)
+* You understand how to create an [Azure Virtual Network Manager](./create-virtual-network-manager-portal.md)
* You understand each element in a [Security admin rule](concept-security-admins.md). * An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). * A group of virtual networks that can be split into network groups for applying granular security admin rules.
virtual-network Public Ip Addresses https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/public-ip-addresses.md
# Public IP addresses
+>[!Important]
+>On September 30, 2025, Basic SKU public IPs will be retired. For more information, see the [official announcement](https://azure.microsoft.com/updates/upgrade-to-standard-sku-public-ip-addresses-in-azure-by-30-september-2025-basic-sku-will-be-retired/). If you are currently using Basic SKU public IPs, make sure to upgrade to Standard SKU public IPs prior to the retirement date. For guidance on upgrading, visit [Upgrading a basic public IP address to Standard SKU - Guidance](public-ip-basic-upgrade-guidance.md).
+ Public IP addresses allow Internet resources to communicate inbound to Azure resources. Public IP addresses enable Azure resources to communicate to Internet and public-facing Azure services. The address is dedicated to the resource, until it's unassigned by you. A resource without a public IP assigned can communicate outbound. Azure dynamically assigns an available IP address that isn't dedicated to the resource. For more information about outbound connections in Azure, see [Understand outbound connections](../../load-balancer/load-balancer-outbound-connections.md?toc=%2fazure%2fvirtual-network%2ftoc.json). In Azure Resource Manager, a [public IP](virtual-network-public-ip-address.md) address is a resource that has its own properties.
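As a hedged example, the following Azure CLI command creates a Standard SKU public IP address with static allocation; the resource group and resource name are placeholders.

```azurecli
az network public-ip create \
  --resource-group myResourceGroup \
  --name myStandardPublicIP \
  --sku Standard \
  --allocation-method Static
```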
virtual-network Public Ip Basic Upgrade Guidance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/public-ip-basic-upgrade-guidance.md
Last updated 09/19/2022
# Upgrading a basic public IP address to Standard SKU - Guidance
+>[!Important]
+>On September 30, 2025, Basic SKU public IPs will be retired. For more information, see the [official announcement](https://azure.microsoft.com/updates/upgrade-to-standard-sku-public-ip-addresses-in-azure-by-30-september-2025-basic-sku-will-be-retired/). If you are currently using Basic SKU public IPs, make sure to upgrade to Standard SKU public IPs prior to the retirement date. This article will help guide you through the upgrade process.
+ In this article, we'll discuss guidance for upgrading your Basic SKU public IPs to Standard SKU. Standard public IPs are recommended for all production instances and provide many [key differences](#basic-sku-vs-standard-sku) to your infrastructure. ## Steps to complete the upgrade
virtual-network Public Ip Upgrade Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/public-ip-upgrade-cli.md
ms.devlang: azurecli
# Upgrade a public IP address using the Azure CLI
+>[!Important]
+>On September 30, 2025, Basic SKU public IPs will be retired. For more information, see the [official announcement](https://azure.microsoft.com/updates/upgrade-to-standard-sku-public-ip-addresses-in-azure-by-30-september-2025-basic-sku-will-be-retired/). If you are currently using Basic SKU public IPs, make sure to upgrade to Standard SKU public IPs prior to the retirement date.
+ Azure public IP addresses are created with a SKU, either Basic or Standard. The SKU determines their functionality including allocation method, feature support, and resources they can be associated with. In this article, you'll learn how to upgrade a static Basic SKU public IP address to Standard SKU using the Azure CLI.
In this article, you upgraded a basic SKU public IP address to standard SKU.
For more information on public IP addresses in Azure, see: - [Public IP addresses in Azure](public-ip-addresses.md)-- [Create a public IP address using the Azure CLI](./create-public-ip-cli.md)
+- [Create a public IP address using the Azure CLI](./create-public-ip-cli.md)
virtual-network Public Ip Upgrade Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/public-ip-upgrade-portal.md
# Upgrade a public IP address using the Azure portal
+>[!Important]
+>On September 30, 2025, Basic SKU public IPs will be retired. For more information, see the [official announcement](https://azure.microsoft.com/updates/upgrade-to-standard-sku-public-ip-addresses-in-azure-by-30-september-2025-basic-sku-will-be-retired/). If you are currently using Basic SKU public IPs, make sure to upgrade to Standard SKU public IPs prior to the retirement date.
+ Azure public IP addresses are created with a SKU, either Basic or Standard. The SKU determines their functionality including allocation method, feature support, and resources they can be associated with. In this article, you'll learn how to upgrade a static Basic SKU public IP address to Standard SKU in the Azure portal.
virtual-network Public Ip Upgrade Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/public-ip-upgrade-powershell.md
# Upgrade a public IP address using Azure PowerShell
+>[!Important]
+>On September 30, 2025, Basic SKU public IPs will be retired. For more information, see the [official announcement](https://azure.microsoft.com/updates/upgrade-to-standard-sku-public-ip-addresses-in-azure-by-30-september-2025-basic-sku-will-be-retired/). If you are currently using Basic SKU public IPs, make sure to upgrade to Standard SKU public IPs prior to the retirement date.
+ Azure public IP addresses are created with a SKU, either Basic or Standard. The SKU determines their functionality including allocation method, feature support, and resources they can be associated with. In this article, you'll learn how to upgrade a static Basic SKU public IP address to Standard SKU using Azure PowerShell.
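As a rough sketch of the in-place upgrade this article describes, assuming a static Basic SKU public IP that isn't associated with any resource, and using hypothetical names:

```azurepowershell
# Minimal sketch: upgrade a static, unattached Basic SKU public IP to Standard (hypothetical names).
$publicIp = Get-AzPublicIpAddress -Name 'myBasicPublicIP' -ResourceGroupName 'myResourceGroup'
$publicIp.Sku.Name = 'Standard'
Set-AzPublicIpAddress -PublicIpAddress $publicIp
```

The address generally needs static allocation and must be disassociated from any resource before the SKU change succeeds.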
virtual-network Virtual Network Public Ip Address https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/virtual-network-public-ip-address.md
# Create, change, or delete an Azure public IP address
+>[!Important]
+>On September 30, 2025, Basic SKU public IPs will be retired. For more information, see the [official announcement](https://azure.microsoft.com/updates/upgrade-to-standard-sku-public-ip-addresses-in-azure-by-30-september-2025-basic-sku-will-be-retired/). If you are currently using Basic SKU public IPs, make sure to upgrade to Standard SKU public IPs prior to the retirement date. For guidance on upgrading, visit [Upgrading a basic public IP address to Standard SKU - Guidance](public-ip-basic-upgrade-guidance.md).
+ Learn about a public IP address and how to create, change, and delete one. A public IP address is a resource with configurable settings. When you assign a public IP address to an Azure resource, you enable the following operations:
virtual-wan Virtual Wan Global Transit Network Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/virtual-wan-global-transit-network-architecture.md
Branch-to-VNet is the primary path supported by Azure Virtual WAN. This path all
### ExpressRoute Global Reach and Virtual WAN
-ExpressRoute is a private and resilient way to connect your on-premises networks to the Microsoft Cloud. Virtual WAN supports Express Route circuit connections. Connecting a branch site to Virtual WAN with Express Route requires 1) Premium or Standard Circuit 2) Circuit to be in a Global Reach enabled location.
+ExpressRoute is a private and resilient way to connect your on-premises networks to the Microsoft Cloud. Virtual WAN supports ExpressRoute circuit connections.
+The following ExpressRoute circuit SKUs can be connected to Virtual WAN: Local, Standard, and Premium.
-ExpressRoute Global Reach is an add-on feature for ExpressRoute. With Global Reach, you can link ExpressRoute circuits together to make a private network between your on-premises networks. Branches that are connected to Azure Virtual WAN using ExpressRoute require the ExpressRoute Global Reach to communicate with each other.
+ExpressRoute Global Reach is an add-on feature for ExpressRoute. With Global Reach, you can link ExpressRoute circuits together to make a private network between your on-premises networks. Branches that are connected to Azure Virtual WAN using ExpressRoute require ExpressRoute Global Reach to communicate with each other. Global Reach is not required for transitivity between site-to-site VPN and ExpressRoute-connected branches.
In this model, each branch that is connected to the virtual WAN hub using ExpressRoute can connect to VNets using the branch-to-VNet path. Branch-to-branch traffic won't transit the hub because ExpressRoute Global Reach enables a more optimal path over Azure WAN.
vpn-gateway Tutorial Create Gateway Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/tutorial-create-gateway-portal.md
Previously updated : 12/20/2022 Last updated : 04/12/2023
In this tutorial, you learn how to:
The following diagram shows the virtual network and the VPN gateway created as part of this tutorial. ## Prerequisites
vpn-gateway Tutorial Protect Vpn Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/tutorial-protect-vpn-gateway.md
In this tutorial, you learn how to:
The following diagram shows the virtual network and the VPN gateway created as part of this tutorial. ## Prerequisites