Service | Microsoft Docs article | Related commit history on GitHub | Change details |
---|---|---|---|
active-directory | Authentication National Cloud | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/authentication-national-cloud.md | |
active-directory | Desktop Quickstart Portal Wpf | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/desktop-quickstart-portal-wpf.md | -# Quickstart: Acquire a token and call the Microsoft Graph API from a Windows desktop app +# Quickstart: Sign in users and call Microsoft Graph in a Windows desktop app > [!div renderon="docs"] > Welcome! This probably isn't the page you were expecting. While we work on a fix, this link should take you to the right article:+> ## Quickstart: Acquire a token and call the Microsoft Graph API from a Windows desktop application +> > In this quickstart, you download and run a code sample that demonstrates how a Windows Presentation Foundation (WPF) application can sign in users and get an access token to call the Microsoft Graph API. > > See [How the sample works](#how-the-sample-works) for an illustration. |
active-directory | Msal Android Shared Devices | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-android-shared-devices.md | These Microsoft applications support Azure AD's shared device mode: - [Microsoft Teams](/microsoftteams/platform/) - [Microsoft Managed Home Screen](/mem/intune/apps/app-configuration-managed-home-screen-app) app for Android Enterprise - [Microsoft Edge](/microsoft-edge) (in Public Preview)+- [Outlook](/mem/intune/apps/app-configuration-policies-outlook) (in Public Preview) - [Microsoft Power Apps](/power-apps) (in Public Preview) - [Yammer](/yammer) (in Public Preview) |
active-directory | Msal Shared Devices | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-shared-devices.md | ->[!IMPORTANT] +> [!IMPORTANT] > Shared device mode for iOS [!INCLUDE [PREVIEW BOILERPLATE](../../../includes/active-directory-develop-preview.md)] - ### Supporting multiple users on devices designed for one user -Because mobile devices running iOS or Android were designed for single users, most applications optimize their experience for use by a single user. Part of this optimized experience means enabling single sign-on across applications and keeping users signed in on their device. When a user removes their account from an application, the app typically doesn't consider it a security-related event. Many apps even keep a user's credentials around for quick sign-in. You may even have experienced this yourself when you've deleted an application from your mobile device and then reinstalled it, only to discover you're still signed in. +Because mobile devices running iOS or Android were designed for single users, most applications optimize their experience for use by a single user. Part of this optimized experience means enabling single sign-on (SSO) across applications and keeping users signed in on their device. When a user removes their account from an application, the app typically doesn't consider it a security-related event. Many apps even keep a user's credentials around for quick sign-in. You may even have experienced this yourself when you've deleted an application from your mobile device and then reinstalled it, only to discover you're still signed in. ### Automatic single sign-in and single sign-out -To allow an organization's employees to use its apps across a pool of devices shared by those employees, developers need to enable the opposite experience. Employees should be able to pick a device from the pool and perform a single gesture to "make it theirs" for the duration of their shift. At the end of their shift, they should be able to perform another gesture to sign out globally on the device, with all their personal and company information removed so they can return it to the device pool. Furthermore, if an employee forgets to sign out, the device should be automatically signed out at the end of their shift and/or after a period of inactivity. +To allow an organization's employees to use its apps across a pool of devices shared by those employees, developers need to enable the opposite experience. Employees should be able to pick a device from the pool and perform a single gesture to "make it theirs" during their shift. At the end of their shift, they should be able to perform another gesture to sign out globally on the device, with all their personal and company information removed so they can return it to the device pool. Furthermore, if an employee forgets to sign out, the device should be automatically signed out at the end of their shift and/or after a period of inactivity. -Azure Active Directory enables these scenarios with a feature called **shared device mode**. +Azure AD enables these scenarios with a feature called **shared device mode**. ## Introducing shared device mode -As mentioned, shared device mode is a feature of Azure Active Directory that enables you to: +As mentioned, shared device mode is a feature of Azure AD that enables you to: -* Build applications that support frontline workers -* Deploy devices to frontline workers with apps that support shared device mode. 
+- Build applications that support frontline workers +- Deploy devices to frontline workers with apps that support shared device mode. ### Build applications that support frontline workers -You can support frontline workers in your applications by using the Microsoft Authentication Library (MSAL) and [Microsoft Authenticator app](https://support.microsoft.com/account-billing/how-to-use-the-microsoft-authenticator-app-9783c865-0308-42fb-a519-8cf666fe0acc) to enable a device state called *shared device mode*. When a device is in shared device mode, Microsoft provides your application with information to allow it to modify its behavior based on the state of the user on the device, protecting user data. +You can support frontline workers in your applications by using the Microsoft Authentication Library (MSAL) and [Microsoft Authenticator app](https://support.microsoft.com/account-billing/how-to-use-the-microsoft-authenticator-app-9783c865-0308-42fb-a519-8cf666fe0acc) to enable a device state called _shared device mode_. When a device is in shared device mode, Microsoft provides your application with information to allow it to modify its behavior based on the state of the user on the device, protecting user data. Supported features are: -* **Sign in a user device-wide** through any supported application. -* **Sign out a user device-wide** through any supported application. -* **Query the state of the device** to determine if your application is on a device that's in shared device mode. -* **Query the device state of the user** on the device to determine if anything has changed since the last time your application was used. +- **Sign in a user device-wide** through any supported application. +- **Sign out a user device-wide** through any supported application. +- **Query the state of the device** to determine if your application is on a device that's in shared device mode. +- **Query the device state of the user** on the device to determine if anything has changed since the last time your application was used. Supporting shared device mode should be considered a feature upgrade for your application, and can help increase its adoption in environments where the same device is used among multiple users. -Your users depend on you to ensure their data isn't leaked to another user. Share Device Mode provides helpful signals to indicate to your application that a change you should manage has occurred. Your application is responsible for checking the state of the user on the device every time the app is used, clearing the previous user's data. This includes if it is reloaded from the background in multi-tasking. On a user change, you should ensure both the previous user's data is cleared and that any cached data being displayed in your application is removed. +Your users depend on you to ensure their data isn't leaked to another user. Share Device Mode provides helpful signals to indicate to your application that a change you should manage has occurred. Your application is responsible for checking the state of the user on the device every time the app is used, clearing the previous user's data. This includes if it's reloaded from the background in multi-tasking. On a user change, you should ensure both the previous user's data is cleared and that any cached data being displayed in your application is removed. To support all data loss prevention scenarios, we also recommend you integrate with the [Intune App SDK](/mem/intune/developer/app-sdk). 
By using the Intune App SDK, you can allow your application to support Intune [App Protection Policies](/mem/intune/apps/app-protection-policy). In particular, we recommend that you integrate with Intune's [selective wipe](/mem/intune/developer/app-sdk-android-phase5#selective-wipe) capabilities and [deregister the user on iOS](/mem/intune/developer/app-sdk-ios#deregister-user-accounts) during a sign-out. For details on how to modify your applications to support shared device mode, se Once your applications support shared device mode and include the required data and security changes, you can advertise them as being usable by frontline workers. -An organization's device administrators are able to deploy their devices and your applications to their stores and workplaces through a mobile device management (MDM) solution like Microsoft Intune. Part of the provisioning process is marking the device as a *Shared Device*. Administrators configure shared device mode by deploying the [Microsoft Authenticator app](https://support.microsoft.com/account-billing/how-to-use-the-microsoft-authenticator-app-9783c865-0308-42fb-a519-8cf666fe0acc) and setting shared device mode through configuration parameters. After performing these steps, all applications that support shared device mode will use the Microsoft Authenticator application to manage its user state and provide security features for the device and organization. +An organization's device administrators are able to deploy their devices and your applications to their stores and workplaces through a mobile device management (MDM) solution like Microsoft Intune. Part of the provisioning process is marking the device as a _Shared Device_. Administrators configure shared device mode by deploying the [Microsoft Authenticator app](https://support.microsoft.com/account-billing/how-to-use-the-microsoft-authenticator-app-9783c865-0308-42fb-a519-8cf666fe0acc) and setting shared device mode through configuration parameters. After performing these steps, all applications that support shared device mode will use the Microsoft Authenticator application to manage its user state and provide security features for the device and organization. ### Use App Protection Policies to provide data loss prevention between users.+ For data protection capabilities along with shared device mode, Microsoft’s supported data protection solution for Microsoft 365 applications on Android and iOS is Microsoft Intune Application Protection Policies. For more information about the policies, see [App protection policies overview - Microsoft Intune | Microsoft Learn](/mem/intune/apps/app-protection-policy). -When setting up App protection policies for shared devices, we recommend using [level 2 enterprise enhanced data protection](/mem/intune/apps/app-protection-framework#level-2-enterprise-enhanced-data-protection). With level 2 data protection, you can restrict data transfer scenarios that may cause data to move to parts of the device that are not cleared with shared device mode. +When setting up App protection policies for shared devices, we recommend using [level 2 enterprise enhanced data protection](/mem/intune/apps/app-protection-framework#level-2-enterprise-enhanced-data-protection). With level 2 data protection, you can restrict data transfer scenarios that may cause data to move to parts of the device that aren't cleared with shared device mode. ## Next steps -We support iOS and Android platforms for shared device mode. 
Review the documentation below for your platform to begin supporting frontline workers in your applications. +We support iOS and Android platforms for shared device mode. For more information, see: -* [Supporting shared device mode for iOS](msal-ios-shared-devices.md) -* [Supporting shared device mode for Android](msal-android-shared-devices.md) +- [Supporting shared device mode for iOS](msal-ios-shared-devices.md) +- [Supporting shared device mode for Android](msal-android-shared-devices.md) |
active-directory | Service Accounts Governing Azure | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/service-accounts-governing-azure.md | Title: Governing Azure Active Directory service accounts description: Principles and procedures for managing the lifecycle of service accounts in Azure Active Directory. -+ Previously updated : 08/19/2022 Last updated : 02/06/2023 -# Governing Azure AD service accounts +# Governing Azure Active Directory service accounts -There are three types of service accounts in Azure Active Directory (Azure AD): [managed identities](service-accounts-managed-identities.md), [service principals](service-accounts-principal.md), and user accounts employed as service accounts. As you create these service accounts for automated use, they're granted permissions to access resources in Azure and Azure AD. Resources can include Microsoft 365 services, software as a service (SaaS) applications, custom applications, databases, HR systems, and so on. Governing Azure AD service accounts means that you manage their creation, permissions, and lifecycle to ensure security and continuity. +There are three types of service accounts in Azure Active Directory (Azure AD): managed identities, service principals, and user accounts employed as service accounts. When you create service accounts for automated use, they're granted permissions to access resources in Azure and Azure AD. Resources can include Microsoft 365 services, software as a service (SaaS) applications, custom applications, databases, HR systems, and so on. Governing Azure AD service account is managing creation, permissions, and lifecycle to ensure security and continuity. -> [!IMPORTANT] -> We do not recommend using user accounts as service accounts as they are inherently less secure. This includes on-premises service accounts that are synced to Azure AD, as they are not converted to service principals. Instead, we recommend the use of managed identities or service principals. Note that at this time the use of conditional access policies with service principals is called Conditional Access for workload identities and it's in public preview. +Learn more: +* [Securing managed identities](service-accounts-managed-identities.md) +* [Securing service principals](service-accounts-principal.md) ++> [!NOTE] +> We do not recommend user accounts as service accounts because they are less secure. This includes on-premises service accounts synced to Azure AD, because they aren't converted to service principals. Instead, we recommend managed identities, or service principals, and the use of Conditional Access. ++[What is Conditional Access?](../conditional-access/overview.md) ## Plan your service account -Before creating a service account, or registering an application, document the service account's key information. Having information documented makes it easier to effectively monitor and govern the account. We recommend collecting the following data and tracking it in your centralized Configuration Management Database (CMDB). +Before creating a service account, or registering an application, document the service account key information. Use the information to monitor and govern the account. We recommend collecting the following data and tracking it in your centralized Configuration Management Database (CMDB). 
| Data| Description| Details | | - | - | - |-| Owner| User or group that is accountable for managing and monitoring the service account.| Provision the owner with necessary permissions to monitor the account and implement a way to mitigate issues. Issue mitigation may be done by the owner, or via a request to IT. | -| Purpose| How the account will be used.| Map the service account to a specific service, application, or script. Avoid creating multi-use service accounts. | -| Permissions (Scopes)| Anticipated set of permissions.| Document the resources it will access and the permissions to those resources. | -| CMDB Link| Link to the resources to be accessed, and scripts in which the service account is used.| Ensure you document the resource and script owners so that you can communicate any necessary upstream and downstream effects of changes. | -| Risk assessment| Risk and business impact if the account were to be compromised.| Use this information to narrow the scope of permissions and determine who should have access to the account information. | -| Period for review| The schedule on which the service account is to be reviewed by the owner.| Use this to schedule review communications and reviews. Document what should happen if a review is not performed by a specific time after the scheduled review period. | -| Lifetime| Anticipated maximum lifetime of account.| Use this to schedule communications to the owner, and to ultimately disable then delete the accounts. Where possible, set an expiration date for credentials, where credentials cannot be rolled over automatically. | -| Name| Standardized name of account| Create a naming schema for all service accounts so that you can easily search, sort, and filter on service accounts. | +| Owner| User or group accountable for managing and monitoring the service account| Grant the owner permissions to monitor the account and implement a way to mitigate issues. Issue mitigation is done by the owner, or by request to an IT team. | +| Purpose| How the account is used| Map the service account to a service, application, or script. Avoid creating multi-use service accounts. | +| Permissions (Scopes)| Anticipated set of permissions| Document the resources it accesses and permissions for those resources | +| CMDB Link| Link to the accessed resources, and scripts in which the service account is used| Document the resource and script owners to communicate the effects of change | +| Risk assessment| Risk and business effect, if the account is compromised|Use the information to narrow the scope of permissions and determine access to information | +| Period for review| The cadence of service account reviews, by the owner| Review communications and reviews. Document what happens if a review is performed after the scheduled review period. | +| Lifetime| Anticipated maximum account lifetime| Use this measurement to schedule communications to the owner, disable, and then delete the accounts. Set an expiration date for credentials that prevents them from rolling over automatically. | +| Name| Standardized account name| Create a naming convention for service accounts to search, sort, and filter them | -## Use the principle of least privileges -Grant the service account only the permissions necessary to perform its tasks, and no more. If a service account needs high-level permissions, for example a global administrator level of privilege, evaluate why and try to reduce the necessary permissions. 
+## Principle of least privileges +Grant the service account permissions needed to perform tasks, and no more. If a service account needs high-level permissions, for example a Global Administrator, evaluate why and try to reduce permissions. We recommend the following practices for service account privileges. -**Permissions** --* Do not assign built-in roles to service accounts. Instead, use the [OAuth2 permission grant model for Microsoft Graph](/graph/api/resources/oauth2permissiongrant), --* If the service principal must be assigned a privileged role, consider assigning a [custom role](../roles/custom-create.md) with specific, required privileged, in a time-bound fashion. --* Do not include service accounts as members of any groups with elevated permissions. --* [Use PowerShell to enumerate members of privileged roles](/powershell/module/azuread/get-azureaddirectoryrolemember), such as -`Get-AzureADDirectoryRoleMember`, and filter for objectType "Service Principal". -- or use - `Get-AzureADServicePrincipal | % { Get-AzureADServiceAppRoleAssignment -ObjectId $_ }` --* [Use OAuth 2.0 scopes](../develop/v2-permissions-and-consent.md) to limit the functionality a service account can access on a resource. -* Service principals and managed identities can use OAuth 2.0 scopes in either a delegated context that is impersonating a signed-on user, or as service account in the application context. In the application context no is signed-on. --* Check the scopes service accounts request for resources to ensure they're appropriate. For example, if an account is requesting Files.ReadWrite.All, evaluate if it actually needs only File.Read.All. For more information on permissions, see the [Overview of Microsoft Graph permissions](/graph/permissions-overview). +### Permissions -* Ensure you trust the developer of the application or API with the access requested to your resources. +* Don't assign built-in roles to service accounts + * Instead, use the [OAuth2 permission grant model for Microsoft Graph](/graph/api/resources/oauth2permissiongrant) +* The service principal is assigned a privileged role + * [Create and assign a custom role in Azure Active Directory](../roles/custom-create.md) +* Don't include service accounts as members of any groups with elevated permissions +* [Use PowerShell to enumerate members of privileged roles](/powershell/module/azuread/get-azureaddirectoryrolemember): + +>`Get-AzureADDirectoryRoleMember`, and filter for objectType "Service Principal", or use</br> +>`Get-AzureADServicePrincipal | % { Get-AzureADServiceAppRoleAssignment -ObjectId $_ }` -**Duration** +* [Use OAuth 2.0 scopes](../develop/v2-permissions-and-consent.md) to limit the functionality a service account can access on a resource +* Service principals and managed identities can use OAuth 2.0 scopes in a delegated context impersonating a signed-on user, or as service account in the application context. In the application context, no one is signed in. +* Confirm the scopes service accounts request for resources + * If an account requests Files.ReadWrite.All, evaluate if it needs File.Read.All + * [Overview of Microsoft Graph permissions](/graph/permissions-reference) +* Ensure you trust the application developer, or API, with the requested access -* Limit service account credentials (client secret, certificate) to an anticipated usage period. +### Duration -* Schedule periodic reviews the use and purpose of service accounts. Ensure reviews are conducted prior to expiration of the account. 
+* Limit service account credentials (client secret, certificate) to an anticipated usage period +* Schedule periodic reviews of service account usage and purpose + * Ensure reviews occur prior to account expiration -Once you have a clear understanding of the purpose, scope, and necessary permissions, create your service account. +After you understand the purpose, scope, and permissions, create your service account, use the instructions in the following articles. -[Create and use managed identities](../../app-service/overview-managed-identity.md?tabs=dotnet) +* [How to use managed identities for App Service and Azure Functions](../../app-service/overview-managed-identity.md?tabs=dotnet) +* [Create an Azure Active Directory application and service principal that can access resources](../develop/howto-create-service-principal-portal.md) -[Create and use service principals](../develop/howto-create-service-principal-portal.md) --Use a managed identity when possible. If you cannot use a managed identity, use a service principal. If you cannot use a service principal, then and only then use an Azure AD user account. -- +Use a managed identity when possible. If you can't use a managed identity, use a service principal. If you can't use a service principal, then use an Azure AD user account. ## Build a lifecycle process -Managing the lifecycle of a service account starts with planning and ends with its permanent deletion. --This article has previously covered the planning and creation portion. You must also monitor, review permissions, determine an account's continued usage, and ultimately deprovision the account. +A service account lifecycle starts with planning, and ends with permanent deletion. The following sections cover how you monitor, review permissions, determine continued account usage, and ultimately deprovision the account. ### Monitor service accounts -Proactively monitor your service accounts to ensure the service account's usage patterns reflects the intended patterns and that the service account is still actively used. +Monitor your service accounts to ensure usage patterns are correct, and that the service account is used. -**Collect and monitor service account sign-ins using one of the following methods:** +#### Collect and monitor service account sign-ins -* Using the Azure AD Sign-In Logs in the Azure AD Portal. +Use one of the following monitoring methods: -* Exporting the Azure AD Sign-In Logs to [Azure Storage](../../storage/index.yml), [Azure Event Hubs](../../event-hubs/index.yml), or [Azure Monitor](../../azure-monitor/logs/data-platform-logs.md). +* Azure AD Sign-In Logs in the Azure AD portal +* Export the Azure AD Sign-In Logs to + * [Azure Storage](../../storage/index.yml) + * [Azure Event Hubs](../../event-hubs/index.yml), or + * [Azure Monitor Logs overview](../../azure-monitor/logs/data-platform-logs.md) +Use the following screenshot to see service principal sign-ins. - + -**Intelligence that you should look for in the Sign-In logs includes:** +#### Sign-in log details -* Are there service accounts that no longer sign in to the tenant? +Look for the following details in sign-in logs. -* Are sign-in patterns of service accounts changing? +* Service accounts not signed in to the tenant +* Changes in sign-in service account patterns -We recommend you export Azure AD sign-In logs and import them into your existing Security Information and Event Management (SIEM) tools such as Microsoft Sentinel. Use your SIEM to build alerting and dashboards. 
+We recommend you export Azure AD sign-in logs, and then import them into a security information and event management (SIEM) tool, such as Microsoft Sentinel. Use the SIEM tool to build alerts and dashboards. ### Review service account permissions -Regularly review the permissions granted and scopes accessed by service accounts to see if they can be reduced eliminated. --* Use [PowerShell](/powershell/module/azuread/get-azureadserviceprincipaloauth2permissiongrant) to [build automation for checking and documenting](https://gist.github.com/psignoret/41793f8c6211d2df5051d77ca3728c09) scopes to which consent is granted to a service account. --* Use PowerShell to [review existing service principals' credentials](https://github.com/AzureAD/AzureADAssessment) and check their validity. +Regularly review service account permissions and accessed scopes to see if they can be reduced or eliminated. -* Do not set service principal's credentials to "Never expire". +* Use [PowerShell](/powershell/module/azuread/get-azureadserviceprincipaloauth2permissiongrant) to [build automation to check and document](https://gist.github.com/psignoret/41793f8c6211d2df5051d77ca3728c09) scopes for service account +* Use PowerShell to [review service principal credentials](https://github.com/AzureAD/AzureADAssessment) and confirm validity +* Don't set service principal credentials to **Never expire** +* Use certificates or credentials stored in Azure Key Vault, when possible + * [what is Azure Key Vault?](../../key-vault/general/basic-concepts.md) -* Use certificates or credentials stored in Azure KeyVault where possible. --Microsoft's free PowerShell sample collects service principal's OAuth2 grants and credential information, records them in a comma-separated values file (CSV), and a Power BI sample dashboard to interpret and use the data. for more information, see [AzureAD/AzureADAssessment: Tooling for assessing an Azure AD tenant state and configuration (github.com)](https://github.com/AzureAD/AzureADAssessment) +The free PowerShell sample collects service principal OAuth2 grants and credential information, records them in a comma-separated values (CSV) file, and a Power BI sample dashboard. For more information, see [Microsoft Azure AD Assessment, Assessor Guide](https://github.com/AzureAD/AzureADAssessment). ### Recertify service account use -Establish a review process to ensure that service accounts are regularly reviewed by their owners and the security or IT team at regular intervals. --**The process should include:** --* How to determine each service accounts' review cycle (should be documented in your CMDB). --* The communications to owner and security or IT teams before reviews start. +Establish a regular review process to ensure service accounts are regularly reviewed by owners, security team, or IT team. -* The timing and content of warning communications if the review is missed. +The process includes: -* Instructions on what to do if the owners fail to review or respond. For example, you may want to disable (but not delete) the account until the review is complete. +* Determine service account review cycle, and document it in your CMDB +* Communications to owner, security team, IT team, before a review +* Determine warning communications, and their timing, if the review is missed +* Instructions if owners fail to review or respond + * Disable, but don't delete, the account until the review is complete +* Instructions to determine dependencies. 
Notify resource owners of effects -* Instructions on determining upstream and downstream dependencies and notifying other resource owners of any effects. +The review includes the owner and an IT partner, and they certify: -**The review should include the owner and their IT partner certifying that:** --* The account is still necessary. --* The permissions granted to the account are adequate and necessary, or a change is requested. --* The access to the account and its credentials is controlled. --* The credentials the account uses are appropriate, in respect to the risk the account was assessed with (both credential type and credential lifetime) --* The account's risk scoring hasn't changed since the last recertification --* An update on the expected lifetime of the account, and the next recertification date. +* Account is necessary +* Permissions to the account are adequate and necessary, or a change is requested +* Access to the account, and its credentials, are controlled +* Account credentials are accurate: credential type and lifetime +* Account risk score hasn't changed since the previous recertification +* Update the expected account lifetime, and the next recertification date ### Deprovision service accounts -**Deprovision service accounts under the following circumstances:**** --* The script or application the service account was created for is retired. --* The function within the script or application the service account is used for (for example, access to a specific resource) is retired. -+Deprovision service accounts under the following circumstances: -* The service account is replaced with a different service account. +* Account script or application is retired +* Account script or application function is retired. For example, access to a resource. +* Service account is replaced by another service account +* Credentials expired, or the account is non-functional, and there aren't complaints -* The credentials expired, or the account is otherwise non-functional, and there aren't any complaints. +Deprovisioning includes the following tasks: -**The processes for deprovisioning should include the following tasks.** +After the associated application or script is deprovisioned: -1. Once the associated application or script is deprovisioned, [monitor sign-ins](../reports-monitoring/concept-sign-ins.md) and resource access by the service account. -- * If the account still is active, determine how it's being used before taking subsequent steps. - -2. If this is a managed service identity, then disable the service account from signing in, but don't remove it from the directory. --3. Revoke role assignments and OAuth2 consent grants for the service account. --4. After a defined period, and ample warning to owners, delete the service account from the directory. 
+* [Monitor sign-ins](../reports-monitoring/concept-sign-ins.md) and resource access by the service account + * If the account is active, determine how it's being used before continuing +* For a managed service identity, disable service account sign-in, but don't remove it from the directory +* Revoke service account role assignments and OAuth2 consent grants +* After a defined period, and warning to owners, delete the service account from the directory ## Next steps-For more information on securing Azure service accounts, see: --[Introduction to Azure service accounts](service-accounts-introduction-azure.md) --[Securing managed identities](service-accounts-managed-identities.md) --[Securing service principles](service-accounts-principal.md) --- - +* [Securing cloud-based service accounts](service-accounts-introduction-azure.md) +* [Securing managed identities](service-accounts-managed-identities.md) +* [Securing service principal](service-accounts-principal.md) |
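The row above recommends enumerating privileged role members and reviewing service principal credentials so none are left to never expire. As a starting point for such a review, here is a minimal PowerShell sketch (an illustration, not code from the quoted article); it assumes the AzureAD module used by the cmdlets quoted above, an existing `Connect-AzureAD` session with directory read permissions, and an arbitrary 90-day horizon and output file name:

```powershell
# Minimal sketch: list every service principal credential with its end date so that
# soon-to-expire or never-expiring secrets can be reviewed. Assumes the AzureAD module
# and an existing Connect-AzureAD session; the 90-day horizon is an arbitrary example.
$horizon = (Get-Date).AddDays(90)

Get-AzureADServicePrincipal -All $true | ForEach-Object {
    $sp = $_
    # Client secrets (PasswordCredentials) and certificates (KeyCredentials) together.
    (@($sp.PasswordCredentials) + @($sp.KeyCredentials)) | Where-Object { $_ } | ForEach-Object {
        [pscustomobject]@{
            DisplayName = $sp.DisplayName
            AppId       = $sp.AppId
            EndDate     = $_.EndDate
            ExpiresSoon = $_.EndDate -lt $horizon   # candidates for rotation or cleanup
        }
    }
} | Sort-Object EndDate | Export-Csv -Path .\service-principal-credentials.csv -NoTypeInformation
```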
active-directory | Service Accounts Group Managed | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/service-accounts-group-managed.md | Title: Secure group managed service accounts | Azure Active Directory -description: A guide to securing group managed service account (gMSA) computer accounts. --+ Title: Secure group managed service accounts +description: A guide to securing group managed service accounts (gMSAs) + Previously updated : 08/20/2022 Last updated : 02/06/2023 -Group managed service accounts (gMSAs) are managed domain accounts that you use to help secure services. gMSAs can run on a single server or on a server farm, such as systems behind a network load balancing or Internet Information Services (IIS) server. After you configure your services to use a gMSA principal, password management for that account is handled by the Windows operating system. +Group managed service accounts (gMSAs) are domain accounts to help secure services. gMSAs can run on one server, or in a server farm, such as systems behind a network load balancing or Internet Information Services (IIS) server. After you configure your services to use a gMSA principal, account password management is handled by the Windows operating system (OS). -## Benefits of using gMSAs +## Benefits of gMSAs -gMSAs offer a single identity solution with greater security. At the same time, to help reduce administrative overhead, they: +gMSAs are an identity solution with greater security that help reduce administrative overhead: -* **Set strong passwords**: gMSAs use 240-byte, randomly generated complex passwords. The complexity and length of gMSA passwords minimizes the likelihood of a service getting compromised by brute force or dictionary attacks. +* **Set strong passwords** - 240-byte, randomly generated passwords: the complexity and length of gMSA passwords minimizes the likelihood of compromise by brute force or dictionary attacks +* **Cycle passwords regularly** - password management goes to the Windows OS, which changes the password every 30 days. Service and domain administrators don't need to schedule password changes, or manage service outages. +* **Support deployment to server farms** - deploy gMSAs to multiple servers to support load balanced solutions where multiple hosts run the same service +* **Support simplified service principal name (SPN) management** - set up an SPN with PowerShell, when you create an account. + * In addition, services that support automatic SPN registrations might do so against the gMSA, if the gMSA permissions are set correctly. -* **Cycle passwords regularly**: gMSAs shift password management to the Windows operating system, which changes the password every 30 days. Service and domain administrators no longer need to schedule password changes or manage service outages to help keep service accounts secure. +## Using gMSAs -* **Support deployment to server farms**: The ability to deploy gMSAs to multiple servers allows for the support of load balanced solutions where multiple hosts run the same service. --* **Support simplified service principal name (SPN) management**: You can set up an SPN by using PowerShell when you create an account. In addition, services that support automatic SPN registrations might do so against the gMSA, provided that the gMSA permissions are correctly set. --## When to use gMSAs --Use gMSAs as the preferred account type for on-premises services unless a service, such as Failover Clustering, doesn't support it. 
+Use gMSAs as the account type for on-premises services unless a service, such as failover clustering, doesn't support it. > [!IMPORTANT]-> You must test your service with gMSAs before you deploy it into production. To do so, set up a test environment to ensure that the application can use the gMSA, and then access the resources it needs to access. For more information, see [Support for group managed service accounts](/system-center/scom/support-group-managed-service-accounts). -+> Test your service with gMSAs before it goes to production. Set up a test environment to ensure the application uses the gMSA, then accesses resources. For more information, see [Support for group managed service accounts](/system-center/scom/support-group-managed-service-accounts?view=sc-om-2022&preserve-view=true). -If a service doesn't support the use of gMSAs, your next best option is to use a standalone managed service account (sMSA). An sMSA provides the same functionality as a gMSA, but it's intended for deployment on a single server only. +If a service doesn't support gMSAs, you can use a standalone managed service account (sMSA). An sMSA has the same functionality, but is intended for deployment on a single server. -If you can't use a gMSA or sMSA that's supported by your service, you must configure the service to run as a standard user account. Service and domain administrators are required to observe strong password management processes to help keep the account secure. +If you can't use a gMSA or sMSA supported by your service, configure the service to run as a standard user account. Service and domain administrators are required to observe strong password management processes to help keep the account secure. -## Assess the security posture of gMSAs +## Assess gSMA security posture -gMSA accounts are inherently more secure than standard user accounts, which require ongoing password management. However, it's important to consider a gMSA's scope of access as you look at its overall security posture. --Potential security issues and mitigations for using gMSAs are shown in the following table: +gMSAs are more secure than standard user accounts, which require ongoing password management. However, consider gMSA scope of access in relation to security posture. Potential security issues and mitigations for using gMSAs are shown in the following table: | Security issue| Mitigation | | - | - |-| gMSA is a member of privileged groups. | <li>Review your group memberships. To do so, you create a PowerShell script to enumerate all group memberships. You can then filter a resultant CSV file by the names of your gMSA files.<li>Remove the gMSA from privileged groups.<li>Grant the gMSA only the rights and permissions it requires to run its service (consult with your service vendor). -| gMSA has read/write access to sensitive resources. | <li>Audit access to sensitive resources.<li>Archive audit logs to a SIEM, such as Azure Log Analytics or Microsoft Sentinel, for analysis.<li>Remove unnecessary resource permissions if you detect an undesirable level of access. | -| | | +| gMSA is a member of privileged groups | <li>Review your group memberships. Create a PowerShell script to enumerate group memberships. Filter the resultant CSV file by gMSA file names.<li>Remove the gMSA from privileged groups.<li>Grant the gMSA rights and permissions it requires to run its service. See your service vendor. 
+| gMSA has read/write access to sensitive resources | <li>Audit access to sensitive resources.<li>Archive audit logs to a SIEM, such as Azure Log Analytics or Microsoft Sentinel, for analysis.<li>Remove unnecessary resource permissions if there's an unnecessary access level. | ## Find gMSAs -Your organization might already have created gMSAs. To retrieve these accounts, run the following PowerShell cmdlets: +Your organization might have gMSAs. To retrieve these accounts, run the following PowerShell cmdlets: ```powershell Get-ADServiceAccount Test-ADServiceAccount Uninstall-ADServiceAccount ``` --To work effectively, gMSAs must be in the Managed Service Accounts AD container. -+### Managed Service Accounts container - +To work effectively, gMSAs must be in the Managed Service Accounts container. + + -To find service MSAs that might not be in the list, run the following commands: +To find service MSAs not in the list, run the following commands: ```powershell Get-ADServiceAccount -Filter * -# This PowerShell cmdlet will return all managed service accounts (both gMSAs and sMSAs). An administrator can differentiate between the two by examining the ObjectClass attribute on returned accounts. +# This PowerShell cmdlet returns managed service accounts (gMSAs and sMSAs). Differentiate by examining the ObjectClass attribute on returned accounts. # For gMSA accounts, ObjectClass = msDS-GroupManagedServiceAccount Get-ADServiceAccount -Filter * | where-object {$_.ObjectClass -eq "msDS-GroupM ## Manage gMSAs -To manage gMSA accounts, you can use the following Active Directory PowerShell cmdlets: +To manage gMSAs, use the following Active Directory PowerShell cmdlets: `Get-ADServiceAccount` To manage gMSA accounts, you can use the following Active Directory PowerShell c `Uninstall-ADServiceAccount` > [!NOTE]-> Beginning with Windows Server 2012, the *-ADServiceAccount cmdlets work with gMSAs by default. For more information about using the preceding cmdlets, see [Get started with group managed service accounts](/windows-server/security/group-managed-service-accounts/getting-started-with-group-managed-service-accounts). +> In Windows Server 2012, and later versions, the *-ADServiceAccount cmdlets work with gMSAs. Learn more: [Get started with group managed service accounts](/windows-server/security/group-managed-service-accounts/getting-started-with-group-managed-service-accounts). ## Move to a gMSA-gMSA accounts are the most secure type of service account for on-premises needs. If you can move to one, you should. Additionally, consider moving your services to Azure and your service accounts to Azure Active Directory. To move to a gMSA account, do the following: --1. Ensure that the [Key Distribution Service (KDS) root key](/windows-server/security/group-managed-service-accounts/create-the-key-distribution-services-kds-root-key) is deployed in the forest. This is a one-time operation. -1. [Create a new gMSA](/windows-server/security/group-managed-service-accounts/getting-started-with-group-managed-service-accounts). +gMSAs are a secure service account type for on-premises. It's recommended you use gMSAs, if possible. In addition, consider moving your services to Azure and your service accounts to Azure Active Directory. + +To move to a gMSA: -1. Install the new gMSA on each host that runs the service. +1. Ensure the [Key Distribution Service (KDS) root key](/windows-server/security/group-managed-service-accounts/create-the-key-distribution-services-kds-root-key) is deployed in the forest. 
This is a one-time operation. +2. [Create a new gMSA](/windows-server/security/group-managed-service-accounts/getting-started-with-group-managed-service-accounts). +3. Install the new gMSA on hosts that run the service. + > [!NOTE] - > For more information about creating and installing a gMSA on a host, prior to configuring your service to use the gMSA, see [Get started with group managed service accounts](/previous-versions/windows/it-pro/windows-server-2012-R2-and-2012/jj128431(v=ws.11)). --1. Change your service identity to gMSA, and specify a blank password. --1. Validate that your service is working under the new gMSA identity. --1. Delete the old service account identity. + > Before configuring your service to use the gMSA, see [Get started with group managed service accounts](/previous-versions/windows/it-pro/windows-server-2012-R2-and-2012/jj128431(v=ws.11)). - +4. Change your service identity to gMSA. +5. Specify a blank password. +6. Validate your service is working under the new gMSA identity. +7. Delete the old service account identity. ## Next steps To learn more about securing service accounts, see the following articles: * [Introduction to on-premises service accounts](service-accounts-on-premises.md) * [Secure standalone managed service accounts](service-accounts-standalone-managed.md) -* [Secure computer accounts](service-accounts-computer.md) -* [Secure user accounts](service-accounts-user-on-premises.md) +* [Secure computer accounts with Active Directory](service-accounts-computer.md) +* [Secure user-based service accounts in Active Directory](service-accounts-user-on-premises.md) * [Govern on-premises service accounts](service-accounts-govern-on-premises.md) |
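The migration steps quoted in the row above (deploy the KDS root key, create the gMSA, install it on each host, then validate) map to a handful of ActiveDirectory module cmdlets. A minimal sketch, assuming the RSAT ActiveDirectory PowerShell module on a domain-joined management host; the account name, DNS host name, and host group below are hypothetical placeholders:

```powershell
# Minimal sketch of the gMSA steps described above. Assumes the ActiveDirectory module;
# "svc-web01", "contoso.com", and "WebFarmHosts" are hypothetical placeholders.

# One-time, per forest: deploy the KDS root key (allow ~10 hours for replication).
Add-KdsRootKey -EffectiveImmediately

# Create the gMSA and allow the host group to retrieve its managed password.
New-ADServiceAccount -Name "svc-web01" `
    -DNSHostName "svc-web01.contoso.com" `
    -PrincipalsAllowedToRetrieveManagedPassword "WebFarmHosts"

# On each host that runs the service: install the account and verify it can be used.
Install-ADServiceAccount -Identity "svc-web01"
Test-ADServiceAccount -Identity "svc-web01"   # returns True when the host can use the gMSA
```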
active-directory | Whats Deprecated Azure Ad | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/whats-deprecated-azure-ad.md | Use the following table to learn about changes including deprecations, retiremen > [!NOTE] > Dates and times are United States Pacific Standard Time, and are subject to change. -|Functionality, feature, or service|Change|New tenant change date |Current tenant change date| -||||| -|Microsoft Authenticator app [Number matching](../authentication/how-to-mfa-number-match.md)|Feature change|Feb 27, 2023|Feb 27, 2023| -|Azure AD DS [virtual network deployments](../../active-directory-domain-services/migrate-from-classic-vnet.md)|Retirement|Mar 1, 2023|Mar 1, 2023| -|[License management API, PowerShell](https://techcommunity.microsoft.com/t5/microsoft-entra-azure-ad-blog/migrate-your-apps-to-access-the-license-managements-apis-from/ba-p/2464366)|Retirement|Nov 1, 2022|Mar 31, 2023| -|[Azure AD Authentication Library (ADAL)](https://techcommunity.microsoft.com/t5/microsoft-entra-azure-ad-blog/microsoft-entra-change-announcements-september-2022-train/ba-p/2967454)|Retirement|Jun 30, 2023|Jun 30, 2023| -|[Azure AD Graph API](https://techcommunity.microsoft.com/t5/microsoft-entra-azure-ad-blog/microsoft-entra-change-announcements-september-2022-train/ba-p/2967454)|Deprecation|Jun 30, 2023|Jun 30, 2023| -|[Azure AD PowerShell](https://techcommunity.microsoft.com/t5/microsoft-entra-azure-ad-blog/microsoft-entra-change-announcements-september-2022-train/ba-p/2967454)|Retirement|Jun 30, 2023|Jun 30, 2023| -|[Azure AD MFA Server](https://techcommunity.microsoft.com/t5/microsoft-entra-azure-ad-blog/microsoft-entra-change-announcements-september-2022-train/ba-p/2967454)|Retirement|Sep 30, 2024|Sep 30, 2024| +|Functionality, feature, or service|Change|Change date | +|||:| +|Microsoft Authenticator app [Number matching](../authentication/how-to-mfa-number-match.md)|Feature change|Feb 27, 2023| +|Azure AD DS [virtual network deployments](../../active-directory-domain-services/migrate-from-classic-vnet.md)|Retirement|Mar 1, 2023| +|[License management API, PowerShell](https://techcommunity.microsoft.com/t5/microsoft-entra-azure-ad-blog/migrate-your-apps-to-access-the-license-managements-apis-from/ba-p/2464366)|Retirement|*Mar 31, 2023| +|[Azure AD Authentication Library (ADAL)](https://techcommunity.microsoft.com/t5/microsoft-entra-azure-ad-blog/microsoft-entra-change-announcements-september-2022-train/ba-p/2967454)|Retirement|Jun 30, 2023| +|[Azure AD Graph API](https://techcommunity.microsoft.com/t5/microsoft-entra-azure-ad-blog/microsoft-entra-change-announcements-september-2022-train/ba-p/2967454)|Deprecation|Jun 30, 2023| +|[Azure AD PowerShell and MSOnline PowerShell](https://techcommunity.microsoft.com/t5/microsoft-entra-azure-ad-blog/microsoft-entra-change-announcements-september-2022-train/ba-p/2967454)|Deprecation|Jun 30, 2023| +|[Azure AD MFA Server](https://techcommunity.microsoft.com/t5/microsoft-entra-azure-ad-blog/microsoft-entra-change-announcements-september-2022-train/ba-p/2967454)|Retirement|Sep 30, 2024| ++\* The legacy license management API and PowerShell cmdlets will not work for **new tenants** created after Nov 1, 2022. 
> [!IMPORTANT] Use the definitions in this section help clarify the state, availability, and su ### Terminology * **End-of-life** - engineering investments have ended, and the feature is unavailable to any customer-* **Current tenant change date** - the change date goes into effect for tenants created before the new tenant change date -* **New tenant change date** - the change date goes into effect for tenants created after the change date ## Next steps [What's new in Azure Active Directory?](../../active-directory/fundamentals/whats-new.md) |
active-directory | Manage Roles Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/manage-roles-portal.md | Follow these steps to assign Azure AD roles using the Azure portal. Your experie  -1. Select a role to see its assignments. +1. Find the role you need. You can use the search box or **Add filters** to filter the roles. - To help you find the role you need, use **Add filters** to filter the roles. +1. Select the role name to open the role. Don't add a check mark next to the role. ++  1. Select **Add assignments** and then select the users you want to assign to this role. Follow these steps to assign roles using the [Roles and administrators](https://  -1. Select a role to see its eligible, active, and expired role assignments. +1. Find the role you need. You can use the search box or **Add filters** to filter the roles. ++1. Select the role name to open the role and see its eligible, active, and expired role assignments. Don't add a check mark next to the role. - To help you find the role you need, use **Add filters** to filter the roles. +  1. Select **Add assignments**. |
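The quoted steps make the assignment in the Azure portal. For teams that script this instead, here is a minimal PowerShell sketch (not part of the quoted article), assuming the AzureAD module, a signed-in administrator who can manage roles, and a hypothetical user UPN and role name:

```powershell
# Minimal sketch: assign an Azure AD role with the AzureAD module instead of the portal.
# Assumes Connect-AzureAD has been run; the UPN and role name are hypothetical examples.
$user = Get-AzureADUser -ObjectId "alice@contoso.com"

# Built-in roles appear in Get-AzureADDirectoryRole only after they have been activated
# (Enable-AzureADDirectoryRole) at least once in the tenant.
$role = Get-AzureADDirectoryRole | Where-Object { $_.DisplayName -eq "User Administrator" }

Add-AzureADDirectoryRoleMember -ObjectId $role.ObjectId -RefObjectId $user.ObjectId
```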
aks | Quick Kubernetes Deploy Bicep | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/learn/quick-kubernetes-deploy-bicep.md | Title: Quickstart - Create an Azure Kubernetes Service (AKS) cluster by using Bicep description: Learn how to quickly create a Kubernetes cluster using a Bicep file and deploy an application in Azure Kubernetes Service (AKS)- Last updated 11/01/2022 |
aks | Quick Kubernetes Deploy Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/learn/quick-kubernetes-deploy-cli.md | Title: 'Quickstart: Deploy an AKS cluster by using Azure CLI' description: Learn how to quickly create a Kubernetes cluster, deploy an application, and monitor performance in Azure Kubernetes Service (AKS) using the Azure CLI.- Last updated 11/01/2022 |
aks | Quick Kubernetes Deploy Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/learn/quick-kubernetes-deploy-portal.md | Title: 'Quickstart: Deploy an AKS cluster by using the Azure portal' description: Learn how to quickly create a Kubernetes cluster, deploy an application, and monitor performance in Azure Kubernetes Service (AKS) using the Azure portal.- Last updated 11/01/2022 |
aks | Quick Kubernetes Deploy Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/learn/quick-kubernetes-deploy-powershell.md | Title: 'Quickstart: Deploy an AKS cluster by using PowerShell' description: Learn how to quickly create a Kubernetes cluster and deploy an application in Azure Kubernetes Service (AKS) using PowerShell.- Last updated 11/01/2022 |
aks | Quick Kubernetes Deploy Rm Template | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/learn/quick-kubernetes-deploy-rm-template.md | Title: Quickstart - Create an Azure Kubernetes Service (AKS) cluster description: Learn how to quickly create a Kubernetes cluster using an Azure Resource Manager template and deploy an application in Azure Kubernetes Service (AKS)- Last updated 11/01/2022 |
aks | Quick Windows Container Deploy Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/learn/quick-windows-container-deploy-cli.md | Title: Create a Windows Server container on an AKS cluster by using Azure CLI description: Learn how to quickly create a Kubernetes cluster, deploy an application in a Windows Server container in Azure Kubernetes Service (AKS) using the Azure CLI.- Last updated 11/01/2022 |
aks | Quick Windows Container Deploy Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/learn/quick-windows-container-deploy-powershell.md | Title: Create a Windows Server container on an AKS cluster by using PowerShell description: Learn how to quickly create a Kubernetes cluster, deploy an application in a Windows Server container in Azure Kubernetes Service (AKS) using PowerShell.- Last updated 11/01/2022 |
aks | Tutorial Kubernetes Workload Identity | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/learn/tutorial-kubernetes-workload-identity.md | Title: Tutorial - Use a workload identity with an application on Azure Kubernetes Service (AKS) description: In this Azure Kubernetes Service (AKS) tutorial, you deploy an Azure Kubernetes Service cluster and configure an application to use a workload identity.- Last updated 01/11/2023 |
aks | Tutorial Kubernetes App Update | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/tutorial-kubernetes-app-update.md | To correctly use the updated image, tag the *azure-vote-front* image with the lo Use [docker tag][docker-tag] to tag the image. Replace `<acrLoginServer>` with your ACR login server name or public registry hostname, and update the image version to *:v2* as follows: ```console-docker tag mcr.microsoft.com/azuredocs/azure-vote-front:v1 <acrLoginServer>/azure-vote-front:v2 +docker tag /azure-vote-front:v1 /azure-vote-front:v2 ``` Now use [docker push][docker-push] to upload the image to your registry. Replace `<acrLoginServer>` with your ACR login server name. |
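The diff above retags the rebuilt image for the registry and pushes it. A minimal PowerShell sketch of the same step (an illustration, not the tutorial's own commands), assuming the Az modules, a local Docker engine already signed in to the registry, and hypothetical resource group and registry names:

```powershell
# Minimal sketch: look up the ACR login server, then retag and push the updated image.
# Assumes Connect-AzAccount has been run and Docker is authenticated to the registry
# (for example via az acr login); the names below are hypothetical placeholders.
$registry    = Get-AzContainerRegistry -ResourceGroupName "myResourceGroup" -Name "myContainerRegistry"
$loginServer = $registry.LoginServer

docker tag mcr.microsoft.com/azuredocs/azure-vote-front:v1 "$loginServer/azure-vote-front:v2"
docker push "$loginServer/azure-vote-front:v2"
```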
api-management | Validate Jwt Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/validate-jwt-policy.md | The `validate-jwt` policy enforces existence and validity of a supported JSON we * **HS256** - the key must be provided inline within the policy in the Base64-encoded form. * **RS256** - the key may be provided either via an OpenID configuration endpoint, or by providing the ID of an uploaded certificate (in PFX format) that contains the public key, or the modulus-exponent pair of the public key. * The policy supports tokens encrypted with symmetric keys using the following encryption algorithms: A128CBC-HS256, A192CBC-HS384, A256CBC-HS512.+* To configure the policy with one or more OpenID configuration endpoints for use with a self-hosted gateway, the OpenID configuration endpoints URLs must also be reachable by the cloud gateway. * You can use access restriction policies in different scopes for different purposes. For example, you can secure the whole API with Azure AD authentication by applying the `validate-jwt` policy on the API level, or you can apply it on the API operation level and use `claims` for more granular control. |
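Because the added note concerns where the OpenID configuration endpoint must be reachable from, a configuration sketch may help readers place the policy. The following is a minimal, hedged example of applying a `validate-jwt` policy at API scope with PowerShell; it assumes the Az.ApiManagement module, and the resource group, service name, API ID, tenant, and audience values are hypothetical placeholders:

```powershell
# Minimal sketch: apply a validate-jwt policy at API scope. Assumes the Az.ApiManagement
# module and Connect-AzAccount; all names, the tenant, and the audience are placeholders.
$context = New-AzApiManagementContext -ResourceGroupName "myResourceGroup" -ServiceName "myApimService"

$policy = @"
<policies>
  <inbound>
    <base />
    <validate-jwt header-name="Authorization" failed-validation-httpcode="401"
                  failed-validation-error-message="Unauthorized. Access token is missing or invalid.">
      <openid-config url="https://login.microsoftonline.com/contoso.onmicrosoft.com/v2.0/.well-known/openid-configuration" />
      <audiences>
        <audience>api://my-backend-app-id</audience>
      </audiences>
    </validate-jwt>
  </inbound>
  <backend><base /></backend>
  <outbound><base /></outbound>
  <on-error><base /></on-error>
</policies>
"@

Set-AzApiManagementPolicy -Context $context -ApiId "echo-api" -Policy $policy
```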
app-service | Networking | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/networking.md | Title: App Service Environment networking description: App Service Environment networking details Previously updated : 09/27/2022 Last updated : 02/06/2023 The size of the subnet can affect the scaling limits of the App Service plan ins >[!NOTE] > Windows Containers uses an additional IP address per app for each App Service plan instance, and you need to size the subnet accordingly. If your App Service Environment has for example 2 Windows Container App Service plans each with 25 instances and each with 5 apps running, you will need 300 IP addresses and additional addresses to support horizontal (up/down) scale. -If you use a smaller subnet, be aware of the following: +If you use a smaller subnet, be aware of the following limitations: - Any particular subnet has five addresses reserved for management purposes. In addition to the management addresses, App Service Environment dynamically scales the supporting infrastructure, and uses between 4 and 27 addresses, depending on the configuration and load. You can use the remaining addresses for instances in the App Service plan. The minimal size of your subnet is a `/27` address space (32 addresses). As you scale your App Service plans in your App Service Environment, you'll use ## Ports and network restrictions -For your app to receive traffic, ensure that inbound network security group (NSG) rules allow the App Service Environment subnet to receive traffic from the required ports. In addition to any ports you'd like to receive traffic on, you should ensure that Azure Load Balancer is able to connect to the subnet on port 80. This is used for health checks of the internal virtual machine. You can still control port 80 traffic from the virtual network to your subnet. +For your app to receive traffic, ensure that inbound network security group (NSG) rules allow the App Service Environment subnet to receive traffic from the required ports. In addition to any ports you'd like to receive traffic on, you should ensure that Azure Load Balancer is able to connect to the subnet on port 80. This port is used for health checks of the internal virtual machine. You can still control port 80 traffic from the virtual network to your subnet. It's a good idea to configure the following inbound NSG rule: The normal app access ports inbound are as follows: You can set route tables without restriction. You can tunnel all of the outbound application traffic from your App Service Environment to an egress firewall device, such as Azure Firewall. In this scenario, the only thing you have to worry about is your application dependencies. -Application dependencies include endpoints that your app needs during runtime. Besides APIs and services the app is calling, this could also be derived endpoints like certificate revocation list (CRL) check endpoints and identity/authentication endpoint, for example Azure Active Directory. If you are using [continuous deployment in App Service](../deploy-continuous-deployment.md), you might also need to allow endpoints depending on type and language. Specifically for [Linux continuous deployment](https://github.com/microsoft/Oryx/blob/main/doc/hosts/appservice.md#network-dependencies), you will need to allow `oryx-cdn.microsoft.io:443`. +Application dependencies include endpoints that your app needs during runtime. 
Besides APIs and services the app is calling, dependencies could also be derived endpoints like certificate revocation list (CRL) check endpoints and identity/authentication endpoint, for example Azure Active Directory. If you're using [continuous deployment in App Service](../deploy-continuous-deployment.md), you might also need to allow endpoints depending on type and language. Specifically for [Linux continuous deployment](https://github.com/microsoft/Oryx/blob/main/doc/hosts/appservice.md#network-dependencies), you'll need to allow `oryx-cdn.microsoft.io:443`. You can put your web application firewall devices, such as Azure Application Gateway, in front of inbound traffic. Doing so allows you to expose specific apps on that App Service Environment. Your application will use one of the default outbound addresses for egress traffic to public endpoints. If you want to customize the outbound address of your applications on an App Service Environment, you can add a NAT gateway to your subnet. > [!NOTE]-> Outbound SMTP connectivity (port 25) is supported for App Service Environment v3. The supportability is determined by a setting on the subscription where the virtual network is deployed. For virtual networks/subnets created before 1. August 2022 you need to initiate a temporary configuration change to the virtual network/subnet for the setting to be synchronized from the subscription. An example could be to add a temporary subnet, associate/dissociate an NSG temporarily or configure a service endpoint temporarily. For more information and troubleshooting see [Troubleshoot outbound SMTP connectivity problems in Azure](../../virtual-network/troubleshoot-outbound-smtp-connectivity.md). +> Outbound SMTP connectivity (port 25) is supported for App Service Environment v3. The supportability is determined by a setting on the subscription where the virtual network is deployed. For virtual networks/subnets created before 1. August 2022 you need to initiate a temporary configuration change to the virtual network/subnet for the setting to be synchronized from the subscription. An example could be to add a temporary subnet, associate/dissociate an NSG temporarily or configure a service endpoint temporarily. For more information and troubleshooting, see [Troubleshoot outbound SMTP connectivity problems in Azure](../../virtual-network/troubleshoot-outbound-smtp-connectivity.md). ## Private endpoint In order to enable Private Endpoints for apps hosted in your App Service Environment, you must first enable this feature at the App Service Environment level. -You can activate it through the Azure portal: in the App Service Environment configuration pane turn **on** the setting `Allow new private endpoints`. +You can activate it through the Azure portal. In the App Service Environment configuration pane, turn **on** the setting `Allow new private endpoints`. Alternatively the following CLI can enable it: ```azurecli-interactive For more information about Private Endpoint and Web App, see [Azure Web App Priv ## DNS -The following sections describe the DNS considerations and configuration that apply inbound to and outbound from your App Service Environment. +The following sections describe the DNS considerations and configuration that apply inbound to and outbound from your App Service Environment. The examples use the domain suffix `appserviceenvironment.net` from Azure Public Cloud. If you're using other clouds like Azure Government, you'll need to use their respective domain suffix. 
### DNS configuration to your App Service Environment To configure DNS in Azure DNS private zones: 1. Create an A record in that zone that points * to the inbound IP address. 1. Create an A record in that zone that points *.scm to the inbound IP address. -In addition to the default domain provided when an app is created, you can also add a custom domain to your app. You can set a custom domain name without any validation on your apps. If you're using custom domains, you need to ensure they have DNS records configured. You can follow the preceding guidance to configure DNS zones and records for a custom domain name (simply replace the default domain name with the custom domain name). The custom domain name works for app requests, but doesn't work for the `scm` site. The `scm` site is only available at *<appname>.scm.<asename>.appserviceenvironment.net*. +In addition to the default domain provided when an app is created, you can also add a custom domain to your app. You can set a custom domain name without any validation on your apps. If you're using custom domains, you need to ensure they have DNS records configured. You can follow the preceding guidance to configure DNS zones and records for a custom domain name (replace the default domain name with the custom domain name). The custom domain name works for app requests, but doesn't work for the `scm` site. The `scm` site is only available at *<appname>.scm.<asename>.appserviceenvironment.net*. ### DNS configuration for FTP access For FTP access to Internal Load balancer (ILB) App Service Environment v3 specif 1. Create an Azure DNS private zone named `ftp.appserviceenvironment.net`. 1. Create an A record in that zone that points `<App Service Environment-name>` to the inbound IP address. -In addition to setting up DNS, you also need to enable it in the [App Service Environment configuration](./configure-network-settings.md#ftp-access) as well as at the [app level](../deploy-ftp.md?tabs=cli#enforce-ftps). +In addition to setting up DNS, you also need to enable it in the [App Service Environment configuration](./configure-network-settings.md#ftp-access) and at the [app level](../deploy-ftp.md?tabs=cli#enforce-ftps). ### DNS configuration from your App Service Environment -The apps in your App Service Environment will use the DNS that your virtual network is configured with. If you want some apps to use a different DNS server, you can manually set it on a per app basis, with the app settings `WEBSITE_DNS_SERVER` and `WEBSITE_DNS_ALT_SERVER`. `WEBSITE_DNS_ALT_SERVER` configures the secondary DNS server. The secondary DNS server is only used when there is no response from the primary DNS server. +The apps in your App Service Environment will use the DNS that your virtual network is configured with. If you want some apps to use a different DNS server, you can manually set it on a per app basis, with the app settings `WEBSITE_DNS_SERVER` and `WEBSITE_DNS_ALT_SERVER`. `WEBSITE_DNS_ALT_SERVER` configures the secondary DNS server. The secondary DNS server is only used when there's no response from the primary DNS server. ## More resources |
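
The Azure DNS private zone and A records described in the App Service Environment DNS steps above can be sketched with the Azure CLI. This is an illustrative sketch only: the resource group `MyResourceGroup`, virtual network `MyVNet`, App Service Environment name `my-ase`, and inbound address `10.0.1.11` are placeholders.

```azurecli-interactive
# Create the private DNS zone that matches the App Service Environment default domain suffix.
az network private-dns zone create --resource-group MyResourceGroup \
    --name my-ase.appserviceenvironment.net

# Link the zone to the virtual network so resources in it resolve the zone.
az network private-dns link vnet create --resource-group MyResourceGroup \
    --zone-name my-ase.appserviceenvironment.net --name my-ase-dns-link \
    --virtual-network MyVNet --registration-enabled false

# Wildcard A records for the app sites and the scm (Kudu) sites, pointing to the inbound IP address.
az network private-dns record-set a add-record --resource-group MyResourceGroup \
    --zone-name my-ase.appserviceenvironment.net --record-set-name '*' --ipv4-address 10.0.1.11
az network private-dns record-set a add-record --resource-group MyResourceGroup \
    --zone-name my-ase.appserviceenvironment.net --record-set-name '*.scm' --ipv4-address 10.0.1.11
```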
app-service | Troubleshoot Diagnostic Logs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/troubleshoot-diagnostic-logs.md | In your application code, you use the usual logging facilities to send log messa ## Stream logs -Before you stream logs in real time, enable the log type that you want. Any information written to files ending in .txt, .log, or .htm that are stored in the */LogFiles* directory (d:/home/logfiles) is streamed by App Service. +Before you stream logs in real time, enable the log type that you want. Any information written to the console output or files ending in .txt, .log, or .htm that are stored in the */home/LogFiles* directory (D:\home\LogFiles) is streamed by App Service. > [!NOTE] > Some types of logging buffer write to the log file, which can result in out of order events in the stream. For example, an application log entry that occurs when a user visits a page may be displayed in the stream before the corresponding HTTP log entry for the page request. |
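
A minimal sketch of enabling file system application logging and then streaming it with the Azure CLI; the app and resource group names are placeholders.

```azurecli-interactive
# Enable application logging to the App Service file system (placeholder names).
az webapp log config --name MyWebApp --resource-group MyResourceGroup \
    --application-logging filesystem --level information

# Stream the log output (console output plus files under /home/LogFiles) in real time.
az webapp log tail --name MyWebApp --resource-group MyResourceGroup
```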
automation | Runbooks | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/troubleshoot/runbooks.md | Title: Troubleshoot Azure Automation runbook issues description: This article tells how to troubleshoot and resolve issues with Azure Automation runbooks. Previously updated : 09/16/2021 Last updated : 02/06/2022 +## Start-AzAutomationRunbook fails with "runbookName does not match expected pattern" error message ++### Issue +When you run `Start-AzAutomationRunbook` to start specific runbooks: ++```powershell +start-azautomationRunbook -Name "Test_2" -AutomationAccountName "AutomationParent" -ResourceGroupName "AutomationAccount" +``` +It fails with the following error: + +`Start-AzAutomationRunbook: "runbookname" does not match expected pattern '^[a-zA-Z]*-*[a-zA-Z0-9]*$'` + +### Cause ++Code that was introduced in [1.9.0 version](https://www.powershellgallery.com/packages/Az.Automation/1.9.0) of the Az.Automation module verifies the names of the runbooks to start and incorrectly flags runbooks with multiple "-" characters or with an "_" character in the name as invalid. ++### Workaround +We recommend that you revert to [1.8.0 version](https://www.powershellgallery.com/packages/Az.Automation/1.8.0) of the module. ++### Resolution +Currently, we are working to deploy a fix to address this issue. + ## Diagnose runbook issues When you receive errors during runbook execution in Azure Automation, you can use the following steps to help diagnose the issues: |
azure-arc | Quickstart Connect System Center Virtual Machine Manager To Arc | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/system-center-virtual-machine-manager/quickstart-connect-system-center-virtual-machine-manager-to-arc.md | This QuickStart shows you how to connect your SCVMM management server to Azure A 1. Provide a name for **Custom location**. This is the name that you'll see when you deploy virtual machines. Name it for the datacenter or the physical location of your datacenter. For example: *contoso-nyc-dc.* - >[!Note] - >If you are using an existing resource bridge created for a different provider (HCI/VMware), ensure that you create a separate custom location for each provider. - 1. Leave the option **Use the same subscription and resource group as your resource bridge** selected. 1. Provide a name for your **SCVMM management server instance** in Azure. For example: *contoso-nyc-scvmm.* 1. Select **Next: Download and run script**. This QuickStart shows you how to connect your SCVMM management server to Azure A 1. Based on the operating system of your workstation, download the PowerShell or Bash script and copy it to the workstation. 1. To see the status of your onboarding after you run the script on your workstation, select **Next:Verification**. The onboarding isn't affected when you close this page. -## Run the script --Use the following instructions to run the script, depending on the Operating System of the workstation. -->[!NOTE] ->Before running the script, install the latest version of Azure CLI (2.36.0 or later). --**Known issue** --We are observing intermittent extension installation issues with Azure CLI 2.42.0 version. To avoid failures, install Azure CLI 2.41.0 versions. [Download the specific version of Azure CLI](https://learn.microsoft.com/cli/azure/install-azure-cli-windows?tabs=azure-cli#specific-version). - ### Windows Follow these instructions to run the script on a Windows machine. |
azure-monitor | Autoscale Common Scale Patterns | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/autoscale/autoscale-common-scale-patterns.md | In this example: + The default profile is irrelevant as there's no time that isn't covered by the other profiles. >[!Note]-> Creating a recurring profile with no end time is only supported via the portal. +> Creating a recurring profile with no end time is only supported via the portal and ARM templates. For more information on creating recurring profiles with ARM templates, see [Add a recurring profile using ARM templates](./autoscale-multiprofile.md?tabs=templates#add-a-recurring-profile-using-arm-templates). > If the end-time is not included in the CLI command, a default end-time of 23:59 will be implemented by creating a copy of the default profile with the naming convention `"name": {\"name\": \"Auto created default scale condition\", \"for\": \"<non-default profile name>\"}` :::image type="content" source="./media/autoscale-common-scale-patterns/scale-differently-on-weekends.png" alt-text="A screenshot showing two autoscale profiles, one default and one for weekends." lightbox="./media/autoscale-common-scale-patterns/scale-differently-on-weekends.png"::: |
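
For comparison with the portal/ARM behavior described above, here's a hedged sketch of creating a recurring weekend profile with the CLI, where an explicit end time is supplied (the autoscale setting, resource group, profile, and time zone values are placeholders).

```azurecli-interactive
# Sketch: a recurring weekend profile. The CLI applies a default end time of 23:59 when
# none is given, unlike the portal/ARM no-end-time behavior noted above. All names are
# placeholders; --copy-rules reuses the scale rules of an existing profile in the same setting.
az monitor autoscale profile create \
    --autoscale-name MyAutoscaleSetting \
    --resource-group MyResourceGroup \
    --name Weekends \
    --copy-rules Default \
    --min-count 1 --count 1 --max-count 2 \
    --timezone "Pacific Standard Time" \
    --recurrence week sat sun \
    --start 00:01 --end 23:59
```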
azure-monitor | Autoscale Multiprofile | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/autoscale/autoscale-multiprofile.md | See the autoscale section of the [ARM template resource definition](https://lear There is no specification in the template for end time. A profile will remain active until the next profile's start time. -## Add a recurring profile using AIM templates +## Add a recurring profile using ARM templates -The example below shows how to create two recurring profiles. One profile for weekends from 00:01 on Saturday morning and a second Weekday profile starting on Mondays at 04:00. That means that the weekend profile will start on Saturday morning at one minute passed midnight and end on Monday morning at 04:00. The Weekday profile will start at 4am on Monday end just after midnight on Saturday morning. +The example below shows how to create two recurring profiles. One profile for weekends from 00:01 on Saturday morning and a second Weekday profile starting on Mondays at 04:00. That means that the weekend profile will start on Saturday morning at one minute past midnight and end on Monday morning at 04:00. The Weekday profile will start at 4am on Monday and end just after midnight on Saturday morning. Use the following command to deploy the template: ` az deployment group create --name VMSS1-Autoscale-607 --resource-group rg-vmss1 --template-file VMSS1-autoscale.json` |
azure-monitor | Migrate Splunk To Azure Monitor Logs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/migrate-splunk-to-azure-monitor-logs.md | To set up a Log Analytics workspace for data collection: 1. Use [table-level configuration settings](../logs/manage-logs-tables.md) to: 1. [Define each table's log data plan](../logs/basic-logs-configure.md). - The default log data plan is Analytics, which lets you take advantage of Azure Monitor's rich monitoring and analytics capabilities. If youYou can + The default log data plan is Analytics, which lets you take advantage of Azure Monitor's rich monitoring and analytics capabilities. 1. [Set a data retention and archiving policy for specific tables](../logs/data-retention-archive.md), if you need them to be different from the workspace-level data retention and archiving policy. 1. [Modify the table schema](../logs/create-custom-table.md) based on your data model. |
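
As a sketch of the table-level settings mentioned above, the following CLI calls assume a placeholder workspace and example table names; the Basic plan is only available for a subset of tables, so confirm support for your specific tables.

```azurecli-interactive
# Switch a supported table to the Basic log data plan (placeholder workspace and table names).
az monitor log-analytics workspace table update \
    --resource-group MyResourceGroup --workspace-name MyWorkspace \
    --name ContainerLogV2 --plan Basic

# Set per-table interactive retention and total (archive) retention, in days.
az monitor log-analytics workspace table update \
    --resource-group MyResourceGroup --workspace-name MyWorkspace \
    --name Syslog --retention-time 30 --total-retention-time 365
```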
azure-monitor | Vminsights Enable Hybrid | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/vminsights-enable-hybrid.md | You can download the Dependency agent from these locations: | File | OS | Version | SHA-256 | |:--|:--|:--|:--|-| [InstallDependencyAgent-Windows.exe](https://aka.ms/dependencyagentwindows) | Windows | 9.10.15.22060 | 39427C875E08BF13E1FD3B78E28C96666B722DA675FAA94D8014D8F1A42AE724 | -| [InstallDependencyAgent-Linux64.bin](https://aka.ms/dependencyagentlinux) | Linux | 9.10.15.22060 | 5B99CDEA77C6328BDEF448EAC9A6DEF03CE5A732C5F7C98A4D4F4FFB6220EF58 | +| [InstallDependencyAgent-Windows.exe](https://aka.ms/dependencyagentwindows) | Windows | 9.10.16.22650 | BE537D4396625ADD93B8C1D5AF098AE9D9472D8A20B2682B32920C5517F1C041 | +| [InstallDependencyAgent-Linux64.bin](https://aka.ms/dependencyagentlinux) | Linux | 9.10.16.22650 | FF86D821BA845833C9FE5F6D5C8A5F7A60617D3AD7D84C75143F3E244ABAAB74 | ## Install the Dependency agent on Windows |
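
One way to verify a download against the SHA-256 values in the table above, sketched for the Linux installer; the Windows installer can be checked the same way with PowerShell's `Get-FileHash`.

```bash
# Download the Linux Dependency agent installer (version 9.10.16.22650 at the time of the table above).
wget -O InstallDependencyAgent-Linux64.bin https://aka.ms/dependencyagentlinux

# The printed hash should match the Linux value from the table:
# FF86D821BA845833C9FE5F6D5C8A5F7A60617D3AD7D84C75143F3E244ABAAB74
sha256sum InstallDependencyAgent-Linux64.bin
```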
azure-netapp-files | Azure Netapp Files Solution Architectures | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-solution-architectures.md | This section provides references to SAP on Azure solutions. * [Manual Recovery Guide for SAP Oracle 19c on Azure VMs from Azure NetApp Files snapshot with AzAcSnap](https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/manual-recovery-guide-for-sap-oracle-19c-on-azure-vms-from-azure/ba-p/3242408) * [SAP Oracle 19c System Refresh Guide on Azure VMs using Azure NetApp Files Snapshots with AzAcSnap](https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/sap-oracle-19c-system-refresh-guide-on-azure-vms-using-azure/ba-p/3708172) * [IBM Db2 Azure Virtual Machines DBMS deployment for SAP workload using Azure NetApp Files](../virtual-machines/workloads/sap/dbms_guide_ibm.md#using-azure-netapp-files)+* [DB2 Installation Guide on Azure NetApp Files](https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/db2-installation-guide-on-anf/ba-p/3709437) ### SAP IQ-NLS This section provides solutions for Azure platform services. * [Protecting Magento e-commerce platform in AKS against disasters with Astra Control Service](https://techcommunity.microsoft.com/t5/azure-architecture-blog/protecting-magento-e-commerce-platform-in-aks-against-disasters/ba-p/3285525) * [Protecting applications on private Azure Kubernetes Service clusters with Astra Control Service](https://techcommunity.microsoft.com/t5/azure-architecture-blog/protecting-applications-on-private-azure-kubernetes-service/ba-p/3289422) * [Providing Disaster Recovery to CloudBees-Jenkins in AKS with Astra Control Service](https://techcommunity.microsoft.com/t5/azure-architecture-blog/providing-disaster-recovery-to-cloudbees-jenkins-in-aks-with/ba-p/3553412)+* [Disaster protection for JFrog Artifactory in AKS with Astra Control Service and Azure NetApp Files](https://techcommunity.microsoft.com/t5/azure-architecture-blog/disaster-protection-for-jfrog-artifactory-in-aks-with-astra/ba-p/3701501) * [Develop and test easily on AKS with NetApp® Astra Control Service® and Azure NetApp Files](https://techcommunity.microsoft.com/t5/azure-architecture-blog/develop-and-test-easily-on-aks-with-netapp-astra-control-service/ba-p/3604225) ### Azure Machine Learning |
azure-resource-manager | Bicep Functions Lambda | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/bicep-functions-lambda.md | Last updated 09/20/2022 # Lambda functions for Bicep -This article describes the lambda functions to use in Bicep. [Lambda expressions (or lambda functions)](https://learn.microsoft.com/dotnet/csharp/language-reference/operators/lambda-expressions) are essentially blocks of code that can be passed as an argument. They can take multiple parameters, but are resticted to a single line of code. In Bicep, lambda expression is in this format: +This article describes the lambda functions to use in Bicep. [Lambda expressions (or lambda functions)](/dotnet/csharp/language-reference/operators/lambda-expressions) are essentially blocks of code that can be passed as an argument. They can take multiple parameters, but are restricted to a single line of code. In Bicep, lambda expression is in this format: ```bicep <lambda variable> => <expression> |
azure-resource-manager | Template Functions Lambda | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/template-functions-lambda.md | + + Title: Template functions - lambda +description: Describes the lambda functions to use in an Azure Resource Manager template (ARM template) +++ Last updated : 02/06/2023+++# Lambda functions for ARM templates ++This article describes the lambda functions to use in ARM templates. [Lambda expressions (or lambda functions)](/dotnet/csharp/language-reference/operators/lambda-expressions) are essentially blocks of code that can be passed as an argument. They can take multiple parameters, but are restricted to a single line of code. In ARM templates, lambda expression is in this format: ++```json +<lambda variable> => <expression> +``` ++> [!TIP] +> We recommend [Bicep](../bicep/overview.md) because it offers the same capabilities as ARM templates and the syntax is easier to use. To learn more, see [deployment](../bicep/bicep-functions-deployment.md) functions. ++## Limitations ++ARM template lambda function has these limitations: ++- Lambda expression can only be specified directly as function arguments in these functions: [`filter()`](#filter), [`map()`](#map), [`reduce()`](#reduce), and [`sort()`](#sort). +- Using lambda variables (the temporary variables used in the lambda expressions) inside resource or module array access isn't currently supported. +- Using lambda variables inside the [`listKeys`](./template-functions-resource.md#list) function isn't currently supported. +- Using lambda variables inside the [reference](./template-functions-resource.md#reference) function isn't currently supported. ++## filter ++`filter(inputArray, lambda expression)` ++Filters an array with a custom filtering function. ++In Bicep, use the [filter](../bicep/bicep-functions-lambda.md#filter) function. ++### Parameters ++| Parameter | Required | Type | Description | +|: |: |: |: | +| inputArray |Yes |array |The array to filter.| +| lambda expression |Yes |expression |The lambda expression applied to each input array element. If false, the item will be filtered out of the output array.| ++### Return value ++An array. ++### Examples ++The following examples show how to use the filter function. 
++```json +{ + "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#", + "contentVersion": "1.0.0.0", + "variables": { + "dogs": [ + { + "name": "Evie", + "age": 5, + "interests": [ + "Ball", + "Frisbee" + ] + }, + { + "name": "Casper", + "age": 3, + "interests": [ + "Other dogs" + ] + }, + { + "name": "Indy", + "age": 2, + "interests": [ + "Butter" + ] + }, + { + "name": "Kira", + "age": 8, + "interests": [ + "Rubs" + ] + } + ] + }, + "resources": {}, + "outputs": { + "oldDogs": { + "type": "array", + "value": "[filter(variables('dogs'), lambda('dog', greaterOrEquals(lambdaVariables('dog').age, 5)))]" + } + } +} +``` ++The output from the preceding example shows the dogs that are five or older: ++| Name | Type | Value | +| - | - | -- | +| oldDogs | Array | [{"name":"Evie","age":5,"interests":["Ball","Frisbee"]},{"name":"Kira","age":8,"interests":["Rubs"]}] | ++```json +{ + "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#", + "contentVersion": "1.0.0.0", + "variables": { + "copy": [ + { + "name": "itemForLoop", + "count": "[length(range(0, 10))]", + "input": "[range(0, 10)[copyIndex('itemForLoop')]]" + } + ] + }, + "resources": {}, + "outputs": { + "filteredLoop": { + "type": "array", + "value": "[filter(variables('itemForLoop'), lambda('i', greater(lambdaVariables('i'), 5)))]" + }, + "isEven": { + "type": "array", + "value": "[filter(range(0, 10), lambda('i', equals(0, mod(lambdaVariables('i'), 2))))]" + } + } +} +``` ++The output from the preceding example: ++| Name | Type | Value | +| - | - | -- | +| filteredLoop | Array | [6, 7, 8, 9] | +| isEven | Array | [0, 2, 4, 6, 8] | ++**filterdLoop** shows the numbers in an array that are greater than 5; and **isEven** shows the even numbers in the array. ++## map ++`map(inputArray, lambda expression)` ++Applies a custom mapping function to each element of an array. ++In Bicep, use the [map](../bicep/bicep-functions-lambda.md#map) function. ++### Parameters ++| Parameter | Required | Type | Description | +|: |: |: |: | +| inputArray |Yes |array |The array to map.| +| lambda expression |Yes |expression |The lambda expression applied to each input array element, in order to generate the output array.| ++### Return value ++An array. ++### Example ++The following example shows how to use the map function. 
++```json +{ + "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#", + "contentVersion": "1.0.0.0", + "variables": { + "dogs": [ + { + "name": "Evie", + "age": 5, + "interests": [ + "Ball", + "Frisbee" + ] + }, + { + "name": "Casper", + "age": 3, + "interests": [ + "Other dogs" + ] + }, + { + "name": "Indy", + "age": 2, + "interests": [ + "Butter" + ] + }, + { + "name": "Kira", + "age": 8, + "interests": [ + "Rubs" + ] + } + ] + }, + "resources": {}, + "outputs": { + "dogNames": { + "type": "array", + "value": "[map(variables('dogs'), lambda('dog', lambdaVariables('dog').name))]" + }, + "sayHi": { + "type": "array", + "value": "[map(variables('dogs'), lambda('dog', format('Hello {0}!', lambdaVariables('dog').name)))]" + }, + "mapObject": { + "type": "array", + "value": "[map(range(0, length(variables('dogs'))), lambda('i', createObject('i', lambdaVariables('i'), 'dog', variables('dogs')[lambdaVariables('i')].name, 'greeting', format('Ahoy, {0}!', variables('dogs')[lambdaVariables('i')].name))))]" + } + } +} +``` ++The output from the preceding example is: ++| Name | Type | Value | +| - | - | -- | +| dogNames | Array | ["Evie","Casper","Indy","Kira"] | +| sayHi | Array | ["Hello Evie!","Hello Casper!","Hello Indy!","Hello Kira!"] | +| mapObject | Array | [{"i":0,"dog":"Evie","greeting":"Ahoy, Evie!"},{"i":1,"dog":"Casper","greeting":"Ahoy, Casper!"},{"i":2,"dog":"Indy","greeting":"Ahoy, Indy!"},{"i":3,"dog":"Kira","greeting":"Ahoy, Kira!"}] | ++**dogNames** shows the dog names from the array of objects; **sayHi** concatenates "Hello" and each of the dog names; and **mapObject** creates another array of objects. ++## reduce ++`reduce(inputArray, initialValue, lambda expression)` ++Reduces an array with a custom reduce function. ++In Bicep, use the [reduce](../bicep/bicep-functions-lambda.md#reduce) function. ++### Parameters ++| Parameter | Required | Type | Description | +|: |: |: |: | +| inputArray |Yes |array |The array to reduce.| +| initialValue |No |any |Initial value.| +| lambda expression |Yes |expression |The lambda expression used to aggregate the current value and the next value.| ++### Return value ++Any. ++### Example ++The following examples show how to use the reduce function. ++```json +{ + "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#", + "contentVersion": "1.0.0.0", + "variables": { + "dogs": [ + { + "name": "Evie", + "age": 5, + "interests": [ + "Ball", + "Frisbee" + ] + }, + { + "name": "Casper", + "age": 3, + "interests": [ + "Other dogs" + ] + }, + { + "name": "Indy", + "age": 2, + "interests": [ + "Butter" + ] + }, + { + "name": "Kira", + "age": 8, + "interests": [ + "Rubs" + ] + } + ], + "ages": "[map(variables('dogs'), lambda('dog', lambdaVariables('dog').age))]" + }, + "resources": {}, + "outputs": { + "totalAge": { + "type": "int", + "value": "[reduce(variables('ages'), 0, lambda('cur', 'next', add(lambdaVariables('cur'), lambdaVariables('next'))))]" + }, + "totalAgeAdd1": { + "type": "int", + "value": "[reduce(variables('ages'), 1, lambda('cur', 'next', add(lambdaVariables('cur'), lambdaVariables('next'))))]" + } + } +} +``` ++The output from the preceding example is: ++| Name | Type | Value | +| - | - | -- | +| totalAge | int | 18 | +| totalAgeAdd1 | int | 19 | ++**totalAge** sums the ages of the dogs; **totalAgeAdd1** has an initial value of 1, and adds all the dog ages to the initial values. 
++```json +{ + "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#", + "contentVersion": "1.0.0.0", + "resources": {}, + "outputs": { + "reduceObjectUnion": { + "type": "object", + "value": "[reduce(createArray(createObject('foo', 123), createObject('bar', 456), createObject('baz', 789)), createObject(), lambda('cur', 'next', union(lambdaVariables('cur'), lambdaVariables('next'))))]" + } + } +} +``` ++The output from the preceding example is: ++| Name | Type | Value | +| - | - | -- | +| reduceObjectUnion | object | {"foo":123,"bar":456,"baz":789} | ++The [union](./template-functions-object.md#union) function returns a single object with all elements from the parameters. The function call unionizes the key value pairs of the objects into a new object. ++## sort ++`sort(inputArray, lambda expression)` ++Sorts an array with a custom sort function. ++In Bicep, use the [sort](../bicep/bicep-functions-lambda.md#sort) function. ++### Parameters ++| Parameter | Required | Type | Description | +|: |: |: |: | +| inputArray |Yes |array |The array to sort.| +| lambda expression |Yes |expression |The lambda expression used to compare two array elements for ordering. If true, the second element will be ordered after the first in the output array.| ++### Return value ++An array. ++### Example ++The following example shows how to use the sort function. ++```json +{ + "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#", + "contentVersion": "1.0.0.0", + "variables": { + "dogs": [ + { + "name": "Evie", + "age": 5, + "interests": [ + "Ball", + "Frisbee" + ] + }, + { + "name": "Casper", + "age": 3, + "interests": [ + "Other dogs" + ] + }, + { + "name": "Indy", + "age": 2, + "interests": [ + "Butter" + ] + }, + { + "name": "Kira", + "age": 8, + "interests": [ + "Rubs" + ] + } + ] + }, + "resources": {}, + "outputs": { + "dogsByAge": { + "type": "array", + "value": "[sort(variables('dogs'), lambda('a', 'b', less(lambdaVariables('a').age, lambdaVariables('b').age)))]" + } + } +} +``` ++The output from the preceding example sorts the dog objects from the youngest to the oldest: ++| Name | Type | Value | +| - | - | -- | +| dogsByAge | Array | [{"name":"Indy","age":2,"interests":["Butter"]},{"name":"Casper","age":3,"interests":["Other dogs"]},{"name":"Evie","age":5,"interests":["Ball","Frisbee"]},{"name":"Kira","age":8,"interests":["Rubs"]}] | ++## Next steps ++- See [Template functions - arrays](./template-functions-array.md) for additional array related template functions. |
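
If you want to try the preceding lambda examples, deploying a template file that contains them returns the results in the deployment outputs, which you can print directly; the resource group and file name below are placeholders.

```azurecli-interactive
# Deploy a template containing the lambda examples and print only its outputs
# (placeholder resource group and template file name).
az deployment group create \
    --resource-group MyResourceGroup \
    --template-file lambda-examples.json \
    --query properties.outputs
```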
azure-vmware | Attach Azure Netapp Files To Azure Vmware Solution Hosts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/attach-azure-netapp-files-to-azure-vmware-solution-hosts.md | There are some important best practices to follow for optimal performance of NFS - For optimized performance, choose either **UltraPerformance** gateway or **ErGw3Az** gateway, and enable [FastPath](../expressroute/expressroute-howto-linkvnet-arm.md#configure-expressroute-fastpath) from a private cloud to Azure NetApp Files volumes virtual network. View more detailed information on gateway SKUs at [About ExpressRoute virtual network gateways](../expressroute/expressroute-about-virtual-network-gateways.md). - Based on your performance requirements, select the correct service level needed for the Azure NetApp Files capacity pool. See [Service levels for Azure NetApp Files](../azure-netapp-files/azure-netapp-files-service-levels.md) to understand the throughput allowed per provisioned TiB for each service level. - Create one or more volumes based on the required throughput and capacity. See [Performance considerations](../azure-netapp-files/azure-netapp-files-performance-considerations.md) for Azure NetApp Files to understand how volume size, service level, and capacity pool QoS type will determine volume throughput. For assistance calculating workload capacity and performance requirements, contact your Azure VMware Solution or Azure NetApp Files field expert. The default maximum number of Azure NetApp Files datastores is 64, but it can be increased to a maximum of 256 by submitting a support ticket. To submit a support ticket, see [Create an Azure support request](../azure-portal/supportability/how-to-create-azure-support-request.md).-- Work with your Microsoft representative to ensure that the Azure VMware Solution private cloud and the Azure NetApp Files volumes are deployed within the same [availability zone](../availability-zones/az-overview.md#availability-zones). +- Work with your Microsoft representative to ensure that the Azure VMware Solution private cloud and the Azure NetApp Files volumes are deployed within the same [availability zone](../availability-zones/az-overview.md#availability-zones). Open a support request and ask that the NetApp account be pinned to the availability zone where AVS is deployed. > [!IMPORTANT] >Changing the Azure NetApp Files volumes tier after creating the datastore will result in unexpected behavior in portal and API due to metadata mismatch. Set your performance tier of the Azure NetApp Files volume when creating the datastore. If you need to change tier during run time, detach the datastore, change the performance tier of the volume and attach the datastore. We are working on improvements to make this seamless. |
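
For reference, attaching an existing Azure NetApp Files volume as a datastore can also be scripted. This is a rough sketch that assumes the `vmware` Azure CLI extension is installed; the private cloud, cluster, datastore name, and volume resource ID are placeholders.

```azurecli-interactive
# Attach an existing Azure NetApp Files NFS volume as a datastore on an Azure VMware Solution
# cluster (requires the "vmware" CLI extension; all names and IDs below are placeholders).
az vmware datastore netapp-volume create \
    --resource-group MyResourceGroup \
    --private-cloud MyPrivateCloud \
    --cluster Cluster-1 \
    --name ANFDatastore1 \
    --volume-id "/subscriptions/<subscription-id>/resourceGroups/<anf-rg>/providers/Microsoft.NetApp/netAppAccounts/<account>/capacityPools/<pool>/volumes/<volume>"
```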
bastion | Bastion Faq | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/bastion-faq.md | You may use a private DNS zone ending with one of the names listed above (ex: pr Azure Bastion isn't supported with Azure Private DNS Zones in national clouds. -### <a name="dns"></a>Does Azure Bastion support Private Link?" +### <a name="dns"></a>Does Azure Bastion support Private Link? No, Azure Bastion doesn't currently support private link. |
cognitive-services | Multivariate Architecture | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Anomaly-Detector/concepts/multivariate-architecture.md | Title: Predicative maintenance architecture for using the Anomaly Detector Multivariate API + Title: Predictive maintenance architecture for using the Anomaly Detector Multivariate API description: Reference architecture for using the Anomaly Detector Multivariate APIs to apply anomaly detection to your time series data for predictive maintenance. In the above architecture, streaming events coming from sensor data will be stor ## Next steps - [Quickstarts](../quickstarts/client-libraries-multivariate.md).-- [Best Practices](../concepts/best-practices-multivariate.md): This article is about recommended patterns to use with the multivariate APIs.+- [Best Practices](../concepts/best-practices-multivariate.md): This article is about recommended patterns to use with the multivariate APIs. |
cognitive-services | How To Pronunciation Assessment | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-pronunciation-assessment.md | You can get pronunciation assessment scores for: > [!NOTE] > The syllable group, phoneme name, and spoken phoneme of pronunciation assessment are currently only available for the en-US locale. > -> Usage of pronunciation assessment is charged the same as standard Speech to Text [pricing](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services). +> Usage of pronunciation assessment costs the same as standard Speech to Text pay-as-you-go [pricing](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services). Pronunciation assessment doesn't yet support commitment tier pricing. > > For information about availability of pronunciation assessment, see [supported languages](language-support.md?tabs=pronunciation-assessment) and [available regions](regions.md#speech-service). |
cognitive-services | Pronunciation Assessment Tool | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/pronunciation-assessment-tool.md | Pronunciation assessment provides various assessment results in different granul This article describes how to use the pronunciation assessment tool through the [Speech Studio](https://speech.microsoft.com). You can get immediate feedback on the accuracy and fluency of your speech without writing any code. For information about how to integrate pronunciation assessment in your speech applications, see [How to use pronunciation assessment](how-to-pronunciation-assessment.md). > [!NOTE]-> Usage of pronunciation assessment is charged the same as standard Speech to Text [pricing](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services). +> Usage of pronunciation assessment costs the same as standard Speech to Text pay-as-you-go [pricing](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services). Pronunciation assessment doesn't yet support commitment tier pricing. > > For information about availability of pronunciation assessment, see [supported languages](language-support.md?tabs=pronunciation-assessment) and [available regions](regions.md#speech-service). |
communication-services | Security | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/interop/guest/security.md | Title: Azure AD API permissions for communication as Teams external user + Title: Security for communication as Teams external user -description: This article describes Azure AD API permissions for communication as a Teams external user with Azure Communication Services. +description: This article describes security for communication as a Teams external user with Azure Communication Services. Microsoft Teams handles security using a combination of technologies and process Additionally, Microsoft Teams provides several policies and tenant configurations to control Teams external users joining and in-meeting experience. Teams administrators can use settings in the Microsoft Teams admin center or PowerShell to control whether Teams external users can join Teams meetings, bypass lobby, start a meeting, participate in chat, or default role assignment. You can learn more about the [policies here](./teams-administration.md). +## Microsoft Purview ++Microsoft Purview provides robust data security features to protect sensitive information. One of the key features of Purview is [data loss prevention (DLP)](/microsoft-365/compliance/dlp-microsoft-teams), which helps organizations to prevent accidental or unauthorized sharing of sensitive data. Developers can use [Communication Services UI library](../../ui-library/ui-library-overview.md) or follow [how-to guide](../../../how-tos/chat-sdk/data-loss-prevention.md) to support data loss prevention in Teams meetings. In addition, Purview offers [Customer Key](/microsoft-365/compliance/customer-key-overview), which allows customers to manage the encryption keys used to protect their data fully. Chat messages sent by Teams external users are encrypted at rest with the Customer Key provided in Purview. These features help customers meet compliance requirements. + ## Azure Communication Services Azure Communication Services handles security by implementing various security measures to prevent and mitigate common security threats. These measures include data encryption in transit and at rest, secure real-time communication through Microsoft's global network, and authentication mechanisms to verify the identity of users. The security framework for Azure Communication Services is based on industry standards and best practices. Azure also undergoes regular security assessments and audits to ensure that the platform meets industry standards for security and privacy. Additionally, Azure Communication Services integrates with other Azure security services, such as Azure Active Directory, to provide customers with a comprehensive security solution. Customers can also control access to the services and manage their security settings through the Azure portal. You can learn here more about [Azure security baseline](/security/benchmark/azure/baselines/azure-communication-services-security-baseline?toc=/azure/communication-services/toc.json), about security of [call flows](../../call-flows.md) and [call flow topologies](../../detailed-call-flows.md). |
communication-services | Emergency Calling Concept | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/telephony/emergency-calling-concept.md | -# Emergency Calling concept +# Emergency Calling concepts [!INCLUDE [Emergency Calling Notice](../../includes/emergency-calling-notice-include.md)] ## Overview Azure Communication Calling SDK can be used to add Enhanced 911 dialing and Public Safety Answering Point (PSAP) call-back support to your applications in the United States (US) & Puerto Rico. The capability to dial 911 and receive a call-back may be a requirement for your application. Verify the E911 requirements with your legal counsel. -Calls to 911 are routed over the Microsoft network. Microsoft assigns a temporary phone number as the Call Line Identity (CLI) when 911 calls from the US & Puerto Rico are placed. Microsoft temporarily maintains a mapping of the phone number to the caller's identity. If there is a call-back from the PSAP, we route the call directly to the originating 911 caller. The caller can accept incoming PSAP call even if inbound calling is disabled. +Calls to 911 are routed over the Microsoft network. Microsoft assigns a temporary phone number as the Call Line Identity (CLI) when 911 calls from the US & Puerto Rico are placed. Microsoft temporarily maintains a mapping of the phone number to the caller's identity. If there's a call-back from the PSAP, we route the call directly to the originating 911 caller. The caller can accept incoming PSAP call even if inbound calling is disabled. The service is available for Microsoft phone numbers. It requires that the Azure resource from where the 911 call originates has a Microsoft-issued phone number enabled with outbound dialing (also referred to as ΓÇÿmake calls'). Azure Communication Services direct routing is currently in public preview and n ## Enabling Emergency calling -Emergency dialing is automatically enabled for all users of the Azure Communication Client Calling SDK with an acquired Microsoft telephone number that is enabled for outbound dialing in the Azure resource. To use E911 with Microsoft phone numbers, follow the below steps: +Emergency dialing is automatically enabled for all users of the Azure Communication Client Calling SDK with an acquired Microsoft telephone number that is enabled for outbound dialing in the Azure resource. To use E911 with Microsoft phone numbers, follow the steps: 1. Acquire a Microsoft phone number in the Azure resource of the client application (at least one of the numbers in the Azure resource must have the ability to ΓÇÿMake CallsΓÇÖ) Emergency dialing is automatically enabled for all users of the Azure Communicat 1. Microsoft supports a country US and Puerto Rico ISO codes for 911 dialing - 1. If the country code is not provided to the SDK, the IP address is used to determine the country of the caller + 1. If the country code isn't provided to the SDK, the IP address is used to determine the country of the caller - 1. If the IP address cannot provide reliable geo-location, for example the user is on a Virtual Private Network, it is required to set the ISO Code of the calling country using the API in the Azure Communication Services Calling SDK. See example in the E911 quick start + 1. If the IP address can't provide reliable geo-location, for example the user is on a Virtual Private Network, it's required to set the ISO Code of the calling country using the API in the Azure Communication Services Calling SDK. 
See example in the E911 quick start - 1. If users are dialing from a US territory (for example Guam, US Virgin Islands, Northern Marianas, or American Samoa), it is required to set the ISO code to the US + 1. If users are dialing from a US territory (for example Guam, US Virgin Islands, Northern Marianas, or American Samoa), it's required to set the ISO code to the US - 1. If the caller is outside of the US and Puerto Rico, the call to 911 will not be permitted + 1. If the caller is outside of the US and Puerto Rico, the call to 911 won't be permitted -1. When testing your application dial 933 instead of 911. 933 number is enabled for testing purposes; the recorded message will confirm the phone number the emergency call originates from. You should hear a temporary number assigned by Microsoft and is not the `alternateCallerId` provided by the application +1. When testing your application dial 933 instead of 911. 933 is enabled for testing purposes; the recorded message will confirm the phone number the emergency call originates from. You should hear a temporary number assigned by Microsoft, which isn't the `alternateCallerId` provided by the application 1. Ensure your application supports [receiving an incoming call](../../how-tos/calling-sdk/manage-calls.md#receive-an-incoming-call) so call-backs from the PSAP are appropriately routed to the originator of the 911 call. To test inbound calling is working correctly, place inbound VoIP calls to the user of the Calling SDK The Emergency service is temporarily free to use for Azure Communication Service ## Emergency calling with Azure Communication Services direct routing -Emergency call is a regular call from a direct routing perspective. If you want to implement emergency calling with Azure Communication Services direct routing, you need to make sure that there is a routing rule for your emergency number (911, 112, etc.). You also need to make sure that your carrier processes emergency calls properly. -There is also an option to use purchased number as a caller ID for direct routing calls, in such case if there is no voice routing rule for emergency number, the call will fall back to Microsoft network, and we will treat it as a regular emergency call. Learn more about [voice routing fall back](./direct-routing-provisioning.md#outbound-voice-routing-considerations). +Emergency call is a regular call from a direct routing perspective. If you want to implement emergency calling with Azure Communication Services direct routing, you need to make sure that there's a routing rule for your emergency number (911, 112, etc.). You also need to make sure that your carrier processes emergency calls properly. +There's also an option to use purchased number as a caller ID for direct routing calls. In such case, if there's no voice routing rule for emergency number, the call will fall back to Microsoft network, and we'll treat it as a regular emergency call. Learn more about [voice routing fall back](./direct-routing-provisioning.md#outbound-voice-routing-considerations). ## Next steps |
communication-services | Get Started Raw Media Access | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/voice-video-calling/get-started-raw-media-access.md | Title: Quickstart - Add RAW media access to your app + Title: Quickstart - Add raw media access to your app -description: In this quickstart, you'll learn how to add raw media access calling capabilities to your app using Azure Communication Services. +description: In this quickstart, you'll learn how to add raw media access calling capabilities to your app by using Azure Communication Services. zone_pivot_groups: acs-plat-web-ios-android-windows -# QuickStart: Add raw media access to your app +# Quickstart: Add raw media access to your app ::: zone pivot="platform-windows" [!INCLUDE [Raw media with Windows](./includes/raw-medi)]+- Check out the [calling hero sample](../../samples/calling-hero-sample.md). +- Get started with the [UI Library](https://aka.ms/acsstorybook). +- Learn about [Calling SDK capabilities](./getting-started-with-calling.md?pivots=platform-web). +- Learn more about [how calling works](../../concepts/voice-video-calling/about-call-types.md). |
communications-gateway | Prepare To Deploy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/prepare-to-deploy.md | You'll need an onboarding partner for integrating with Microsoft Phone System. I You must ensure you've got two or more numbers that you own which are globally routable. Your onboarding team will require these numbers to configure test lines. -We strongly recommend that all operators have a support plan that includes technical support, such as a **Microsoft Unified** or **Premier** support plan. For more information, see [Compare support plans](https://azure.microsoft.com/support/plans/). +We strongly recommend that you have a support plan that includes technical support, such as [Microsoft Unified Support](https://www.microsoft.com/en-us/unifiedsupport/overview) or [Premier Support](https://www.microsoft.com/en-us/unifiedsupport/premier). ## 1. Configure Azure Active Directory in Operator Azure tenancy |
communications-gateway | Request Changes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/request-changes.md | Last updated 01/08/2023 If you notice problems with Azure Communications Gateway or you need Microsoft to make changes, you can raise a support request (also known as a support ticket). This article provides an overview of how to raise support requests for Azure Communications Gateway. For more detailed information on raising support requests, see [Create an Azure support request](../azure-portal/supportability/how-to-create-azure-support-request.md). -Azure provides unlimited support for subscription management, which includes billing, quota adjustments, and account transfers. For technical support, you need a support plan. We recommend you have at least a **Microsoft Unified** or **Premier** support plan. For more information, see [Compare support plans](https://azure.microsoft.com/support/plans/). +Azure provides unlimited support for subscription management, which includes billing, quota adjustments, and account transfers. For technical support, you need a support plan, such as [Microsoft Unified Support](https://www.microsoft.com/en-us/unifiedsupport/overview) or [Premier Support](https://www.microsoft.com/en-us/unifiedsupport/premier). ## Pre-requisites |
container-apps | Application Lifecycle Management | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/application-lifecycle-management.md | As a container app is updated with a [revision scope-change](revisions.md#revisi ### Zero downtime deployment -In single revision mode, Container Apps automatically ensures your app does not experience downtime when creating new a revision. The existing active revision is not deactivated until the new revision is ready. If ingress is enabled, the existing revision will continue to receive 100% of the traffic until the new revision is ready. +In single revision mode, Container Apps automatically ensures your app does not experience downtime when creating a new revision. The existing active revision is not deactivated until the new revision is ready. If ingress is enabled, the existing revision will continue to receive 100% of the traffic until the new revision is ready. > [!NOTE] > A new revision is considered ready when one of its replicas starts and becomes ready. A replica is ready when all of its containers start and pass their [startup and readiness probes](./health-probes.md). |
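
A brief sketch of what this looks like with the CLI (the `containerapp` extension; the app, resource group, and image names are placeholders): set single revision mode, then push an update, which creates a new revision while the current one keeps serving traffic until the new one is ready.

```azurecli-interactive
# Ensure the app runs in single revision mode (placeholder names; requires the containerapp extension).
az containerapp revision set-mode \
    --name my-container-app --resource-group MyResourceGroup --mode single

# A revision-scope change, such as a new image, creates a new revision.
# The existing revision keeps receiving traffic until the new revision's replicas are ready.
az containerapp update \
    --name my-container-app --resource-group MyResourceGroup \
    --image myregistry.azurecr.io/my-app:2.0.0
```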
cosmos-db | Resources Pricing | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/resources-pricing.md | Last updated 09/26/2022 [!INCLUDE [PostgreSQL](../includes/appliesto-postgresql.md)] For the most up-to-date general pricing information, see the service-[pricing page](https://azure.microsoft.com/pricing/details/cosmos-db/). +[pricing page](https://azure.microsoft.com/pricing/details/cosmos-db/postgresql/). To see the cost for the configuration you want, the-[Azure portal](https://portal.azure.com/#create/Microsoft.PostgreSQLServer) +[Azure portal](https://portal.azure.com/#create/Microsoft.DocumentDB) shows the monthly cost on the **Configure** tab based on the options you select. If you don't have an Azure subscription, you can use the Azure pricing calculator to get an estimated price. On the Azure Cosmos DB for PostgreSQL now helps you save money by prepaying for compute You don't need to assign the reservation to specific clusters. An already running cluster or ones that are newly deployed automatically get the benefit of reserved pricing. By purchasing a reservation, you're prepaying for the compute costs for one year or three years. As soon as you buy a reservation, the Azure Cosmos DB for PostgreSQL compute charges that match the reservation attributes are no longer charged at the pay-as-you-go rates. -A reservation doesn't cover software, networking, or storage charges associated with the clusters. At the end of the reservation term, the billing benefit expires, and the clusters are billed at the pay-as-you go price. Reservations don't autorenew. For pricing information, see the [Azure Cosmos DB for PostgreSQL reserved capacity offering](https://azure.microsoft.com/pricing/details/cosmos-db/). +A reservation doesn't cover software, networking, or storage charges associated with the clusters. At the end of the reservation term, the billing benefit expires, and the clusters are billed at the pay-as-you go price. Reservations don't autorenew. For pricing information, see the [Azure Cosmos DB for PostgreSQL reserved capacity offering](https://azure.microsoft.com/pricing/details/cosmos-db/postgresql/). You can buy Azure Cosmos DB for PostgreSQL reserved capacity in the [Azure portal](https://portal.azure.com/). Pay for the reservation [up front or with monthly payments](../../cost-management-billing/reservations/prepare-buy-reservation.md). To buy the reserved capacity: |
defender-for-cloud | Defender For Cloud Planning And Operations Guide | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-cloud-planning-and-operations-guide.md | Title: Defender for Cloud Planning and Operations Guide description: This document helps you to plan before adopting Defender for Cloud and considerations regarding daily operations. Previously updated : 01/24/2023 Last updated : 02/06/2023 # Planning and operations guide Defender for Cloud enables these individuals to meet these various responsibilit **Jeff (Workload Owner)** -- Manage a cloud workload and its related resources-- Responsible for implementing and maintaining protections in accordance with company security policy+- Manage a cloud workload and its related resources. ++- Responsible for implementing and maintaining protections in accordance with company security policy. **Ellen (CISO/CIO)** -- Responsible for all aspects of security for the company-- Wants to understand the company's security posture across cloud workloads-- Needs to be informed of major attacks and risks+- Responsible for all aspects of security for the company. ++- Wants to understand the company's security posture across cloud workloads. ++- Needs to be informed of major attacks and risks. **David (IT Security)** -- Sets company security policies to ensure the appropriate protections are in place-- Monitors compliance with policies-- Generates reports for leadership or auditors+- Sets company security policies to ensure the appropriate protections are in place. ++- Monitors compliance with policies. ++- Generates reports for leadership or auditors. **Judy (Security Operations)** -- Monitors and responds to security alerts 24/7-- Escalates to Cloud Workload Owner or IT Security Analyst+- Monitors and responds to security alerts at any time. ++- Escalates to Cloud Workload Owner or IT Security Analyst. **Sam (Security Analyst)** -- Investigate attacks-- Work with Cloud Workload Owner to apply remediation+- Investigate attacks. ++- Work with Cloud Workload Owner to apply remediation. -Defender for Cloud uses [Azure role-based access control (Azure RBAC)](../role-based-access-control/role-assignments-portal.md), which provides [built-in roles](../role-based-access-control/built-in-roles.md) that can be assigned to users, groups, and services in Azure. When a user opens Defender for Cloud, they only see information related to resources they have access to. Which means the user is assigned the role of Owner, Contributor, or Reader to the subscription or resource group that a resource belongs to. In addition to these roles, there are two roles specific to Defender for Cloud: +Defender for Cloud uses [Azure role-based access control (Azure Role-based access control)](../role-based-access-control/role-assignments-portal.md), which provides [built-in roles](../role-based-access-control/built-in-roles.md) that can be assigned to users, groups, and services in Azure. When a user opens Defender for Cloud, they only see information related to resources they have access to. Which means the user is assigned the role of Owner, Contributor, or Reader to the subscription or resource group that a resource belongs to. In addition to these roles, there are two roles specific to Defender for Cloud: - **Security reader**: a user that belongs to this role is able to view only Defender for Cloud configurations, which include recommendations, alerts, policy, and health, but it won't be able to make changes. 
- **Security admin**: same as security reader but it can also update the security policy, dismiss recommendations and alerts. -The personas explained in the previous diagram need these Azure RBAC roles: +The personas explained in the previous diagram need these Azure Role-based access control roles: **Jeff (Workload Owner)** The personas explained in the previous diagram need these Azure RBAC roles: - Subscription Owner/Contributor required to dismiss alerts. -- Access to the workspace may be required+- Access to the workspace may be required. Some other important information to consider: Some other important information to consider: - Only subscription and resource group Owners and Contributors can apply security recommendations for a resource. -When planning access control using Azure RBAC for Defender for Cloud, make sure you understand who in your organization needs access to Defender for Cloud the tasks they'll perform. Then you can configure Azure RBAC properly. +When planning access control using Azure Role-based access control for Defender for Cloud, make sure you understand who in your organization needs access to Defender for Cloud and the tasks they'll perform. Then you can configure Azure Role-based access control properly. > [!NOTE] > We recommend that you assign the least permissive role needed for users to complete their tasks. For example, users who only need to view information about the security state of resources but not take action, such as applying recommendations or editing policies, should be assigned the Reader role. Defender for Cloud uses the Log Analytics agent and the Azure Monitor Agent to c ### Agent -When automatic provisioning is enabled in the security policy, the [data collection agent](monitoring-components.md) is installed on all supported Azure VMs and any new supported VMs that are created. If the VM or computer already has the Log Analytics agent installed, Defender for Cloud uses the current installed agent. The agent's process is designed to be non-invasive and have minimal impact on VM performance. +When automatic provisioning is enabled in the security policy, the [data collection agent](monitoring-components.md) is installed on all supported Azure VMs and any new supported VMs that are created. If the VM or computer already has the Log Analytics agent installed, Defender for Cloud uses the current installed agent. The agent's process is designed to be non-invasive and have minimal effect on VM performance. If at some point you want to disable Data Collection, you can turn it off in the security policy. However, because the Log Analytics agent may be used by other Azure management and monitoring services, the agent won't be uninstalled automatically when you turn off data collection in Defender for Cloud. You can manually uninstall the agent if needed. |
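The Defender for Cloud row above describes which Azure RBAC roles each persona needs. As a minimal, hedged sketch only (not taken from the linked article), the Azure CLI commands below show one way to assign the built-in **Security Reader** and **Security Admin** roles at subscription scope; the user names and subscription ID are placeholders.

```azurecli
# Hypothetical example: grant view-only Defender for Cloud access at subscription scope.
# Replace the placeholder user and subscription ID with real values.
az role assignment create \
  --assignee "judy@contoso.com" \
  --role "Security Reader" \
  --scope "/subscriptions/00000000-0000-0000-0000-000000000000"

# For a user who must also edit security policy and dismiss alerts, assign the broader Security Admin role.
az role assignment create \
  --assignee "david@contoso.com" \
  --role "Security Admin" \
  --scope "/subscriptions/00000000-0000-0000-0000-000000000000"
```

In line with the least-permissive-role note in the row above, prefer Security Reader (or the plain Reader role) wherever view-only access is enough.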
defender-for-iot | Configure Sensor Settings Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/configure-sensor-settings-portal.md | + + Title: Configure OT sensor settings from the Azure portal - Microsoft Defender for IoT +description: Learn how to configure settings for OT network sensors from Microsoft Defender for IoT on the Azure portal. Last updated : 12/27/2022++++# Configure OT sensor settings from the Azure portal (Public preview) ++After [onboarding](onboard-sensors.md) a new OT network sensor to Microsoft Defender for IoT, you may want to define several settings directly on the OT sensor console, such as [adding local users](manage-users-sensor.md) or [connecting to an on-premises management console](how-to-manage-individual-sensors.md#connect-a-sensor-to-the-management-console). ++Selected OT sensor settings, listed below, are also available directly from the Azure portal, and can be applied in bulk across multiple cloud-connected OT sensors at a time, or across all OT sensors in a specific site or zone. This article describes how to view and configure OT network sensor settings from the Azure portal. ++> [!NOTE] +> The **Sensor settings** page in Defender for IoT is in PREVIEW. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include other legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability. +> ++## Prerequisites ++To define OT sensor settings, make sure that you have the following: ++- **An Azure subscription onboarded to Defender for IoT**. If you need to, [sign up for a free account](https://azure.microsoft.com/free/) and then use the [Quickstart: Get started with Defender for IoT](getting-started.md) to onboard. ++- **Permissions**: ++ - To view settings that others have defined, sign in with a [Security Reader](../../role-based-access-control/built-in-roles.md#security-reader), [Security admin](/azure/role-based-access-control/built-in-roles#security-admin), [Contributor](/azure/role-based-access-control/built-in-roles#contributor), or [Owner](/azure/role-based-access-control/built-in-roles#owner) role for the subscription. ++ - To define or update settings, sign in with [Security admin](/azure/role-based-access-control/built-in-roles#security-admin), [Contributor](/azure/role-based-access-control/built-in-roles#contributor), or [Owner](/azure/role-based-access-control/built-in-roles#owner) role. ++ For more information, see [Azure user roles and permissions for Defender for IoT](roles-azure.md). ++- **One or more cloud-connected OT network sensors**. For more information, see [Onboard OT sensors to Defender for IoT](onboard-sensors.md). ++## Define a new sensor setting ++Define a new setting whenever you want to define a specific configuration for one or more OT network sensors. For example, you might want to define bandwidth caps for all OT sensors in a specific site or zone, or for a single OT sensor at a specific location in your network. ++**To define a new setting**: ++1. In Defender for IoT on the Azure portal, select **Sites and sensors** > **Sensor settings (Preview)**. ++1. On the **Sensor settings (Preview)** page, select **+ Add**, and then use the wizard to define the following values for your setting. Select **Next** when you're done with each tab in the wizard to move to the next step.
++ |Tab name |Description | + ||| + |**Basics** | Select the subscription where you want to apply your setting, and your [setting type](#sensor-setting-reference). <br><br>Enter a meaningful name and an optional description for your setting. | + |**Setting** | Define the values for your selected setting type.<br>For details about the options available for each setting type, find your selected setting type in the [Sensor setting reference](#sensor-setting-reference) below. | + |**Apply** | Use the **Select sites**, **Select zones**, and **Select sensors** dropdown menus to define where you want to apply your setting. <br><br>**Important**: Selecting a site or zone applies the setting to all connected OT sensors, including any OT sensors added to the site or zone later on. <br>If you select to apply your settings to an entire site, you don't also need to select its zones or sensors. | + |**Review and create** | Check the selections you've made for your setting. <br><br>If your new setting replaces an existing setting, a :::image type="icon" source="media/how-to-manage-individual-sensors/warning-icon.png" border="false"::: warning is shown to indicate the existing setting.<br><br>When you're satisfied with the setting's configuration, select **Create**. | ++Your new setting is now listed on the **Sensor settings (Preview)** page under its setting type, and on the sensor details page for any related OT sensor. Sensor settings are shown as read-only on the sensor details page. For example: +++> [!TIP] +> You may want to configure exceptions to your settings for a specific OT sensor or zone. In such cases, create an extra setting for the exception. +> +> Settings override each other in a hierarchical manner, so that if your setting is applied to a specific OT sensor, it overrides any related settings that are applied to the entire zone or site. To create an exception for an entire zone, add a setting for that zone to override any related settings applied to the entire site. +> ++## View and edit current OT sensor settings ++**To view the current settings already defined for your subscription**: ++1. In Defender for IoT on the Azure portal, select **Sites and sensors** > **Sensor settings (Preview)**. ++ The **Sensor settings (Preview)** page shows any settings already defined for your subscriptions, listed by setting type. Expand or collapse each type to view detailed configurations. For example: ++ :::image type="content" source="media/configure-sensor-settings-portal/view-settings.png" alt-text="Screenshot of OT sensor settings on the Azure portal."::: ++1. Select a specific setting to view its exact configuration and the site, zones, or individual sensors where the setting is applied. ++1. To edit the setting's configuration, select **Edit** and then use the same wizard you used to create the setting to make the updates you need. When you're done, select **Apply** to save your changes. ++### Delete an existing OT sensor setting ++To delete an OT sensor setting altogether: ++1. On the **Sensor settings (Preview)** page, locate the setting you want to delete. +1. Select the **...** options menu at the top-right corner of the setting's card and then select **Delete**. ++For example: +++## Edit settings for disconnected OT sensors ++This procedure describes how to edit OT sensor settings if your OT sensor is currently disconnected from Azure, such as during an ongoing security incident.
++By default, if you've configured any settings from the Azure portal, all settings that are configurable from both the Azure portal and the OT sensor are set to read-only on the OT sensor itself. For example, if you've configured a VLAN from the Azure portal, then bandwidth cap, subnet, and VLAN settings are *all* set to read-only, and blocked from modifications on the OT sensor. ++If you're in a situation where the OT sensor is disconnected from Azure, and you need to modify one of these settings, you'll first need to gain write access to those settings. ++**To gain write access to blocked OT sensor settings**: ++1. On the Azure portal, in the **Sensor settings (Preview)** page, locate the setting you want to edit and open it for editing. For more information, see [View and edit current OT sensor settings](#view-and-edit-current-ot-sensor-settings) above. ++ Edit the scope of the setting so that it no longer includes the OT sensor, and any changes you make while the OT sensor is disconnected aren't overwritten when you connect it back to Azure. ++ > [!IMPORTANT] + > Settings defined on the Azure portal always override settings defined on the OT sensor. ++1. Sign into the affected OT sensor console, and select **Settings > Advanced configurations** > **Azure Remote Config**. ++1. In the code box, modify the `block_local_config` value from `1` to `0`, and select **Close**. For example: ++ :::image type="content" source="media/how-to-manage-individual-sensors/remote-config-sensor.png" alt-text="Screenshot of the Azure Remote Config option." lightbox="media/how-to-manage-individual-sensors/remote-config-sensor.png"::: ++Continue by updating the relevant setting directly on the OT network sensor. For more information, see [Manage individual sensors](how-to-manage-individual-sensors.md). ++## Sensor setting reference ++Use the following sections to learn more about the individual OT sensor settings available from the Azure portal: ++### Bandwidth cap ++For a bandwidth cap, define the maximum bandwidth you want the sensor to use for outgoing communication from the sensor to the cloud, either in Kbps or Mbps. ++**Default**: 1500 Kbps ++**Minimum required for a stable connection to Azure**: 350 Kbps. At this minimum setting, connections to the sensor console may be slower than usual. ++### Subnet ++To define your sensor's subnets, do any of the following: ++- Select **Import subnets** to import a comma-separated list of subnet IP addresses and masks. Select **Export subnets** to export a list of currently configured data, or **Clear all** to start from scratch. ++- Enter values in the **IP Address**, **Mask**, and **Name** fields to add subnet details manually. Select **Add subnet** to add additional subnets as needed. ++### VLAN naming ++To define a VLAN for your OT sensor, enter the VLAN ID and a meaningful name. ++Select **Add VLAN** to add more VLANs as needed. ++## Next steps ++> [!div class="nextstepaction"] +> [Manage sensors from the Azure portal](how-to-manage-sensors-on-the-cloud.md) ++> [!div class="nextstepaction"] +> [Manage OT sensors from the sensor console](how-to-manage-individual-sensors.md) |
defender-for-iot | How To Investigate All Enterprise Sensor Detections In A Device Inventory | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-investigate-all-enterprise-sensor-detections-in-a-device-inventory.md | Use the **Device inventory** page from an on-premises management console to mana For more information, see [What is a Defender for IoT committed device?](architecture.md#what-is-a-defender-for-iot-committed-device) - > [!TIP] > Alternately, view your device inventory from [the Azure portal](how-to-manage-device-inventory-for-organizations.md), or from an [OT sensor console](how-to-investigate-sensor-detections-in-a-device-inventory.md). > ## View the device inventory -To view detected devices in the **Device Inventory** page in an on-premises management console, sign-in to your on-premises management console, and then select **Device Inventory**. +To view detected devices in the **Device Inventory** page in an on-premises management console, sign-in to your on-premises management console, and then select **Device Inventory**. For example: The following table describes the device properties shown in the **Device invent | **Last Activity** | The last activity that the device performed. | | **Discovered** | When this device was first seen in the network. | | **PLC mode (preview)** | The PLC operating mode includes the Key state (physical) and run state (logical). Possible **Key** states include Run, Program, Remote, Stop, Invalid, and Programming Disabled. The possible **Run** states are Run, Program, Stop, Paused, Exception, Halted, Trapped, Idle, and Offline. If both states are the same, only one state is presented. |+ ## Next steps For more information, see: - [Control what traffic is monitored](how-to-control-what-traffic-is-monitored.md) - [Detect Windows workstations and servers with a local script](detect-windows-endpoints-script.md)+- [Device data retention periods](references-data-retention.md#device-data-retention-periods). |
defender-for-iot | How To Investigate Sensor Detections In A Device Inventory | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-investigate-sensor-detections-in-a-device-inventory.md | You may need to merge duplicate devices if the sensor has discovered separate ne Examples of this scenario might include a PLC with four network cards, a laptop with both WiFi and a physical network card, or a single workstation with multiple network cards. > [!NOTE]+> > - You can only merge authorized devices. > - Device merges are irreversible. If you merge devices incorrectly, you'll have to delete the merged device and wait for the sensor to rediscover both devices. > - Alternately, merge devices from the [Device map](how-to-work-with-the-sensor-device-map.md) page. When merging, you instruct the sensor to combine the device properties of two devices into one. When you do this, the Device Properties window and sensor reports will be updated with the new device property details. -For example, if you merge two devices, each with an IP address, both IP addresses will appear as separate interfaces in the Device Properties window. +For example, if you merge two devices, each with an IP address, both IP addresses will appear as separate interfaces in the Device Properties window. **To merge devices from the device inventory:** For example, if you merge two devices, each with an IP address, both IP addresse ## View inactive devices -You may want to view devices in your network that have been inactive and delete them. +You may want to view devices in your network that have been inactive and delete them. For example, devices may become inactive because of misconfigured SPAN ports, changes in network coverage, or by unplugging them from the network For more information, see: - [Control what traffic is monitored](how-to-control-what-traffic-is-monitored.md) - [Detect Windows workstations and servers with a local script](detect-windows-endpoints-script.md)+- [Device data retention periods](references-data-retention.md#device-data-retention-periods) |
defender-for-iot | How To Manage Cloud Alerts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-manage-cloud-alerts.md | Microsoft Defender for IoT alerts enhance your network security and operations w - [Integrate with Microsoft Sentinel](iot-solution.md) to view Defender for IoT alerts in Microsoft Sentinel and manage them together with security incidents. -- If you have an [Enterprise IoT plan](eiot-defender-for-endpoint.md) with Microsoft Defender for Endpoint, alerts for Enterprise IoT devices detected by Microsoft Defender for Endpoint are available in Defender for Endpoint only. +- If you have an [Enterprise IoT plan](eiot-defender-for-endpoint.md) with Microsoft Defender for Endpoint, alerts for Enterprise IoT devices detected by Microsoft Defender for Endpoint are available in Defender for Endpoint only. For more information, see [Securing IoT devices in the enterprise](concept-enterprise.md) and the [Alerts queue in Microsoft 365 Defender](/microsoft-365/security/defender-endpoint/alerts-queue-endpoint-detection-response). For more information, see [Alert statuses and triaging options](alerts.md#alert- In Defender for IoT in the Azure portal, select the **Alerts** page on the left, and then do one of the following: - - Select one or more learnable alerts in the grid and then select :::image type="icon" source="media/how-to-manage-sensors-on-the-cloud/learn-icon.png" border="false"::: **Learn** in the toolbar. - - On an alert details page for a learnable alert, in the **Take Action** tab, select **Learn**. + - Select one or more learnable alerts in the grid and then select :::image type="icon" source="media/how-to-manage-sensors-on-the-cloud/learn-icon.png" border="false"::: **Learn** in the toolbar. + - On an alert details page for a learnable alert, in the **Take Action** tab, select **Learn**. ## Access alert PCAP data You may want to export a selection of alerts to a CSV file for offline sharing a The file is generated, and you're prompted to save it locally. - ## Next steps > [!div class="nextstepaction"] The file is generated, and you're prompted to save it locally. > [OT monitoring alert types and descriptions](alert-engine-messages.md) > [!div class="nextstepaction"]-> [Microsoft Defender for IoT alerts](alerts.md) +> [Microsoft Defender for IoT alerts](alerts.md) ++> [!div class="nextstepaction"] +> [Data retention across Microsoft Defender for IoT](references-data-retention.md) |
defender-for-iot | How To Manage Device Inventory For Organizations | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-manage-device-inventory-for-organizations.md | On the **Device inventory page**, select **Export** :::image type="icon" source= The device inventory is exported with any filters currently applied, and you can save the file locally. - ## Delete a device If you have devices no longer in use, delete them from the device inventory so that they're no longer connected to Defender for IoT. For more information, see: - [Control what traffic is monitored](how-to-control-what-traffic-is-monitored.md) - [Detect Windows workstations and servers with a local script](detect-windows-endpoints-script.md)+- [Device data retention periods](references-data-retention.md#device-data-retention-periods). |
defender-for-iot | How To Manage Individual Sensors | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-manage-individual-sensors.md | You can configure the sensor's time and region so that all the users see the sam ## Set up backup and restore files -System backup is performed automatically at 3:00 AM daily. The data is saved on a different disk in the sensor. The default location is `/var/cyberx/backups`. +System backup is performed automatically at 3:00 AM daily. The data is saved on a different disk in the sensor. The default location is `/var/cyberx/backups`. You can automatically transfer this file to the internal network. -You can automatically transfer this file to the internal network. +For more information, see [On-premises backup file capacity](references-data-retention.md#on-premises-backup-file-capacity). > [!NOTE] > Use Defender for IoT data mining reports on an OT network sensor to retrieve for - Event timeline data - Log files -Each type of data has a different retention period and maximum capacity. For more information see [Create data mining queries](how-to-create-data-mining-queries.md). +Each type of data has a different retention period and maximum capacity. For more information see [Create data mining queries](how-to-create-data-mining-queries.md) and [Data retention across Microsoft Defender for IoT](references-data-retention.md). ## Clearing sensor data |
defender-for-iot | How To Manage Sensors From The On Premises Management Console | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-manage-sensors-from-the-on-premises-management-console.md | Use Defender for IoT data mining reports on an OT network sensor to retrieve for - Event timeline data - Log files -Each type of data has a different retention period and maximum capacity. For more information see [Create data mining queries](how-to-create-data-mining-queries.md). +Each type of data has a different retention period and maximum capacity. For more information see [Create data mining queries](how-to-create-data-mining-queries.md) and [Data retention across Microsoft Defender for IoT](references-data-retention.md). ## Define sensor backup schedules |
defender-for-iot | How To Manage Sensors On The Cloud | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-manage-sensors-on-the-cloud.md | Use the options on the **Sites and sensor** page and a sensor details page to do | :::image type="icon" source="media/how-to-manage-sensors-on-the-cloud/icon-diagnostics.png" border="false"::: **Send diagnostic files to support** | Individual, locally managed OT sensors only. <br><br>Available from the **...** options menu. <br><br>For more information, see [Upload a diagnostics log for support](#upload-a-diagnostics-log-for-support).| | **Download SNMP MIB file** | Available from the **Sites and sensors** toolbar **More actions** menu. <br><br>For more information, see [Set up SNMP MIB monitoring](how-to-set-up-snmp-mib-monitoring.md).| | **Recover an on-premises management console password** | Available from the **Sites and sensors** toolbar **More actions** menu. <br><br>For more information, see [Manage the on-premises management console](how-to-manage-the-on-premises-management-console.md). |+| **Define OT network sensor settings** (Preview) | Define selected sensor settings for one or more cloud-connected OT network sensors. For more information, see [Define and view OT sensor settings from the Azure portal (Public preview)](configure-sensor-settings-portal.md). <br><br>Other settings are also available directly from the [OT sensor console](how-to-manage-individual-sensors.md), or the [on-premises management console](how-to-manage-sensors-from-the-on-premises-management-console.md).| |<a name="endpoint"></a> **Download endpoint details** (Public preview) | Available from the **Sites and sensors** toolbar **More actions** menu, for OT sensor versions 22.x only. <br><br>Download the list of endpoints that must be enabled as secure endpoints from OT network sensors. Make sure that HTTPS traffic is enabled over port 443 to the listed endpoints for your sensor to connect to Azure. Outbound allow rules are defined once for all OT sensors onboarded to the same subscription.<br><br>To enable this option, select a sensor with a supported software version, or a site with one or more sensors with supported versions. | + ## Retrieve forensics data stored on the sensor Use Azure Monitor workbooks on an OT network sensor to retrieve forensic data from that sensorΓÇÖs storage. The following types of forensic data is stored locally on OT sensors, for devices detected by that sensor: Use Azure Monitor workbooks on an OT network sensor to retrieve forensic data fr - Event timeline data - Log files -Each type of data has a different retention period and maximum capacity. For more information see [Visualize Microsoft Defender for IoT data with Azure Monitor workbooks](workbooks.md). +Each type of data has a different retention period and maximum capacity. For more information see [Visualize Microsoft Defender for IoT data with Azure Monitor workbooks](workbooks.md) and [Data retention across Microsoft Defender for IoT](references-data-retention.md). 
## Reactivate an OT sensor If you need to open a support ticket for a locally managed sensor, upload a diag ## Next steps -[View and manage alerts on the Defender for IoT portal (Preview)](how-to-manage-cloud-alerts.md) +> [!div class="nextstepaction"] +> [Manage OT sensors from the sensor console](how-to-manage-individual-sensors.md) ++> [!div class="nextstepaction"] +> [Define and view OT sensor settings from the Azure portal (Public preview)](configure-sensor-settings-portal.md) ++> [!div class="nextstepaction"] +> [View and manage alerts on the Defender for IoT portal (Preview)](how-to-manage-cloud-alerts.md) |
defender-for-iot | How To Track Sensor Activity | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-track-sensor-activity.md | -The event timeline provides a chronological view of events. Use the timeline during investigations, to understand and analyze the chain of events that preceded and followed an attack or incident. +The event timeline provides a chronological view of events. Use the timeline during investigations, to understand and analyze the chain of events that preceded and followed an attack or incident. ## Before you start You need to have Administrator or Security Analyst permissions to perform the pr - **Date**: Search for events in a specific date range. 1. Select **Apply** to set the filter. 1. Select **Export** to export the event timeline to a CSV file.- + ## Add an event In addition to viewing the events that the sensor has detected, you can manually add events to the timeline. This process is useful if an external system event impacts your network, and you want to record it on the timeline. 1. Select **Create Event**.-1. In the **Create Event** dialog, specify the event type (Info, Notice, or Alert) +1. In the **Create Event** dialog, specify the event type (Info, Notice, or Alert) 1. Set a timestamp for the event, the device it should be connected with, and provide a description. 1. Select **Save** to add the event to the timeline. -- ## Next steps -For more information, see [View alerts](how-to-view-alerts.md). +For more information, see: ++- [View alerts](how-to-view-alerts.md). +- [OT event timeline retention](references-data-retention.md#ot-event-timeline-retention). |
defender-for-iot | How To View Alerts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-view-alerts.md | For more information, see [Alert statuses and triaging options](alerts.md#alert- Sign into your OT sensor console and select the **Alerts** page on the left, and then do one of the following: - - Select one or more learnable alerts in the grid and then select :::image type="icon" source="media/how-to-manage-sensors-on-the-cloud/learn-icon.png" border="false"::: **Learn** in the toolbar. - - On an alert details page, in the **Take Action** tab, select **Learn**. + - Select one or more learnable alerts in the grid and then select :::image type="icon" source="media/how-to-manage-sensors-on-the-cloud/learn-icon.png" border="false"::: **Learn** in the toolbar. + - On an alert details page, in the **Take Action** tab, select **Learn**. - **To mute an alert**: For more information, see [Alert statuses and triaging options](alerts.md#alert- After you unlearn or unmute an alert, alerts are re-triggered whenever the sensor senses the selected traffic combination. - ## Access alert PCAP data You might want to access raw traffic files, also known as *packet capture files* or *PCAP* files as part of your investigation. If your admin has [created custom comments](how-to-accelerate-alert-incident-res For more information, see [Accelerating OT alert workflows](alerts.md#accelerating-ot-alert-workflows). - ## Next steps > [!div class="nextstepaction"] For more information, see [Accelerating OT alert workflows](alerts.md#accelerati > [OT monitoring alert types and descriptions](alert-engine-messages.md) > [!div class="nextstepaction"]-> [Microsoft Defender for IoT alerts](alerts.md) +> [Microsoft Defender for IoT alerts](alerts.md) ++> [!div class="nextstepaction"] +> [Data retention across Microsoft Defender for IoT](references-data-retention.md) |
defender-for-iot | How To Work With Alerts On Premises Management Console | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-work-with-alerts-on-premises-management-console.md | You may want to export a selection of alerts to a CSV file for offline sharing a The CSV file is generated, and you're prompted to save it locally. -- ## Next steps > [!div class="nextstepaction"] The CSV file is generated, and you're prompted to save it locally. > [Forward alert information](how-to-forward-alert-information-to-partners.md) > [!div class="nextstepaction"]-> [Microsoft Defender for IoT alerts](alerts.md) +> [Microsoft Defender for IoT alerts](alerts.md) ++> [!div class="nextstepaction"] +> [Data retention across Microsoft Defender for IoT](references-data-retention.md) |
defender-for-iot | References Data Retention | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/references-data-retention.md | + + Title: Data retention across Microsoft Defender for IoT +description: Learn about the data retention periods and capacities for Microsoft Defender for IoT data stored in Azure, the OT sensor, and on-premises management console. + Last updated : 01/22/2023+++# Data retention across Microsoft Defender for IoT ++Microsoft Defender for IoT stores data in the Azure portal, on OT network sensors, and on-premises management consoles. ++Each storage location affords a certain storage capacity and retention times. This article describes how much and how long each type of data is stored in each location before it's either deleted or overridden. ++## Device data retention periods ++The following table lists how long device data is stored in each Defender for IoT location. ++| Storage type | Details | +||| +| **Azure portal** | 90 days from the date of the **Last activity** value. <br><br> For more information, see [Manage your device inventory from the Azure portal](how-to-manage-device-inventory-for-organizations.md). | +| **OT network sensor** | The retention of device inventory data isn't limited by time. <br><br> For more information, see [Manage your OT device inventory from a sensor console](how-to-investigate-sensor-detections-in-a-device-inventory.md). | +| **On-premises management console** | The retention of device inventory data isn't limited by time. <br><br> For more information, see [Manage your OT device inventory from an on-premises management console](how-to-investigate-all-enterprise-sensor-detections-in-a-device-inventory.md). | ++## Alert data retention ++The following table lists how long alert data is stored in each Defender for IoT location. Alert data is stored as listed, regardless of the alert's status, or whether it's been learned or muted. ++| Storage type | Details | +||| +| **Azure portal** | 90 days from the date in the **First detection** value. <br><br> For more information, see [View and manage alerts from the Azure portal](how-to-manage-cloud-alerts.md). | +| **OT network sensor** | 90 days from the date in the **First detection** value.<br><br> For more information, see [View alerts on your sensor](how-to-view-alerts.md). | +| **On-premises management console** | 90 days from the date in the **First detection** value.<br><br> For more information, see [Work with alerts on the on-premises management console](how-to-work-with-alerts-on-premises-management-console.md). | ++### OT alert PCAP data retention ++The following table lists how long PCAP data is stored in each Defender for IoT location. ++| Storage type | Details | +||| +| **Azure portal** | PCAP files are available for download from the Azure portal for as long as the OT network sensor stores them. <br><br> Once downloaded, the files are cached on the Azure portal for 48 hours. <br><br> For more information, see [Access alert PCAP data](how-to-manage-cloud-alerts.md#access-alert-pcap-data). | +| **OT network sensor** | Dependent on the sensor's storage capacity allocated for PCAP files, which is determined by its [hardware profile](ot-appliance-sizing.md): <br><br>- **C5600**: 130 GB <br>- **E1800**: 130 GB <br>- **E1000** : 78 GB<br>- **E500**: 78 GB <br>- **L500**: 7 GB <br>- **L100**: 2.5 GB<br>- **L60**: 2.5 GB <br><br> If a sensor exceeds its maximum storage capacity, the oldest PCAP file is deleted to accommodate the new one. 
<br><br> For more information, see [Access alert PCAP data](how-to-view-alerts.md#access-alert-pcap-data) and [Pre-configured physical appliances for OT monitoring](ot-pre-configured-appliances.md). | +| **On-premises management console** | PCAP files aren't stored on the on-premises management console and are only accessed from the on-premises management console via a direct link to the OT sensor. | ++The usage of available PCAP storage space depends on factors such as the number of alerts, the type of the alert, and the network bandwidth, all of which affect the size of the PCAP file. ++> [!TIP] +> To avoid being dependent on the sensor's storage capacity, use external storage to back up your PCAP data. ++## Security recommendation retention ++Defender for IoT security recommendations are stored only on the Azure portal, for 90 days from when the recommendation is first detected. ++For more information, see [Enhance security posture with security recommendations](recommendations.md). ++## OT event timeline retention ++OT event timeline data is stored on OT network sensors only, and the storage capacity differs depending on the sensor's [hardware profile](ot-appliance-sizing.md). ++The retention of event timeline data isn't limited by time. However, assuming a frequency of 500 events per day, all hardware profiles will be able to retain the events for at least **90 days**. ++If a sensor exceeds its maximum storage size, the oldest event timeline data file is deleted to accommodate the new one. ++The following table lists the maximum number of events that can be stored for each hardware profile: ++| Hardware profile | Number of events | +||| +| **C5600** | 10M events | +| **E1800** | 10M events | +| **E1000** | 6M events | +| **E500** | 6M events | +| **L500** | 3M events | +| **L100** | 500K events | +| **L60** | 500K events | ++For more information, see [Track sensor activity](how-to-track-sensor-activity.md) and [Pre-configured physical appliances for OT monitoring](ot-pre-configured-appliances.md). ++## OT log file retention ++Service and processing log files are stored on the Azure portal for 30 days from their creation date. ++Other OT monitoring log files are stored only on the OT network sensor and the on-premises management console. ++For more information, see: ++- [Troubleshoot the sensor and on-premises management console](how-to-troubleshoot-the-sensor-and-on-premises-management-console.md) +- [Download a diagnostics log for support](how-to-manage-individual-sensors.md#download-a-diagnostics-log-for-support) ++## On-premises backup file capacity ++Both the OT network sensor and the on-premises management console have automated backups running daily. ++On both the OT sensor and the on-premises management console, older backup files are overwritten when the configured storage capacity has reached its maximum.
++For more information, see: ++- [Set up backup and restore files](how-to-manage-individual-sensors.md#set-up-backup-and-restore-files) +- [Configure backup settings for an OT network sensor](how-to-manage-individual-sensors.md#set-up-backup-and-restore-files) +- [Configure OT sensor backup settings from an on-premises management console](how-to-manage-sensors-from-the-on-premises-management-console.md#backup-storage-for-sensors) +- [Configure backup settings for an on-premises management console](how-to-manage-the-on-premises-management-console.md#define-backup-and-restore-settings) ++### Backups on the OT network sensor ++The retention of backup files depends on the sensor's architecture, as each hardware profile has a set amount of hard disk space allocated for backup history: ++| Hardware profile | Allocated hard disk space | +||| +| **L60** | Backups are not supported | +| **L100** | Backups are not supported | +| **L500** | 20 GB | +| **E1000** | 60 GB | +| **E1800** | 100 GB | +| **C5600** | 100 GB | ++If the device doesn't have allocated hard disk space, then only the last backup will be saved on the on-premises management console. ++### Backups on the on-premises management console ++Allocated hard disk space for on-premises management console backup files is limited to 10 GB and to only 20 backups. ++If you're using an on-premises management console, each connected OT sensor also has its own, extra backup directory on the on-premises management console: ++- A single sensor backup file is limited to a maximum of 40 GB. A file exceeding that size won't be sent to the on-premises management console. +- Total hard disk space allocated to sensor backup from all sensors on the on-premises management console is 100 GB. ++## Next steps ++For more information, see: ++- [Manage individual OT network sensors](how-to-manage-individual-sensors.md) +- [Manage OT network sensors from an on-premises management console](how-to-manage-sensors-from-the-on-premises-management-console.md) +- [Manage an on-premises management console](how-to-manage-the-on-premises-management-console.md) |
defender-for-iot | Release Notes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/release-notes.md | This version includes the following new updates and fixes: This version includes the following new updates and fixes: +- [Define and view OT sensor settings from the Azure portal](configure-sensor-settings-portal.md) - [Update your sensors from the Azure portal](update-ot-software.md#update-your-sensors) - [New naming convention for hardware profiles](ot-appliance-sizing.md) - [PCAP access from the Azure portal](how-to-manage-cloud-alerts.md) |
defender-for-iot | Roles Azure | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/roles-azure.md | Roles for management actions are applied to user roles across an entire Azure su | **Manage Azure device inventory (write access)** <br>Apply per subscription or site | - | ✔ |✔ | ✔ | | **View Azure workbooks**<br>Apply per subscription or site | ✔ | ✔ |✔ | ✔ | | **Manage Azure workbooks (write access)** <br>Apply per subscription or site | - | ✔ |✔ | ✔ |+| **[View Defender for IoT settings](configure-sensor-settings-portal.md)** <br>Apply per subscription | ✔ | ✔ |✔ | ✔ | +| **[Configure Defender for IoT settings](configure-sensor-settings-portal.md)** <br>Apply per subscription | - | ✔ |✔ | ✔ | + ## Enterprise IoT security |
defender-for-iot | Whats New | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/whats-new.md | Features released earlier than nine months ago are described in the [What's new |Service area |Updates | |||-|**Cloud features** | [Alerts GA in the Azure portal](#alerts-ga-in-the-azure-portal) | +| **OT networks** | **Cloud features**: [Alerts page GA in the Azure portal](#alerts-ga-in-the-azure-portal) <br><br> **Sensor version 22.2.3**: [Configure OT sensor settings from the Azure portal (Public preview)](#configure-ot-sensor-settings-from-the-azure-portal-public-preview) | +| **Enterprise IoT networks** | **Cloud features**: [Alerts page GA in the Azure portal](#alerts-ga-in-the-azure-portal) | ++### Configure OT sensor settings from the Azure portal (Public preview) ++For sensor versions 22.2.3 and higher, you can now configure selected settings for cloud-connected sensors using the new **Sensor settings (Preview)** page, accessed via the Azure portal's **Sites and sensors** page. For example: +++For more information, see [Define and view OT sensor settings from the Azure portal (Public preview)](configure-sensor-settings-portal.md). ### Alerts GA in the Azure portal For more information, see: |Service area |Updates | |||-|**OT networks** | - **Sensor version 22.3.4**: [Azure connectivity status shown on OT sensors](#azure-connectivity-status-shown-on-ot-sensors)<br>- **Sensor version 22.2.3**: [Update sensor software from the Azure portal](#update-sensor-software-from-the-azure-portal-public-preview) | +|**OT networks** |**Sensor version 22.3.4**: [Azure connectivity status shown on OT sensors](#azure-connectivity-status-shown-on-ot-sensors)<br><br>**Sensor version 22.2.3**: [Update sensor software from the Azure portal](#update-sensor-software-from-the-azure-portal-public-preview) | ++ ### Update sensor software from the Azure portal (Public preview) |
digital-twins | Concepts Models | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/concepts-models.md | A model is similar to a *class* in an object-oriented programming language, defi Models for Azure Digital Twins are defined using the Digital Twins Definition Language (DTDL). -You can view the full language specs for DTDL in GitHub: [Digital Twins Definition Language (DTDL) - Version 2 Reference](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/dtdlv2.md). This page includes detailed DTDL reference and examples to help you get started writing your own DTDL models. +You can view the full language specs for DTDL in GitHub: [Digital Twins Definition Language (DTDL) - Version 2 Reference](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/DTDL.v2.md). This page includes detailed DTDL reference and examples to help you get started writing your own DTDL models. DTDL is based on JSON-LD and is programming-language independent. DTDL isn't exclusive to Azure Digital Twins, but is also used to represent device data in other IoT services such as [IoT Plug and Play](../iot-develop/overview-iot-plug-and-play.md). Azure Digital Twins uses DTDL version 2 (use of DTDL version 1 with Azure Digital Twins has now been deprecated). Here are the fields within a model interface: | Field | Description | | | |-| `@id` | A [Digital Twin Model Identifier (DTMI)](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/dtdlv2.md#digital-twin-model-identifier) for the model. Must be in the format `dtmi:<domain>:<unique-model-identifier>;<model-version-number>`. | +| `@id` | A [Digital Twin Model Identifier (DTMI)](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/DTDL.v2.md#digital-twin-model-identifier) for the model. Must be in the format `dtmi:<domain>:<unique-model-identifier>;<model-version-number>`. | | `@type` | Identifies the kind of information being described. For an interface, the type is `Interface`. | | `@context` | Sets the [context](https://niem.github.io/json/reference/json-ld/context/) for the JSON document. Models should use `dtmi:dtdl:context;2`. | | `displayName` | [optional] Gives you the option to define a friendly name for the model. If you don't use this field, the model will use its full DTMI value.| The main information about a model is given by its attributes, which are defined For more information, see [Components](#components) below. > [!NOTE]-> The [spec for DTDL](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/dtdlv2.md) also defines *Commands*, which are methods that can be executed on a digital twin (like a reset command, or a command to switch a fan on or off). However, commands are not currently supported in Azure Digital Twins. +> The [spec for DTDL](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/DTDL.v2.md) also defines *Commands*, which are methods that can be executed on a digital twin (like a reset command, or a command to switch a fan on or off). However, commands are not currently supported in Azure Digital Twins. ## Properties and telemetry This section goes into more detail about *properties* and *telemetry* in DTDL models. -For comprehensive information about the fields that may appear as part of a property, see [Property in the DTDL V2 Reference](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/dtdlv2.md#property). 
For comprehensive information about the fields that may appear as part of telemetry, see [Telemetry in the DTDL V2 Reference](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/dtdlv2.md#telemetry). +For comprehensive information about the fields that may appear as part of a property, see [Property in the DTDL V2 Reference](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/DTDL.v2.md#property). For comprehensive information about the fields that may appear as part of telemetry, see [Telemetry in the DTDL V2 Reference](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/DTDL.v2.md#telemetry). > [!NOTE] > The `writable` DTDL attribute for properties is not currently supported in Azure Digital Twins. It can be added to the model, but Azure Digital Twins will not enforce it. For more information, see [Service-specific DTDL notes](#service-specific-dtdl-notes). The following example shows another version of the Home model, with a property f ### Semantic type example -Semantic types make it possible to express a value with a unit. Properties and telemetry can be represented with any of the semantic types that are supported by DTDL. For more information on semantic types in DTDL and what values are supported, see [Semantic types in the DTDL V2 Reference](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/dtdlv2.md#semantic-types). +Semantic types make it possible to express a value with a unit. Properties and telemetry can be represented with any of the semantic types that are supported by DTDL. For more information on semantic types in DTDL and what values are supported, see [Semantic types in the DTDL V2 Reference](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/DTDL.v2.md#semantic-type). The following example shows a Sensor model with a semantic-type telemetry for Temperature, and a semantic-type property for Humidity. The following example shows a Sensor model with a semantic-type telemetry for Te This section goes into more detail about *relationships* in DTDL models. -For a comprehensive list of the fields that may appear as part of a relationship, see [Relationship in the DTDL V2 Reference](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/dtdlv2.md#relationship). +For a comprehensive list of the fields that may appear as part of a relationship, see [Relationship in the DTDL V2 Reference](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/DTDL.v2.md#relationship). > [!NOTE] > The `writable`, `minMultiplicity`, and `maxMultiplicity` DTDL attributes for relationships are not currently supported in Azure Digital Twins. They can be added to the model, but Azure Digital Twins will not enforce them. For more information, see [Service-specific DTDL notes](#service-specific-dtdl-notes). The following example shows another version of the Home model, where the `rel_ha This section goes into more detail about *components* in DTDL models. -For a comprehensive list of the fields that may appear as part of a component, see [Component in the DTDL V2 Reference](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/dtdlv2.md#component). +For a comprehensive list of the fields that may appear as part of a component, see [Component in the DTDL V2 Reference](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/DTDL.v2.md#component). ### Basic component example |
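The semantic-type Sensor example that the digital-twins row above refers to isn't reproduced in the diff. As a rough, illustrative sketch only (the `dtmi:example:Sensor;1` identifier and the field names are assumptions for this digest, not taken from the article), a DTDL v2 interface with a semantic-type Temperature telemetry and a semantic-type Humidity property could look like this:

```json
{
  "@id": "dtmi:example:Sensor;1",
  "@type": "Interface",
  "@context": "dtmi:dtdl:context;2",
  "displayName": "Sensor",
  "description": "Illustrative sketch only; not from the linked article.",
  "contents": [
    {
      "@type": [ "Telemetry", "Temperature" ],
      "name": "temperature",
      "schema": "double",
      "unit": "degreeCelsius"
    },
    {
      "@type": [ "Property", "Humidity" ],
      "name": "humidity",
      "schema": "double",
      "unit": "gramPerCubicMetre"
    }
  ]
}
```

The `unit` field is what the semantic co-types (`Temperature`, `Humidity`) add on top of a plain telemetry or property definition, which is the point the row above makes about expressing a value with a unit.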
digital-twins | Concepts Ontologies Convert | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/concepts-ontologies-convert.md | The following C# code snippet shows how an RDF model file is loaded into a graph ### RDF converter application -There's a sample application available that converts an RDF-based model file to [DTDL (version 2)](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/dtdlv2.md) for use by the Azure Digital Twins service. It has been validated for the [Brick](https://brickschema.org/ontology/) schema, and can be extended for other schemas in the building industry (such as [Building Topology Ontology (BOT)](https://w3c-lbd-cg.github.io/bot/), [Semantic Sensor Network](https://www.w3.org/TR/vocab-ssn/), or [buildingSmart Industry Foundation Classes (IFC)](https://technical.buildingsmart.org/standards/ifc/ifc-schema-specifications/)). +There's a sample application available that converts an RDF-based model file to [DTDL (version 2)](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/DTDL.v2.md) for use by the Azure Digital Twins service. It has been validated for the [Brick](https://brickschema.org/ontology/) schema, and can be extended for other schemas in the building industry (such as [Building Topology Ontology (BOT)](https://w3c-lbd-cg.github.io/bot/), [Semantic Sensor Network](https://www.w3.org/TR/vocab-ssn/), or [buildingSmart Industry Foundation Classes (IFC)](https://technical.buildingsmart.org/standards/ifc/ifc-schema-specifications/)). The sample is a [.NET Core command-line application called RdfToDtdlConverter](/samples/azure-samples/rdftodtdlconverter/digital-twins-model-conversion-samples/). |
digital-twins | Concepts Ontologies | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/concepts-ontologies.md | Ontologies provide a great starting point for digital twin solutions. They encom Also, using these ontologies in your solutions can set them up for more seamless integration between different partners and vendors, because ontologies can provide a common vocabulary across solutions. -Because models in Azure Digital Twins are represented in [Digital Twins Definition Language (DTDL)](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/dtdlv2.md), ontologies for use with Azure Digital Twins are also written in DTDL. +Because models in Azure Digital Twins are represented in [Digital Twins Definition Language (DTDL)](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/DTDL.v2.md), ontologies for use with Azure Digital Twins are also written in DTDL. Here are some other benefits to using industry-standard DTDL ontologies as schemas for your twin graphs: * Harmonization of software components, documentation, query libraries, and more |
digital-twins | How To Use Azure Digital Twins Explorer | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/how-to-use-azure-digital-twins-explorer.md | To view the property values of a twin or a relationship, select the twin or rela The properties shown in the **Twin Properties** and **Relationship Properties** panels are each displayed with an icon, indicating the type of the field from the DTDL model. You can hover over an icon to display the associated type. -The table below shows the possible data types and their corresponding icons. The table also contains links from each data type to its schema description in the [DTDL spec](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/dtdlv2.md#schemas). +The table below shows the possible data types and their corresponding icons. The table also contains links from each data type to its schema description in the [DTDL spec](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/DTDL.v2.md#schema). | Icon | Data type | | | |-|  | -|  | -|  | -|  | -|  | -|  | -|  | -|  | -|  | -|  | -|  | +|  | +|  | +|  | +|  | +|  | +|  | +|  | +|  | +|  | +|  | +|  | ##### Errors |
digital-twins | How To Use Tags | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/how-to-use-tags.md | A *marker tag* is a simple string that is used to mark or categorize a digital t ### Add marker tags to model -Marker tags are modeled as a [DTDL](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/dtdlv2.md) Map from `string` to `boolean`. The boolean `mapValue` is ignored, as the presence of the tag is all that's important. +Marker tags are modeled as a [DTDL](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/DTDL.v2.md) Map from `string` to `boolean`. The boolean `mapValue` is ignored, as the presence of the tag is all that's important. Here's an excerpt from a twin model implementing a marker tag as a property: A *value tag* is a key-value pair that is used to give each tag a value, such as ### Add value tags to model -Value tags are modeled as a [DTDL](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/dtdlv2.md) Map from `string` to `string`. Both the `mapKey` and the `mapValue` are significant. +Value tags are modeled as a [DTDL](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/DTDL.v2.md) Map from `string` to `string`. Both the `mapKey` and the `mapValue` are significant. Here's an excerpt from a twin model implementing a value tag as a property: |
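The tag excerpts that the how-to-use-tags row above points to aren't included in the diff. As a hedged sketch (the `tags`, `tagName`, and `tagValue` names are invented for illustration), a value tag modeled as a DTDL Map from `string` to `string` might look like the following:

```json
{
  "@type": "Property",
  "name": "tags",
  "schema": {
    "@type": "Map",
    "mapKey": {
      "name": "tagName",
      "schema": "string"
    },
    "mapValue": {
      "name": "tagValue",
      "schema": "string"
    }
  }
}
```

For a marker tag, the `mapValue` schema would be `boolean` instead, and, as the row notes, the boolean value itself is ignored.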
digital-twins | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/overview.md | Take advantage of your domain expertise on top of Azure Digital Twins to build c In Azure Digital Twins, you define the digital entities that represent the people, places, and things in your physical environment using custom twin types called [models](concepts-models.md). -You can think of these model definitions as a specialized vocabulary to describe your business. For a building management solution, for example, you might define a model that defines a Building type, a Floor type, and an Elevator type. Models are defined in a JSON-like language called [Digital Twins Definition Language (DTDL)](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/dtdlv2.md), and they describe types of entities according to their state properties, telemetry events, commands, components, and relationships. You can design your own model sets from scratch, or get started with a pre-existing set of [DTDL industry ontologies](concepts-ontologies.md) based on common vocabulary for your industry. +You can think of these model definitions as a specialized vocabulary to describe your business. For a building management solution, for example, you might define a model that defines a Building type, a Floor type, and an Elevator type. Models are defined in a JSON-like language called [Digital Twins Definition Language (DTDL)](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/DTDL.v2.md), and they describe types of entities according to their state properties, telemetry events, commands, components, and relationships. You can design your own model sets from scratch, or get started with a pre-existing set of [DTDL industry ontologies](concepts-ontologies.md) based on common vocabulary for your industry. >[!TIP] >DTDL is also used for data models throughout other Azure IoT services, including [IoT Plug and Play](../iot-develop/overview-iot-plug-and-play.md) and [Time Series Insights](../time-series-insights/overview-what-is-tsi.md). This compatibility helps you connect your Azure Digital Twins solution with other parts of the Azure ecosystem. |
energy-data-services | Troubleshoot Manifest Ingestion | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/troubleshoot-manifest-ingestion.md | + + Title: Troubleshoot manifest ingestion in Microsoft Energy Data Services Preview #Required; this page title is displayed in search results; Always include the word "troubleshoot" in this line. +description: Find out how to troubleshoot manifest ingestion using Airflow task logs #Required; this article description is displayed in search results. ++++ Last updated : 02/06/2023+++# Troubleshoot manifest ingestion issues using Airflow task logs +This article helps you troubleshoot manifest ingestion workflow issues in a Microsoft Energy Data Services Preview instance using the Airflow task logs. ++## Manifest ingestion DAG workflow types +The manifest ingestion workflow has two types: +- Single manifest +- Batch upload ++### Single manifest +A single manifest file is used to trigger the manifest ingestion workflow. ++|DagTaskName |Description | +||| +|**Update_status_running_task** | Calls the Workflow service and marks the status of the DAG as running in the database | +|**Check_payload_type** | Validates whether the ingestion is of batch type or single manifest| +|**Validate_manifest_schema_task** | Ensures all the schema kinds mentioned in the manifest are present and there's referential schema integrity. All invalid values are evicted from the manifest | +|**Provide_manifest_integrity_task** | Validates references inside the OSDU™ R3 manifest and removes invalid entities. This operator is responsible for parent-child validation. All orphan-like entities are logged and excluded from the validated manifest. Any externally referenced records are searched for and, if not found, the manifest entity is dropped. All surrogate key references are also resolved | +|**Process_single_manifest_file_task** | Performs ingestion of the final manifest entities obtained from the previous step; data records are ingested via the Storage service | +|**Update_status_finished_task** | Calls the Workflow service and marks the status of the DAG as `finished` or `failed` in the database | ++### Batch upload +Multiple manifest files are part of the same workflow service request; that is, the manifest section in the request payload is a list instead of a dictionary of items. ++|DagTaskName |Description | +||| +|**Update_status_running_task** | Calls the Workflow service and marks the status of the DAG as running in the database | +|**Check_payload_type** | Validates whether the ingestion is of batch type or single manifest| +|**Batch_upload** | The list of manifests is divided into three batches to be processed in parallel (no task logs are emitted) | +|**Process_manifest_task_(1 / 2 / 3)** | The list of manifests is divided into groups of three and processed by these tasks. All the steps performed in Validate_manifest_schema_task, Provide_manifest_integrity_task, and Process_single_manifest_file_task are condensed and performed sequentially in these tasks | +|**Update_status_finished_task** | Calls the Workflow service and marks the status of the DAG as `finished` or `failed` in the database | + +Based on the payload type (single or batch), the `check_payload_type` task picks the appropriate branch, and the tasks in the other branch are skipped. +++## Prerequisites +You should have integrated Airflow task logs with Azure Monitor. 
See [Integrate Airflow logs with Azure Monitor](how-to-integrate-airflow-logs-with-azure-monitor.md). ++The following columns are exposed in Airflow task logs to help you debug the issue: ++|Parameter Name |Description | +||| +|**Run Id** | Unique run ID of the DAG run that was triggered | +|**Correlation ID** | Unique correlation ID of the DAG run (same as the run ID) | +|**DagName** | DAG workflow name. For instance, `Osdu_ingest` for manifest ingestion | +|**DagTaskName** | DAG workflow task name. For instance, `Update_status_running_task` for manifest ingestion | +|**Content** | Contains error log messages (errors/exceptions) emitted by Airflow during the task execution| +|**LogTimeStamp** | Captures the time interval of DAG runs | +|**LogLevel** | DEBUG/INFO/WARNING/ERROR. Most exception and error messages can be seen by filtering at the ERROR level | +++## Cause 1: A DAG run has failed in the Update_status_running_task or Update_status_finished_task +The workflow run has failed and the data records weren't ingested. ++**Possible reasons** +* Provided incorrect data partition ID +* Provided incorrect key name in the execution context of the request body +* Workflow service isn't running or is throwing 5xx errors ++**Workflow status** +* Workflow status is marked as `failed`. ++### Solution: Check the Airflow task logs for `update_status_running_task` or `update_status_finished_task`. Fix the payload (pass the correct data partition ID or key name) ++**Sample Kusto query** +```kusto + AirflowTaskLogs + | where DagName == "Osdu_ingest" + | where DagTaskName == "update_status_running_task" + | where LogLevel == "ERROR" // ERROR/DEBUG/INFO/WARNING + | where RunID == '<run_id>' +``` ++**Sample trace output** +```md + [2023-02-05, 12:21:54 IST] {taskinstance.py:1703} ERROR - Task failed with exception + Traceback (most recent call last): + File "/home/airflow/.local/lib/python3.8/site-packages/osdu_ingestion/libs/context.py", line 50, in populate + data_partition_id = ctx_payload['data-partition-id'] + KeyError: 'data-partition-id' + + requests.exceptions.HTTPError: 403 Client Error: Forbidden for url: https://it1672283875.oep.ppe.azure-int.net/api/workflow/v1/workflow/Osdu_ingest/workflowRun/e9a815f2-84f5-4513-9825-4d37ab291264 +``` ++## Cause 2: Schema validation failures +Records weren't ingested due to schema validation failures. ++**Possible reasons** +* Schema not found errors +* Manifest body not conforming to the schema kind +* Incorrect schema references +* Schema service throwing 5xx errors + +**Workflow Status** +* Workflow status is marked as `finished`. No failure in the workflow status will be observed because the invalid entities are skipped and the ingestion is continued. ++### Solution: Check the Airflow task logs for `validate_manifest_schema_task` or `process_manifest_task`. 
Fix the manifest (reference schema kinds that exist in the Schema service and make sure the manifest body conforms to those schemas) ++**Sample Kusto query** +```kusto + AirflowTaskLogs + | where DagName has "Osdu_ingest" + | where DagTaskName == "validate_manifest_schema_task" or DagTaskName has "process_manifest_task" + | where LogLevel == "ERROR" + | where RunID == "<run_id>" + | order by ['time'] asc +``` ++**Sample trace output** +```md + Error traces to look out for + [2023-02-05, 14:55:37 IST] {connectionpool.py:452} DEBUG - https://it1672283875.oep.ppe.azure-int.net:443 "GET /api/schema-service/v1/schema/osdu:wks:work-product-component--WellLog:2.2.0 HTTP/1.1" 404 None + [2023-02-05, 14:55:37 IST] {authorization.py:137} ERROR - {"error":{"code":404,"message":"Schema is not present","errors":[{"domain":"global","reason":"notFound","message":"Schema is not present"}]}} + [2023-02-05, 14:55:37 IST] {validate_schema.py:170} ERROR - Error on getting schema of kind 'osdu:wks:work-product-component--WellLog:2.2.0' + [2023-02-05, 14:55:37 IST] {validate_schema.py:171} ERROR - 404 Client Error: Not Found for url: https://it1672283875.oep.ppe.azure-int.net/api/schema-service/v1/schema/osdu:wks:work-product-component--WellLog:2.2.0 + [2023-02-05, 14:55:37 IST] {validate_schema.py:314} WARNING - osdu:wks:work-product-component--WellLog:2.2.0 is not present in Schema service. + [2023-02-05, 15:01:23 IST] {validate_schema.py:322} ERROR - Schema validation error. Data field. + [2023-02-05, 15:01:23 IST] {validate_schema.py:323} ERROR - Manifest kind: osdu:wks:work-product-component--WellLog:1.1.0 + [2023-02-05, 15:01:23 IST] {validate_schema.py:324} ERROR - Error: 'string-value' is not of type 'number' + + Failed validating 'type' in schema['properties']['data']['allOf'][3]['properties']['SamplingStop']: + {'description': 'The stop value/last value of the ReferenceCurveID, ' + 'typically the end depth of the logging.', + 'example': 7500, + 'title': 'Sampling Stop', + 'type': 'number', + 'x-osdu-frame-of-reference': 'UOM'} + + On instance['data']['SamplingStop']: + 'string-value' +``` ++## Cause 3: Failed reference checks +Records weren't ingested due to failed reference checks. ++**Possible reasons** +* Failed to find referenced records +* Parent records not found +* Search service throwing 5xx errors + +**Workflow Status** +* Workflow status is marked as `finished`. No failure in the workflow status will be observed because the invalid entities are skipped and the ingestion is continued. ++### Solution: Check the Airflow task logs for `provide_manifest_integrity_task` or `process_manifest_task`. ++**Sample Kusto query** +```kusto + AirflowTaskLogs + | where DagName has "Osdu_ingest" + | where DagTaskName == "provide_manifest_integrity_task" or DagTaskName has "process_manifest_task" + | where Content has 'Search query "' or Content has 'response ids: [' + | where RunID has "<run_id>" +``` ++**Sample trace output** +Because there are no error logs specific to referential integrity tasks, watch the debug log statements to see whether all external records were fetched using the Search service. 
++For instance, the following output shows the records queried using the Search service for referential integrity: +```md + [2023-02-05, 19:14:40 IST] {search_record_ids.py:75} DEBUG - Search query "it1672283875-dp1:work-product-component--WellLog:5ab388ae0e140838c297f0e6559" OR "it1672283875-dp1:work-product-component--WellLog:5ab388ae0e1b40838c297f0e6559" OR "it1672283875-dp1:work-product-component--WellLog:5ab388ae0e1b40838c297f0e6559758a" +``` +The next output shows the record IDs that were retrieved and exist in the system. If some of the referenced records aren't present, the manifest entity that references them is dropped and isn't ingested. ++```md + [2023-02-05, 19:14:40 IST] {search_record_ids.py:141} DEBUG - response ids: ['it1672283875-dp1:work-product-component--WellLog:5ab388ae0e1b40838c297f0e6559758a:1675590506723615', 'it1672283875-dp1:work-product-component--WellLog:5ab388ae0e1b40838c297f0e6559758a '] +``` +In the coming release, we plan to enhance the logs by appropriately logging skipped records with reasons. ++## Cause 4: Invalid legal tags or ACLs in the manifest +Records weren't ingested due to invalid legal tags or ACLs present in the manifest. ++**Possible reasons** +* Incorrect ACLs +* Incorrect legal tags +* Storage service throws 5xx errors + +**Workflow Status** +* Workflow status is marked as `finished`. No failure in the workflow status will be observed. ++### Solution: Check the Airflow task logs for `process_single_manifest_file_task` or `process_manifest_task`. ++**Sample Kusto query** +```kusto + AirflowTaskLogs + | where DagName has "Osdu_ingest" + | where DagTaskName == "process_single_manifest_file_task" or DagTaskName has "process_manifest_task" + | where LogLevel == "ERROR" + | where RunID has "<run_id>" + | order by ['time'] asc +``` ++**Sample trace output** ++```md + "PUT /api/storage/v2/records HTTP/1.1" 400 None + [2023-02-05, 16:57:05 IST] {authorization.py:137} ERROR - {"code":400,"reason":"Invalid legal tags","message":"Invalid legal tags: it1672283875-dp1-R3FullManifest-Legal-Tag-Test779759112"} + +``` +The following output shows a record rejected because of an invalid ACL group name; the affected manifest entity is dropped and isn't ingested. ++```md + "PUT /api/storage/v2/records HTTP/1.1" 400 None + [2023-02-05, 16:58:46 IST] {authorization.py:137} ERROR - {"code":400,"reason":"Validation error.","message":"createOrUpdateRecords.records[0].acl: Invalid group name 'data1.default.viewers@it1672283875-dp1.dataservices.energy'"} + [2023-02-05, 16:58:46 IST] {single_manifest_processor.py:83} WARNING - Can't process entity SRN: surrogate-key:0ef20853-f26a-456f-b874-3f2f5f35b6fb +``` ++## Known issues +- Exception traces weren't being exported with Airflow Task Logs due to a known problem; the patch has been submitted and will be included in the February release. +- Since there are no specific error logs for referential integrity tasks, you must manually search the debug log statements to see whether all external records were retrieved via the Search service. We intend to improve the logs in the upcoming release by properly logging skipped data with justifications. 
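Before drilling into a specific cause, it can help to get a quick summary of everything that went wrong in a run. The following query is a minimal Kusto sketch that assumes the same `AirflowTaskLogs` table and columns (`DagName`, `DagTaskName`, `LogLevel`, `RunID`, `Content`) used in the sample queries above; adjust the names if your workspace differs.

```kusto
// Summarize ERROR entries per task for a single manifest ingestion run.
AirflowTaskLogs
| where DagName has "Osdu_ingest"
| where RunID == "<run_id>"
| where LogLevel == "ERROR"
| summarize ErrorCount = count(), SampleMessage = any(Content) by DagTaskName
| order by ErrorCount desc
```

The task names returned by this query point you to the matching cause section above (for example, `update_status_running_task` for Cause 1 or `validate_manifest_schema_task` for Cause 2).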
+++## Next steps +Advance to the manifest ingestion tutorial and learn how to perform a manifest-based file ingestion. +> [!div class="nextstepaction"] +> [Tutorial: Sample steps to perform a manifest-based file ingestion](tutorial-manifest-ingestion.md) ++## Reference +- [Manifest-based ingestion concepts](concepts-manifest-ingestion.md) +- [Ingestion DAGs](https://community.opengroup.org/osdu/platform/data-flow/ingestion/ingestion-dags/-/blob/master/README.md#operators-description) |
event-hubs | Event Hubs Geo Dr | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/event-hubs-geo-dr.md | Note the following considerations to keep in mind: 5. Synchronizing entities can take some time, approximately 50-100 entities per minute. +6. Some aspects of the management plane for the secondary namespace become read-only while geo-recovery pairing is active. ++7. The data plane of the secondary namespace will be read-only while geo-recovery pairing is active. The data plane of the secondary namespace will accept GET requests to enable validation of client connectivity and access controls. + ## Availability Zones Event Hubs supports [Availability Zones](../availability-zones/az-overview.md), providing fault-isolated locations within an Azure region. The Availability Zones support is only available in [Azure regions with availability zones](../availability-zones/az-region.md#azure-regions-with-availability-zones). Both metadata and data (events) are replicated across data centers in the availability zone. |
firewall | Deploy Availability Zone Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/deploy-availability-zone-powershell.md | This feature enables the following scenarios: - You can increase availability to 99.99% uptime. For more information, see the Azure Firewall [Service Level Agreement (SLA)](https://azure.microsoft.com/support/legal/sla/azure-firewall/v1_0/). The 99.99% uptime SLA is offered when two or more Availability Zones are selected. - You can also associate Azure Firewall to a specific zone just for proximity reasons, using the service standard 99.95% SLA. -For more information about Azure Firewall Availability Zones, see [What is Azure Firewall?](overview.md) +For more information about Azure Firewall Availability Zones, see [Azure Firewall Standard features](features.md#availability-zones). The following Azure PowerShell example shows how you can deploy an Azure Firewall with Availability Zones. |
firewall | Features | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/features.md | Azure Firewall can be configured during deployment to span multiple Availability You can also associate Azure Firewall to a specific zone just for proximity reasons, using the service standard 99.95% SLA. -There's no additional cost for a firewall deployed in more than one Availability Zone. However, there are added costs for inbound and outbound data transfers associated with Availability Zones. For more information, see [Bandwidth pricing details](https://azure.microsoft.com/pricing/details/bandwidth/). +There's no extra cost for a firewall deployed in more than one Availability Zone. However, there are added costs for inbound and outbound data transfers associated with Availability Zones. For more information, see [Bandwidth pricing details](https://azure.microsoft.com/pricing/details/bandwidth/). ++As the firewall scales, it creates instances in the zones it's in. So, if the firewall is in Zone 1 only, new instances are created in Zone 1. If the firewall is in all three zones, then it creates instances across the three zones as it scales. Azure Firewall Availability Zones are available in regions that support Availability Zones. For more information, see [Regions that support Availability Zones in Azure](../availability-zones/az-region.md) Azure Firewall can also resolve names using Azure Private DNS. The virtual netwo You can use fully qualified domain names (FQDNs) in network rules based on DNS resolution in Azure Firewall and Firewall Policy. -The specified FQDNs in your rule collections are translated to IP addresses based on your firewall DNS settings. This capability allows you to filter outbound traffic using FQDNs with any TCP/UDP protocol (including NTP, SSH, RDP, and more). As this capability is based on DNS resolution, it is highly recommended you enable the DNS proxy to ensure name resolution is consistent with your protected virtual machines and firewall. +The specified FQDNs in your rule collections are translated to IP addresses based on your firewall DNS settings. This capability allows you to filter outbound traffic using FQDNs with any TCP/UDP protocol (including NTP, SSH, RDP, and more). As this capability is based on DNS resolution, it's highly recommended you enable the DNS proxy to ensure name resolution is consistent with your protected virtual machines and firewall. ## Deploy Azure Firewall without public IP address in Forced Tunnel mode The Azure Firewall service requires a public IP address for operational purposes. While secure, some deployments prefer not to expose a public IP address directly to the Internet. -In such cases, you can deploy Azure Firewall in Forced Tunnel mode. This configuration creates a management NIC which is used by Azure Firewall for its operations. The Tenant Datapath network can be configured without a public IP address, and Internet traffic can be forced tunneled to another firewall or completely blocked. +In such cases, you can deploy Azure Firewall in Forced Tunnel mode. This configuration creates a management NIC that is used by Azure Firewall for its operations. The Tenant Datapath network can be configured without a public IP address, and Internet traffic can be forced tunneled to another firewall or completely blocked. -Forced Tunnel mode cannot be configured at run time. You can either redeploy the Firewall or use the stop and start facility to reconfigure an existing Azure Firewall in Forced Tunnel mode. 
Firewalls deployed in Secure Hubs are always deployed in Forced Tunnel mode. +Forced Tunnel mode can't be configured at run time. You can either redeploy the Firewall or use the stop and start facility to reconfigure an existing Azure Firewall in Forced Tunnel mode. Firewalls deployed in Secure Hubs are always deployed in Forced Tunnel mode. ## Outbound SNAT support This enables the following scenarios: ## Azure Monitor logging -All events are integrated with Azure Monitor, allowing you to archive logs to a storage account, stream events to your Event Hub, or send them to Azure Monitor logs. For Azure Monitor log samples, see [Azure Monitor logs for Azure Firewall](./firewall-workbook.md). +All events are integrated with Azure Monitor, allowing you to archive logs to a storage account, stream events to your event hub, or send them to Azure Monitor logs. For Azure Monitor log samples, see [Azure Monitor logs for Azure Firewall](./firewall-workbook.md). For more information, see [Tutorial: Monitor Azure Firewall logs and metrics](./firewall-diagnostics.md). |
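For the Azure Monitor logging capability described above, a quick way to inspect Azure Firewall rule logs is a Log Analytics query. The following is a minimal Kusto sketch that assumes the classic `AzureDiagnostics` table and the standard Azure Firewall application- and network-rule log categories; if you use resource-specific tables, the table and column names (such as `msg_s`) will differ.

```kusto
// List recent Azure Firewall application and network rule log entries.
AzureDiagnostics
| where ResourceType == "AZUREFIREWALLS"
| where Category in ("AzureFirewallApplicationRule", "AzureFirewallNetworkRule")
| where TimeGenerated > ago(1h)
| project TimeGenerated, Category, msg_s
| order by TimeGenerated desc
```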
firewall | Threat Intel | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/threat-intel.md | You can choose to just log an alert when a rule is triggered, or you can choose By default, threat intelligence-based filtering is enabled in alert mode. You can't turn off this feature or change the mode until the portal interface becomes available in your region. +You can define allowlists so threat intelligence won't filter traffic to any of the listed FQDNs, IP addresses, ranges, or subnets. ++For a batch operation, you can upload a CSV file with a list of IP addresses, ranges, and subnets. + ## Logs The following log excerpt shows a triggered rule: - **Outbound testing** - Outbound traffic alerts should be a rare occurrence, as it means that your environment has been compromised. To help test outbound alerts are working, a test FQDN has been created that triggers an alert. Use `testmaliciousdomain.eastus.cloudapp.azure.com` for your outbound tests. -- **Inbound testing** - You can expect to see alerts on incoming traffic if DNAT rules are configured on the firewall. This is true even if only specific sources are allowed on the DNAT rule and traffic is otherwise denied. Azure Firewall doesn't alert on all known port scanners; only on scanners that are known to also engage in malicious activity.+- **Inbound testing** - You can expect to see alerts on incoming traffic if DNAT rules are configured on the firewall. You'll see alerts even if only specific sources are allowed on the DNAT rule and traffic is otherwise denied. Azure Firewall doesn't alert on all known port scanners; only on scanners that are known to also engage in malicious activity. ## Next steps |
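To verify that the outbound test FQDN mentioned above actually produced a threat intelligence alert, you can query the firewall logs in Log Analytics. This is a rough sketch that assumes classic `AzureDiagnostics` logging and that alert messages contain the string `ThreatIntel`; adjust the filters to match your log schema.

```kusto
// Look for threat intelligence alerts, including ones triggered by the outbound test FQDN.
AzureDiagnostics
| where ResourceType == "AZUREFIREWALLS"
| where msg_s has "ThreatIntel" or msg_s has "testmaliciousdomain.eastus.cloudapp.azure.com"
| project TimeGenerated, Category, msg_s
| order by TimeGenerated desc
```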
healthcare-apis | Known Issues | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/known-issues.md | Refer to the table below to find details about resolution dates or possible work | :- | : | :- | :- | |Using [token type](https://www.hl7.org/fhir/search.html#token) fields of length more than 128 characters can result in undesired behavior on `create`, `search`, `update`, and `delete` operations. | August 2022 |No workaround | Not resolved | |The SQL provider will cause the `RawResource` column in the database to save incorrectly. This occurs in a small number of cases when a transient exception occurs that causes the provider to use its retry logic. |April 2022 |-|May 2022 Resolved [#2571](https://github.com/microsoft/fhir-server/pull/2571) |-| Queries not providing consistent result counts after appended with `_sort` operator. For more information, see [#2680](https://github.com/microsoft/fhir-server/pull/2680). | July 2022 | No workaround|Not resolved | +| Queries not providing consistent result counts after appended with `_sort` operator. For more information, see [#2680](https://github.com/microsoft/fhir-server/pull/2680). | July 2022 | No workaround|August 2022 Resolved | ## Next steps |
healthcare-apis | Release Notes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/release-notes.md | ->[Note] > Azure Health Data Services is Generally Available. > >For more information about Azure Health Data Services Service Level Agreements, see [SLA for Azure Health Data Services](https://azure.microsoft.com/support/legal/sla/health-data-services/v1_1/). Azure Health Data Services is a set of managed API services based on open standards and frameworks for the healthcare industry. They enable you to build scalable and secure healthcare solutions by bringing protected health information (PHI) datasets together and connecting them end-to-end with tools for machine learning, analytics, and AI. This document provides details about the features and enhancements made to Azure Health Data Services including the different service types (FHIR service, DICOM service, and MedTech service) that seamlessly work with one another.-## December 2022 +## **December 2022** -### Azure Health Data Services +#### Azure Health Data Services **Azure Health Data services General Available (GA) in new regions** General availability (GA) of Azure Health Data services in France Central, North Central US and Qatar Central Regions. -### DICOM service +#### DICOM service **DICOM Events available in public preview** General availability (GA) of Azure Health Data services in France Central, North Azure Health Data Services [Events](events/events-overview.md) now include a public preview of [two new event types](events/events-message-structure.md#dicom-events-message-structure) for the DICOM service. These new event types enable applications that use Event Grid to use event-driven workflows when DICOM images are created or deleted. -## November 2022 -### FHIR service +## **November 2022** +#### FHIR service **Fixed the Error generated when resource is updated using if-match header and PATCH** Bug is now fixed and Resource will be updated if matches the Etag header. For details , see [#2877](https://github.com/microsoft/fhir-server/issues/2877) -### Toolkit and Samples Open Source +#### Toolkit and Samples Open Source **Azure Health Data Services Toolkit is released** The [Azure Health Data Services Toolkit](https://github.com/microsoft/azure-health-data-services-toolkit), which was previously in a pre-release state, is now in **Public Preview** . The toolkit is open-source project and allows customers to more easily customize and extend the functionality of their Azure Health Data Services implementations. The NuGet packages of the toolkit are available for download from the NuGet gallery, and you can find links to them from the repo documentation. -## October 2022 -### MedTech service +## **October 2022** +#### MedTech service **Added Deploy to Azure button** The [Azure Health Data Services Toolkit](https://github.com/microsoft/azure-heal Customers can now determine if their mappings are working as intended, as they can now see dropped events as a metric to ensure that data is flowing through accurately. -## September 2022 +## **September 2022** -### Azure Health Data Services +#### Azure Health Data Services -#### **Bug Fixes** +**Fixed issue where Querying with :not operator was returning more results than expected** -| Bug Fix |Related information | -| :- | : | -| Querying with :not operator was returning more results than expected | The issue is now fixed and querying with :not operator should provide correct results. 
For more information, see [#2790](https://github.com/microsoft/fhir-server/pull/2785). | +The issue is now fixed and querying with :not operator should provide correct results. For more information, see [#2790](https://github.com/microsoft/fhir-server/pull/2785). | -#### **Known Issues** -| Known Issue | Description | -| : | :- | -| Using [token type](https://nam06.safelinks.protection.outlook.com/?url=https%3A%2F%2Fwww.hl7.org%2Ffhir%2Fsearch.html%23token&data=05%7C01%7CKetki.Sheth%40microsoft.com%7C7ec4c7dad9b940b74a8508da60395511%7C72f988bf86f141af91ab2d7cd011db47%7C1%7C0%7C637928096596122743%7CUnknown%7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0%3D%7C3000%7C%7C%7C&sdata=VmAG2iHUyxtNZI88HqKeSSwFV28zFSs2qgkAQRPnZ%2Bw%3D&reserved=0) fields of length more than 128 characters can result in undesired behavior on create, search, update, and delete operations | No workaround. | -### FHIR Service +#### FHIR Service -#### **Bug Fixes** -| Bug Fix |Related information | -| :- | : | -| Error message is provided for failure in export resulting from long time span | With failure in export job due to a long time span, customer will see `RequestEntityTooLarge` HTTP status code. For more information, see [#2790](https://github.com/microsoft/fhir-server/pull/2790).| -|In a query sort, functionality throws an error when chained search is performed with same field value. | Sort functionality returns a response. For more information, see [#2794](https://github.com/microsoft/fhir-server/pull/2794). -| Server doesn't indicate `_text` not supported | When passed as URL parameter,`_text` returns an error in response when using the `Prefer` heading with `value handling=strict`. For more information, see [#2779](https://github.com/microsoft/fhir-server/pull/2779). | -| Verbose error message is not provided for invalid resource type | Verbose error message is added when resource type is invalid or empty for `_include` and `_revinclude` searches. For more information, see [#2776](https://github.com/microsoft/fhir-server/pull/2776). +**Provided an Error message for failure in export resulting from long time span** -### DICOM service +With failure in export job due to a long time span, customer will see `RequestEntityTooLarge` HTTP status code. For more information, see [#2790](https://github.com/microsoft/fhir-server/pull/2790). -#### **Features** +**Fixed issue in a query sort, where functionality throws an error when chained search is performed with same field value.** -| Enhancements/Improvements | Related information | -| : | :- | -| Export is GA |The export feature for the DICOM service is now generally available. Export enables a user-supplied list of studies, series, and/or instances to be exported in bulk to an Azure Storage account. Learn more about the [export feature](dicom/export-dicom-files.md). | -|Improved deployment performance |Performance improvements have cut the time to deploy new instances of the DICOM service by more than 55% at the 50th percentile. | -| Reduced strictness when validating STOW requests |Some customers have run into issues storing DICOM files that do not perfectly conform to the specification. To enable those files to be stored in the DICOM service, we have reduced the strictness of the validation performed on STOW. <p>The service will now accept the following: <p><ul><li>DICOM UIDs that contain trailing whitespace <li>IS, DS, SV, and UV VRs that are not valid numbers<li>Invalid private creator tags | +The functionality now returns a response. 
For more information, see [#2794](https://github.com/microsoft/fhir-server/pull/2794). -### Toolkit and Samples Open Source +**Fixed issue where Server doesn't indicate `_text` not supported** -#### **Features** + When passed as URL parameter,`_text` returns an error in response when using the `Prefer` heading with `value handling=strict`. For more information, see [#2779](https://github.com/microsoft/fhir-server/pull/2779). -| Enhancements/Improvements | Related information | -| : | :- | -| Azure Health Data Services Toolkit | The [Azure Health Data Services Toolkit](https://github.com/microsoft/azure-health-data-services-toolkit) is now in the public preview. The toolkit is open-source and allows to easily customize and extend the functionality of their Azure Health Data Services implementations. | +**Added a Verbose error message for invalid resource type** -## August 2022 +Verbose error message is added when resource type is invalid or empty for `_include` and `_revinclude` searches. For more information, see [#2776](https://github.com/microsoft/fhir-server/pull/2776). -### FHIR service +#### DICOM service -#### **Features** -| Enhancements | Related information | -| : | :- | -| Azure Health Data services availability expands to new regions | Azure Health Data services is now available in the following regions: Central India, Korea Central, and Sweden Central. -| `$import` is generally available. | `$import` API is now generally available in Azure Health Data Services API version 2022-06-01. See [Executing the import](./../healthcare-apis/fhir/import-data.md) by invoking the `$import` operation on FHIR service in Azure Health Data Services. -| `$convert-data` updated by adding STU3-R4 support. |`$convert-data` added support for FHIR STU3-R4 conversion. See [Data conversion for Azure API for FHIR](./../healthcare-apis/azure-api-for-fhir/convert-data.md). | -| Analytics pipeline now supports data filtering. | Data filtering is now supported in FHIR to data lake pipeline. See [FHIR-Analytics-Pipelines_Filter FHIR data](https://github.com/microsoft/FHIR-Analytics-Pipelines/blob/main/FhirToDataLake/docs/Filter%20FHIR%20data%20in%20pipeline.md) microsoft/FHIR-Analytics-Pipelines github.com. | -| Analytics pipeline now supports FHIR extensions. | Analytics pipeline can process FHIR extensions to generate parquet data. See [FHIR-Analytics-Pipelines_Process](https://github.com/microsoft/FHIR-Analytics-Pipelines/blob/main/FhirToDataLake/docs/Process%20FHIR%20extensions.md) in pipeline.md at main. +**Export is Generally Available (GA)** -#### **Bug fixes** +The export feature for the DICOM service is now generally available. Export enables a user-supplied list of studies, series, and/or instances to be exported in bulk to an Azure Storage account. Learn more about the [export feature](dicom/export-dicom-files.md). -|Bug fixes |Related information | -| :-- | : | -| History bundles were sorted with the oldest version first |We've recently identified an issue with the sorting order of history bundles on FHIR® server. History bundles were sorted with the oldest version first. Per FHIR specification, the sorting of versions defaults to the oldest version last. <br><br>This bug fix, addresses FHIR server behavior for sorting history bundle. <br><br>We understand if you would like to keep the sorting per existing behavior (oldest version first). To support existing behavior, we recommend you append `_sort=_lastUpdated` to the HTTP GET command utilized for retrieving history. 
<br><br>For example: `<server URL>/_history?_sort=_lastUpdated` <br><br>For more information, see [#2689](https://github.com/microsoft/fhir-server/pull/2689).  | -| Queries not providing consistent result count after appended with `_sort` operator. | Issue is now fixed and queries should provide consistent result count, with and without sort operator. +**Improved deployment performance** -#### **Known issues** +Performance improvements have cut the time to deploy new instances of the DICOM service by more than 55% at the 50th percentile. -| Known Issue | Description | -| : | :- | -| Using [token type fields](https://www.hl7.org/fhir/search.html#token) of more than 128 characters in length can result in undesired behavior on `create`, `search`, `update`, and `delete` operations. | Currently, no workaround available. | +**Reduced strictness when validating STOW requests** -For more information about the currently known issues with the FHIR service, see [Known issues: FHIR service](known-issues.md). +Some customers have run into issues storing DICOM files that do not perfectly conform to the specification. To enable those files to be stored in the DICOM service, we have reduced the strictness of the validation performed on STOW. -### MedTech service +The service will now accept the following: +* DICOM UIDs that contain trailing whitespace +* IS, DS, SV, and UV VRs that are not valid numbers +* Invalid private creator tags -#### **Features and enhancements** +#### Toolkit and Samples Open Source -|Enhancements | Related information | -| : | :- | -|New Metric Chart |Customers can now see predefined metrics graphs in the MedTech landing page, complete with alerts to ease customers' burden of monitoring their MedTech service. | -|Availability of Diagnostic Logs |There are now pre-defined queries with relevant logs for common issues so that customers can easily debug and diagnose issues in their MedTech service. | +**The [Azure Health Data Services Toolkit](https://github.com/microsoft/azure-health-data-services-toolkit) is now in the public preview.** -### DICOM service +The toolkit is open-source and allows to easily customize and extend the functionality of their Azure Health Data Services implementations. -#### **Features and enhancements** +## **August 2022** -|Enhancements | Related information | -| : | :- | -|Modality worklists (UPS-RS) is GA. |The modality worklists (UPS-RS) service is now generally available. Learn more about the [worklists service](./../healthcare-apis/dicom/dicom-services-conformance-statement.md). | +#### FHIR service -## July 2022 +**Azure Health Data services availability expands to new regions** ++ Azure Health Data services is now available in the following regions: Central India, Korea Central, and Sweden Central. + +**`$import` is Generally Available.** ++ `$import` API is now generally available in Azure Health Data Services API version 2022-06-01. See [Executing the import](./../healthcare-apis/fhir/import-data.md) by invoking the `$import` operation on FHIR service in Azure Health Data Services. ++**`$convert-data` updated by adding STU3-R4 support.** ++`$convert-data` added support for FHIR STU3-R4 conversion. See [Data conversion for Azure API for FHIR](./../healthcare-apis/azure-api-for-fhir/convert-data.md). ++ +**Analytics pipeline now supports data filtering.** ++ Data filtering is now supported in FHIR to data lake pipeline. 
See [FHIR-Analytics-Pipelines_Filter FHIR data](https://github.com/microsoft/FHIR-Analytics-Pipelines/blob/main/FhirToDataLake/docs/Filter%20FHIR%20data%20in%20pipeline.md) microsoft/FHIR-Analytics-Pipelines github.com. +++**Analytics pipeline now supports FHIR extensions.** ++Analytics pipeline can process FHIR extensions to generate parquet data. See [FHIR-Analytics-Pipelines_Process](https://github.com/microsoft/FHIR-Analytics-Pipelines/blob/main/FhirToDataLake/docs/Process%20FHIR%20extensions.md) in pipeline.md at main. -### FHIR service -#### **Bug fixes** +**Fixed issue related to History bundles being sorted with the oldest version first.** ++We've recently identified an issue with the sorting order of history bundles on FHIR® server. History bundles were sorted with the oldest version first. Per FHIR specification, the sorting of versions defaults to the oldest version last.This bug fix, addresses FHIR server behavior for sorting history bundle. <br><br>We understand if you would like to keep the sorting per existing behavior (oldest version first). To support existing behavior, we recommend you append `_sort=_lastUpdated` to the HTTP GET command utilized for retrieving history. <br><br>For example: `<server URL>/_history?_sort=_lastUpdated` <br><br>For more information, see [#2689](https://github.com/microsoft/fhir-server/pull/2689). ++**Fixed issue where Queries were not providing consistent result count after appended with `_sort` operator.** +The issue is now fixed and queries should provide consistent result count, with and without sort operator. +++#### MedTech service +++**Added New Metric Chart** ++Customers can now see predefined metrics graphs in the MedTech landing page, complete with alerts to ease customers' burden of monitoring their MedTech service. ++**Availability of Diagnostic Logs** ++There are now pre-defined queries with relevant logs for common issues so that customers can easily debug and diagnose issues in their MedTech service. ++#### DICOM service +++**Modality worklists (UPS-RS) is Generally Available (GA)**. ++The modality worklists (UPS-RS) service is now generally available. Learn more about the [worklists service](./../healthcare-apis/dicom/dicom-services-conformance-statement.md). ++## July 2022 -|Bug fixes |Related information | -| :-- | : | -| (Open Source) History bundles were sorted with the oldest version first. | We've recently identified an issue with the sorting order of history bundles on FHIR® server. History bundles were sorted with the oldest version first. Per [FHIR specification](https://hl7.org/fhir/http.html#history), the sorting of versions defaults to the oldest version last. This bug fix, addresses FHIR server behavior for sorting history bundle.<br /><br />We understand if you would like to keep the sorting per existing behavior (oldest version first). To support existing behavior, we recommend you append `_sort=_lastUpdated` to the HTTP `GET` command utilized for retrieving history. <br /><br />For example: `<server URL>/_history?_sort=_lastUpdated` <br /><br />For more information, see [#2689](https://github.com/microsoft/fhir-server/pull/2689). +#### FHIR service -#### **Known issues** -| Known Issue | Description | -| : | :- | -| Using [token type fields](https://www.hl7.org/fhir/search.html#token) of more than 128 characters in length can result in undesired behavior on `create`, `search`, `update`, and `delete` operations. | Currently, no workaround available. 
| -| Queries not providing consistent result count after appended with `_sort` operator. For more information, see [#2680](https://github.com/microsoft/fhir-server/pull/2680). | Currently, no workaround available.| +**(Open Source) History bundles were sorted with the oldest version first.** +We've recently identified an issue with the sorting order of history bundles on FHIR® server. History bundles were sorted with the oldest version first. Per [FHIR specification](https://hl7.org/fhir/http.html#history), the sorting of versions defaults to the oldest version last. This bug fix, addresses FHIR server behavior for sorting history bundle.<br /><br />We understand if you would like to keep the sorting per existing behavior (oldest version first). To support existing behavior, we recommend you append `_sort=_lastUpdated` to the HTTP `GET` command utilized for retrieving history. <br /><br />For example: `<server URL>/_history?_sort=_lastUpdated` <br /><br />For more information, see [#2689](https://github.com/microsoft/fhir-server/pull/2689). -For more information about the currently known issues with the FHIR service, see [Known issues: FHIR service](known-issues.md). ### MedTech service -#### **Improvements** +**Improvements to documentations for Events and MedTech and availability zones.** -|Azure Health Data Services |Related information | -| :-- | : | -|Improvements to documentations for Events and MedTech and availability zones. |Tested and enhanced usability and functionality. Added new documents to enable customers to better take advantage of the new improvements. See [Consume Events with Logic Apps](./../healthcare-apis/events/events-deploy-portal.md) and [Deploy Events Using the Azure portal](./../healthcare-apis/events/events-deploy-portal.md). | -|One touch launch Azure MedTech deploy. |[Deploy the MedTech Service in the Azure portal](./../healthcare-apis/iot/deploy-iot-connector-in-azure.md)| +Tested and enhanced usability and functionality. Added new documents to enable customers to better take advantage of the new improvements. See [Consume Events with Logic Apps](./../healthcare-apis/events/events-deploy-portal.md) and [Deploy Events Using the Azure portal](./../healthcare-apis/events/events-deploy-portal.md). -### DICOM service ++**One touch launch Azure MedTech deploy.** ++[Deploy the MedTech Service in the Azure portal](./../healthcare-apis/iot/deploy-iot-connector-in-azure.md) ++#### DICOM service #### **Features** -|Enhancements | Related information | -| : | :- | -|DICOM Service availability expands to new regions. | The DICOM Service is now available in the following [regions](https://azure.microsoft.com/global-infrastructure/services/): Southeast Asia, Central India, Korea Central, and Switzerland North. | -|Fast retrieval of individual DICOM frames | For DICOM images containing multiple frames, performance improvements have been made to enable fast retrieval of individual frames (60 KB frames as fast as 60 MS). These improved performance characteristics enable workflows such as [viewing digital pathology images](https://microsofthealth.visualstudio.com/DefaultCollection/Health/_git/marketing-azure-docs?version=GBmain&path=%2Fimaging%2Fdigital-pathology%2FDigital%20Pathology%20using%20Azure%20DICOM%20service.md&_a=preview), which require rapid retrieval of individual frames. 
| +**DICOM Service availability expands to new regions.** ++The DICOM Service is now available in the following [regions](https://azure.microsoft.com/global-infrastructure/services/): Southeast Asia, Central India, Korea Central, and Switzerland North. ++**Fast retrieval of individual DICOM frames** ++For DICOM images containing multiple frames, performance improvements have been made to enable fast retrieval of individual frames (60 KB frames as fast as 60 MS). These improved performance characteristics enable workflows such as [viewing digital pathology images](https://microsofthealth.visualstudio.com/DefaultCollection/Health/_git/marketing-azure-docs?version=GBmain&path=%2Fimaging%2Fdigital-pathology%2FDigital%20Pathology%20using%20Azure%20DICOM%20service.md&_a=preview), which require rapid retrieval of individual frames. ## June 2022 -### FHIR service +#### FHIR service ++**Fixed issue with Export Job not being queued for execution.** +Fixes issue with export job not being queued due to duplicate job definition caused due to reference to container URL. For more information, see [#2648](https://github.com/microsoft/fhir-server/pull/2648). -#### **Bug fixes** +**Fixed issue related to Queries not providing consistent result count after appended with the `_sort` operator.** -|Bug fixes |Related information | -| :-- | : | -|Export Job not being queued for execution. |Fixes issue with export job not being queued due to duplicate job definition caused due to reference to container URL. For more information, see [#2648](https://github.com/microsoft/fhir-server/pull/2648). | -|Queries not providing consistent result count after appended with the `_sort` operator. |Fixes the issue with the help of distinct operator to resolve inconsistency and record duplication in response. For more information, see [#2680](https://github.com/microsoft/fhir-server/pull/2680). | +Fixes the issue with the help of distinct operator to resolve inconsistency and record duplication in response. For more information, see [#2680](https://github.com/microsoft/fhir-server/pull/2680). ## May 2022 -### FHIR service +#### FHIR service -#### **Bug fixes** +**Removes SQL retry on upsert** -|Bug fixes |Related information | -| :-- | : | -|Removes SQL retry on upsert |Removes retry on SQL command for upsert. The error still occurs, but data is saved correctly in success cases. For more information, see [#2571](https://github.com/microsoft/fhir-server/pull/2571). | -|Added handling for SqlTruncate errors |Added a check for SqlTruncate exceptions and tests. In particular, exceptions and tests will catch SqlTruncate exceptions for Decimal type based on the specified precision and scale. For more information, see [#2553](https://github.com/microsoft/fhir-server/pull/2553). | +Removes retry on SQL command for upsert. The error still occurs, but data is saved correctly in success cases. For more information, see [#2571](https://github.com/microsoft/fhir-server/pull/2571). -### DICOM service +**Added handling for SqlTruncate errors** -#### **Features** +Added a check for SqlTruncate exceptions and tests. In particular, exceptions and tests will catch SqlTruncate exceptions for Decimal type based on the specified precision and scale. For more information, see [#2553](https://github.com/microsoft/fhir-server/pull/2553). ++#### DICOM service ++**DICOM service supports cross-origin resource sharing (CORS)** ++DICOM service now supports [CORS](./../healthcare-apis/dicom/configure-cross-origin-resource-sharing.md). 
CORS allows you to configure settings so that applications from one domain (origin) can access resources from a different domain, known as a cross-domain request. ++**DICOMcast supports Private Link** ++DICOMcast has been updated to support Azure Health Data Services workspaces that have been configured to use [Private Link](./../healthcare-apis/healthcare-apis-configure-private-link.md). ++**UPS-RS supports Change and Retrieve work item** -|Enhancements | Related information | -| : | :- | -|DICOM service supports cross-origin resource sharing (CORS) |DICOM service now supports [CORS](./../healthcare-apis/dicom/configure-cross-origin-resource-sharing.md). CORS allows you to configure settings so that applications from one domain (origin) can access resources from a different domain, known as a cross-domain request. | -|DICOMcast supports Private Link |DICOMcast has been updated to support Azure Health Data Services workspaces that have been configured to use [Private Link](./../healthcare-apis/healthcare-apis-configure-private-link.md). | -|UPS-RS supports Change and Retrieve work item |Modality worklist (UPS-RS) endpoints have been added to support Change and Retrieve operations for work items. | -|API version is now required as part of the URI |All REST API requests to the DICOM service must now include the API version in the URI. For more information, see [API versioning for DICOM service](./../healthcare-apis/dicom/api-versioning-dicom-service.md). | +Modality worklist (UPS-RS) endpoints have been added to support Change and Retrieve operations for work items. -#### **Bug fixes** +**API version is now required as part of the URI** -|Bug fixes |Related information | -| :-- | : | -|Index the first value for DICOM tags that incorrectly specify multiple values |Attributes that are defined to have a single value but have specified multiple values will now be leniently accepted. The first value for such attributes will be indexed. | +All REST API requests to the DICOM service must now include the API version in the URI. For more information, see [API versioning for DICOM service](./../healthcare-apis/dicom/api-versioning-dicom-service.md). ++**Index the first value for DICOM tags that incorrectly specify multiple values** ++Attributes that are defined to have a single value but have specified multiple values will now be leniently accepted. The first value for such attributes will be indexed. ## April 2022 -### FHIR service +#### FHIR service -#### **Features and enhancements** -|Enhancements |Related information | -| :- | : | -|FHIRPath Patch |FHIRPath Patch was added as a feature to both the Azure API for FHIR. This implements FHIRPath Patch as defined on the [HL7](http://hl7.org/fhir/fhirpatch.html) website. | -|Handles invalid header on versioned update |When the versioning policy is set to "versioned-update", we required that the most recent version of the resource is provided in the request's if-match header on an update. The specified version must be in ETag format. Previously, a 500 would be returned if the version was invalid or in an incorrect format. This update now returns a 400 Bad Request. For more information, see [PR #2467](https://github.com/microsoft/fhir-server/pull/2467). | -|Bulk import in public preview |The bulk-import feature enables importing FHIR data to the FHIR server at high throughput using the $import operation. It's designed for initial data load into the FHIR server. 
For more information, see [Bulk-import FHIR data (Preview)](./../healthcare-apis/fhir/import-data.md). | +** Added FHIRPath Patch** +FHIRPath Patch was added as a feature to both the Azure API for FHIR. This implements FHIRPath Patch as defined on the [HL7](http://hl7.org/fhir/fhirpatch.html) website. -#### **Bug fixes** -|Bug fixes |Related information | -| :-- | : | -|Adds core to resource path |Part of the path to a string resource was accidentally removed in the versioning policy. This fix adds it back in. For more information, see [PR #2470](https://github.com/microsoft/fhir-server/pull/2470). | +**Handles invalid header on versioned update** -#### **Known issues** + When the versioning policy is set to "versioned-update", we required that the most recent version of the resource is provided in the request's if-match header on an update. The specified version must be in ETag format. Previously, a 500 would be returned if the version was invalid or in an incorrect format. This update now returns a 400 Bad Request. For more information, see [PR #2467](https://github.com/microsoft/fhir-server/pull/2467). -For more information about the currently known issues with the FHIR service, see [Known issues: FHIR service](known-issues.md). -### DICOM service +**Bulk import in public preview** +The bulk-import feature enables importing FHIR data to the FHIR server at high throughput using the $import operation. It's designed for initial data load into the FHIR server. For more information, see [Bulk-import FHIR data (Preview)](./../healthcare-apis/fhir/import-data.md). -#### **Bug fixes** -|Bug fixes |Related information | -| :-- | : | -|Reduce the strictness of validation applied to incoming DICOM files |When value representation (VR) is a decimal string (DS)/ integer string (IS), `fo-dicom` serialization treats value as a number. Customer DICOM files could be old and contains invalid numbers. Our service blocks such file upload due to the serialization exception. For more information, see [PR #1450](https://github.com/microsoft/dicom-server/pull/1450). | -|Correctly parse a range of input in the content negotiation headers |Currently, WADO with Accept: multipart/related; type=application/dicom will throw an error. It will accept Accept: multipart/related; type="application/dicom", but they should be equivalent. For more information, see [PR #1462](https://github.com/microsoft/dicom-server/pull/1462). | -|Fixed an issue where parallel upload of images in a study could fail under certain circumstances |Handle race conditions during parallel instance inserts in the same study. For more information, see [PR #1491](https://github.com/microsoft/dicom-server/pull/1491) and [PR #1496](https://github.com/microsoft/dicom-server/pull/1496). | +**Added back the core to resource path** -## March 2022 + Part of the path to a string resource was accidentally removed in the versioning policy. This fix adds it back in. For more information, see [PR #2470](https://github.com/microsoft/fhir-server/pull/2470). -### Azure Health Data Services -#### **Features** -|Feature |Related information | -| :- | : | -|Private Link |The Private Link feature is now available. With Private Link, you can access Azure Health Data Services securely from your VNet as a first-party service without having to go through a public Domain Name System (DNS). For more information, see [Configure Private Link for Azure Health Data Services](./../healthcare-apis/healthcare-apis-configure-private-link.md). 
| +#### DICOM service +++**Reduced the strictness of validation applied to incoming DICOM files** ++When value representation (VR) is a decimal string (DS)/ integer string (IS), `fo-dicom` serialization treats value as a number. Customer DICOM files could be old and contains invalid numbers. Our service blocks such file upload due to the serialization exception. For more information, see [PR #1450](https://github.com/microsoft/dicom-server/pull/1450). ++**Correctly parse a range of input in the content negotiation headers** ++Currently, WADO with Accept: multipart/related; type=application/dicom will throw an error. It will accept Accept: multipart/related; type="application/dicom", but they should be equivalent. For more information, see [PR #1462](https://github.com/microsoft/dicom-server/pull/1462). +++**Fixed an issue where parallel upload of images in a study could fail under certain circumstances** ++Handle race conditions during parallel instance inserts in the same study. For more information, see [PR #1491](https://github.com/microsoft/dicom-server/pull/1491) and [PR #1496](https://github.com/microsoft/dicom-server/pull/1496). ++## March 2022 ++#### Azure Health Data Services ++**Private Link is now available** +With Private Link, you can access Azure Health Data Services securely from your VNet as a first-party service without having to go through a public Domain Name System (DNS). For more information, see [Configure Private Link for Azure Health Data Services](./../healthcare-apis/healthcare-apis-configure-private-link.md). ### FHIR service -#### **Features** +**FHIRPath Patch operation available** +|This new feature enables you to use the FHIRPath Patch operation on FHIR resources. For more information, see [FHIR REST API capabilities for Azure Health Data Services FHIR service](./../healthcare-apis/fhir/fhir-rest-api-capabilities.md). +++**SQL timeout that returns 408 status code** +Previously, a SQL timeout would return a 500. Now a timeout in SQL will return a FHIR OperationOutcome with a 408 status code. For more information, see [PR #2497](https://github.com/microsoft/fhir-server/pull/2497). ++**Fixed issue related to duplicate resources in search with `_include`** +Fixed issue where a single resource can be returned twice in a search that has `_include`. For more information, see [PR #2448](https://github.com/microsoft/fhir-server/pull/2448). +++**Fixed issue PUT creates on versioned update** +Fixed issue where creates with PUT resulted in an error when the versioning policy is configured to `versioned-update`. For more information, see [PR #2457](https://github.com/microsoft/fhir-server/pull/2457). -|Feature | Related information | -| : | :- | -|FHIRPath Patch |This new feature enables you to use the FHIRPath Patch operation on FHIR resources. For more information, see [FHIR REST API capabilities for Azure Health Data Services FHIR service](./../healthcare-apis/fhir/fhir-rest-api-capabilities.md). | +**Invalid header handling on versioned update** -#### **Bug fixes** + Fixed issue where invalid `if-match` header would result in an HTTP 500 error. Now an HTTP Bad Request is returned instead. For more information, see [PR #2467](https://github.com/microsoft/fhir-server/pull/2467). -|Bug fixes |Related information | -| :-- | : | -|SQL timeout returns 408 |Previously, a SQL timeout would return a 500. Now a timeout in SQL will return a FHIR OperationOutcome with a 408 status code. For more information, see [PR #2497](https://github.com/microsoft/fhir-server/pull/2497). 
| -|Duplicate resources in search with `_include` |Fixed issue where a single resource can be returned twice in a search that has `_include`. For more information, see [PR #2448](https://github.com/microsoft/fhir-server/pull/2448). | -|PUT creates on versioned update |Fixed issue where creates with PUT resulted in an error when the versioning policy is configured to `versioned-update`. For more information, see [PR #2457](https://github.com/microsoft/fhir-server/pull/2457). | -|Invalid header handling on versioned update |Fixed issue where invalid `if-match` header would result in an HTTP 500 error. Now an HTTP Bad Request is returned instead. For more information, see [PR #2467](https://github.com/microsoft/fhir-server/pull/2467). | +#### MedTech service -### MedTech service +**The Events feature within Health Data Services is now generally available (GA).** -#### **Features and enhancements** + The Events feature allows customers to receive notifications and triggers when FHIR observations are created, updated, or deleted. For more information, see [Events message structure](./../healthcare-apis/events/events-message-structure.md) and [What are events?](./../healthcare-apis/events/events-overview.md). -|Enhancements | Related information | -| : | :- | -|Events |The Events feature within Health Data Services is now generally available (GA). The Events feature allows customers to receive notifications and triggers when FHIR observations are created, updated, or deleted. For more information, see [Events message structure](./../healthcare-apis/events/events-message-structure.md) and [What are events?](./../healthcare-apis/events/events-overview.md). | -|Events documentation for Azure Health Data Services |Updated docs to allow for better understanding, knowledge, and help for Events as it went GA. Updated troubleshooting for ease of use for the customer. | -|One touch deploy button for MedTech service launch in the portal |Enables easier deployment and use of MedTech service for customers without the need to go back and forth between pages or interfaces. | +**Events documentation for Azure Health Data Services** +Updated docs to allow for better understanding, knowledge, and help for Events as it went GA. Updated troubleshooting for ease of use for the customer. ++**One touch deploy button for MedTech service launch in the portal** +Enables easier deployment and use of MedTech service for customers without the need to go back and forth between pages or interfaces. ## January 2022 -#### **Features and enhancements** -|Enhancement |Related information | -| :- | : | -|Export FHIR data behind firewalls |This new feature enables exporting FHIR data to storage accounts behind firewalls. For more information, see [Configure export settings and set up a storage account](./././fhir/configure-export-data.md). | -|Deploy Azure Health Data Services with Azure Bicep |This new feature enables you to deploy Azure Health Data Services using Azure Bicep. For more information, see [Deploy Azure Health Data Services using Azure Bicep](deploy-healthcare-apis-using-bicep.md). | +**Export FHIR data behind firewalls** +This new feature enables exporting FHIR data to storage accounts behind firewalls. For more information, see [Configure export settings and set up a storage account](./././fhir/configure-export-data.md). -### DICOM service +**Deploy Azure Health Data Services with Azure Bicep** +This new feature enables you to deploy Azure Health Data Services using Azure Bicep. 
For more information, see [Deploy Azure Health Data Services using Azure Bicep](deploy-healthcare-apis-using-bicep.md). -#### **Feature enhancements** +#### DICOM service +++**Customers can define their own query tags using the Extended Query Tags feature** -|Enhancements | Related information | -| : | :- | -|Customers can define their own query tags using the Extended Query Tags feature |With Extended Query Tags feature, customers now efficiently query non-DICOM metadata for capabilities like multi-tenancy and cohorts. It's available for all customers in Azure Health Data Services. | +With Extended Query Tags feature, customers now efficiently query non-DICOM metadata for capabilities like multi-tenancy and cohorts. It's available for all customers in Azure Health Data Services. ## December 2021 -### Azure Health Data Services +#### Azure Health Data Services -#### **Features and enhancements** -|Enhancements |Related information | -| :- | : | -|Quota details for support requests |We've updated the quota details for customer support requests with the latest information. | -|Local RBAC |We've updated the local RBAC documentation to clarify the use of the secondary tenant and the steps to disable it. | -|Deploy and configure Azure Health Data Services using scripts |We've started the process of providing PowerShell, CLI scripts, and ARM templates to configure app registration and role assignments. Scripts for deploying Azure Health Data Services will be available after GA. | +**Quota details for support requests** +We've updated the quota details for customer support requests with the latest information. -### FHIR service +**Local RBAC documentation updated ** ++We've updated the local RBAC documentation to clarify the use of the secondary tenant and the steps to disable it. ++**Deploy and configure Azure Health Data Services using scripts** ++We've started the process of providing PowerShell, CLI scripts, and ARM templates to configure app registration and role assignments. Scripts for deploying Azure Health Data Services will be available after GA. -#### **Features and enhancements** +#### FHIR service -|Enhancements | Related information | -| : | :- | -|Added Publisher to `CapabilityStatement.name` |You can now find the publisher in the capability statement at `CapabilityStatement.name`. [#2319](https://github.com/microsoft/fhir-server/pull/2319) | -|Log `FhirOperation` linked to anonymous calls to Request metrics |We weren't logging operations that didn’t require authentication. We extended the ability to get `FhirOperation` type in `RequestMetrics` for anonymous calls. [#2295](https://github.com/microsoft/fhir-server/pull/2295) | -#### **Bug fixes** +**Added Publisher to `CapabilityStatement.name`** -|Bug fixes |Related information | -| :-- | : | -|Fixed 500 error when `SearchParameter` Code is null |Fixed an issue with `SearchParameter` if it had a null value for Code, the result would be a 500. Now it will result in an `InvalidResourceException` like the other values do. [#2343](https://github.com/microsoft/fhir-server/pull/2343) | -|Returned `BadRequestException` with valid message when input JSON body is invalid |For invalid JSON body requests, the FHIR server was returning a 500 error. Now we'll return a `BadRequestException` with a valid message instead of 500. [#2239](https://github.com/microsoft/fhir-server/pull/2239) | -|Handled SQL Timeout issue |If SQL Server timed out, the PUT `/resource{id}` returned a 500 error. 
Now we handle the 500 error and return a timeout exception with an operation outcome. [#2290](https://github.com/microsoft/fhir-server/pull/2290) | +You can now find the publisher in the capability statement at `CapabilityStatement.name`. [#2319](https://github.com/microsoft/fhir-server/pull/2319) +++**Log `FhirOperation` linked to anonymous calls to Request metrics** ++ We weren't logging operations that didn’t require authentication. We extended the ability to get `FhirOperation` type in `RequestMetrics` for anonymous calls. [#2295](https://github.com/microsoft/fhir-server/pull/2295) ++**Fixed 500 error when `SearchParameter` Code is null** ++Fixed an issue with `SearchParameter` if it had a null value for Code, the result would be a 500. Now it will result in an `InvalidResourceException` like the other values do. [#2343](https://github.com/microsoft/fhir-server/pull/2343) + +**Returned `BadRequestException` with valid message when input JSON body is invalid** ++For invalid JSON body requests, the FHIR server was returning a 500 error. Now we'll return a `BadRequestException` with a valid message instead of 500. [#2239](https://github.com/microsoft/fhir-server/pull/2239) +++**Handled SQL Timeout issue** ++If SQL Server timed out, the PUT `/resource{id}` returned a 500 error. Now we handle the 500 error and return a timeout exception with an operation outcome. [#2290](https://github.com/microsoft/fhir-server/pull/2290) ## November 2021 -### FHIR service +#### FHIR service #### **Feature enhancements** -| Enhancements | Related information | -| :- | :-- | -|Process Patient-everything links |We've expanded the Patient-everything capabilities to process patient links [#2305](https://github.com/microsoft/fhir-server/pull/2305). For more information, see [Patient-everything in FHIR](./../healthcare-apis/fhir/patient-everything.md#processing-patient-links) documentation. | -|Added software name and version to capability statement. |In the capability statement, the software name now distinguishes if you're using Azure API for FHIR or Azure Health Data Services. The software version will now specify which open-source [release package](https://github.com/microsoft/fhir-server/releases) is live in the managed service [#2294](https://github.com/microsoft/fhir-server/pull/2294). Addresses: [#1778](https://github.com/microsoft/fhir-server/issues/1778) and [#2241](https://github.com/microsoft/fhir-server/issues/2241) | -|Compress continuation tokens |In certain instances, the continuation token was too long to be able to follow the [next link](./../healthcare-apis/fhir/overview-of-search.md#pagination) in searches and would result in a 404. To resolve this, we compressed the continuation token to ensure it stays below the size limit [#2279](https://github.com/microsoft/fhir-server/pull/2279). Addresses issue [#2250](https://github.com/microsoft/fhir-server/issues/2250). | -|FHIR service autoscale |The [FHIR service autoscale](./fhir/fhir-service-autoscale.md) is designed to provide optimized service scalability automatically to meet customer demands when they perform data transactions in consistent or various workloads at any time. It's available in all [regions](https://azure.microsoft.com/global-infrastructure/services/) where the FHIR service is supported. | +**Process Patient-everything links** -#### **Bug fixes** +We've expanded the Patient-everything capabilities to process patient links [#2305](https://github.com/microsoft/fhir-server/pull/2305). 
For more information, see [Patient-everything in FHIR](./../healthcare-apis/fhir/patient-everything.md#processing-patient-links) documentation. -|Bug fixes |Related information | -| :-- | : | -|Resolved 500 error when the date was passed with a time zone. |This fix addresses a 500 error when a date with a time zone was passed into a datetime field [#2270](https://github.com/microsoft/fhir-server/pull/2270). | -|Resolved issue when posting a bundle with incorrect Media Type returned a 500 error. |Previously when posting a search with a key that contains certain characters, a 500 error is returned. This fixes issue [#2264](https://github.com/microsoft/fhir-server/pull/2264) and addresses [#2148](https://github.com/microsoft/fhir-server/issues/2148). | +**Added software name and version to capability statement.** +In the capability statement, the software name now distinguishes if you're using Azure API for FHIR or Azure Health Data Services. The software version will now specify which open-source [release package](https://github.com/microsoft/fhir-server/releases) is live in the managed service [#2294](https://github.com/microsoft/fhir-server/pull/2294). Addresses: [#1778](https://github.com/microsoft/fhir-server/issues/1778) and [#2241](https://github.com/microsoft/fhir-server/issues/2241) -### DICOM service -#### **Feature enhancements** +**Compress continuation tokens** ++In certain instances, the continuation token was too long to be able to follow the [next link](./../healthcare-apis/fhir/overview-of-search.md#pagination) in searches and would result in a 404. To resolve this, we compressed the continuation token to ensure it stays below the size limit [#2279](https://github.com/microsoft/fhir-server/pull/2279). Addresses issue [#2250](https://github.com/microsoft/fhir-server/issues/2250). ++**FHIR service autoscale** ++The [FHIR service autoscale](./fhir/fhir-service-autoscale.md) is designed to provide optimized service scalability automatically to meet customer demands when they perform data transactions in consistent or various workloads at any time. It's available in all [regions](https://azure.microsoft.com/global-infrastructure/services/) where the FHIR service is supported. + -|Enhancements | Related information | -| : | :- | -|Content-Type header now includes transfer-syntax. |This enhancement enables the user to know which transfer syntax is used in case multiple accept headers are being supplied. | +**Resolved 500 error when the date was passed with a time zone.** ++This fix addresses a 500 error when a date with a time zone was passed into a datetime field [#2270](https://github.com/microsoft/fhir-server/pull/2270). +++**Resolved issue when posting a bundle with incorrect Media Type returned a 500 error.** ++Previously when posting a search with a key that contains certain characters, a 500 error is returned. This fixes issue [#2264](https://github.com/microsoft/fhir-server/pull/2264) and addresses [#2148](https://github.com/microsoft/fhir-server/issues/2148). ++#### DICOM service ++**Content-Type header now includes transfer-syntax.** ++This enhancement enables the user to know which transfer syntax is used in case multiple accept headers are being supplied. 
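As an illustration of the transfer-syntax behavior described just above, the sketch below retrieves a DICOM instance over DICOMweb and prints the response Content-Type. The service URL, token, and UIDs are placeholders; the Accept value uses the quoted `type` parameter form, which the content-negotiation fix noted earlier treats the same as the unquoted form.

```python
import requests

# Placeholder values -- substitute your DICOM service URL, token, and instance UIDs.
BASE_URL = "https://<workspace>-<dicom-service>.dicom.azurehealthcareapis.com/v1"
TOKEN = "<access-token>"
INSTANCE_PATH = "/studies/<study-uid>/series/<series-uid>/instances/<instance-uid>"

response = requests.get(
    BASE_URL + INSTANCE_PATH,
    headers={
        "Authorization": f"Bearer {TOKEN}",
        # Ask for the instance as DICOM Part 10, accepting any transfer syntax.
        "Accept": 'multipart/related; type="application/dicom"; transfer-syntax=*',
    },
)

# With this enhancement, the returned Content-Type states which transfer syntax
# was actually used, which helps when several accept headers were supplied.
print(response.status_code)
print(response.headers.get("Content-Type"))
```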
## October 2021 -### Azure Health Data Services +#### Azure Health Data Services -#### **Feature enhancements** +**Test Data Generator tool** -| Enhancements | Related information | -| :- | :-- | -|Test Data Generator tool |We've updated Azure Health Data Services GitHub samples repo to include a [Test Data Generator tool](https://github.com/microsoft/healthcare-apis-samples/blob/main/docs/HowToRunPerformanceTest.md) using Synthea data. This tool is an improvement to the open source [public test projects](https://github.com/ShadowPic/PublicTestProjects), based on Apache JMeter that can be deployed to Azure AKS for performance tests. | +We've updated Azure Health Data Services GitHub samples repo to include a [Test Data Generator tool](https://github.com/microsoft/healthcare-apis-samples/blob/main/docs/HowToRunPerformanceTest.md) using Synthea data. This tool is an improvement to the open source [public test projects](https://github.com/ShadowPic/PublicTestProjects), based on Apache JMeter that can be deployed to Azure AKS for performance tests. -### FHIR service +#### FHIR service ++++**Added support for [_sort](././../healthcare-apis/fhir/overview-of-search.md#search-result-parameters) on strings and dateTime.** +[#2169](https://github.com/microsoft/fhir-server/pull/2169) +++**Fixed issue where [Conditional Delete](././../healthcare-apis/fhir/fhir-rest-api-capabilities.md#conditional-delete) could result in an infinite loop.**[#2269](https://github.com/microsoft/fhir-server/pull/2269) -#### **Feature enhancements** -|Enhancements | Related information | -| : | :- | -|Added support for [_sort](././../healthcare-apis/fhir/overview-of-search.md#search-result-parameters) on strings and dateTime. |[#2169](https://github.com/microsoft/fhir-server/pull/2169) | +**Resolved 500 error possibly caused by a malformed transaction body in a bundle POST.** We've added a check that the URL is populated in the [transaction bundle](././..//healthcare-apis/fhir/fhir-features-supported.md#rest-api) requests.**[#2255](https://github.com/microsoft/fhir-server/pull/2255) -#### **Bug fixes** +#### **DICOM service** -|Bug fixes | Related information | -| : | :- | -|Fixed issue where [Conditional Delete](././../healthcare-apis/fhir/fhir-rest-api-capabilities.md#conditional-delete) could result in an infinite loop. | [#2269](https://github.com/microsoft/fhir-server/pull/2269) | -|Resolved 500 error possibly caused by a malformed transaction body in a bundle POST. We've added a check that the URL is populated in the [transaction bundle](././..//healthcare-apis/fhir/fhir-features-supported.md#rest-api) requests. | [#2255](https://github.com/microsoft/fhir-server/pull/2255) | -### **DICOM service** +**Regions** ++**South Brazil and Central Canada.** For more information about Azure regions and availability zones, see [Azure services that support availability zones](https://azure.microsoft.com/global-infrastructure/services/). +++**Extended Query tags** +DateTime (DT) and Time (TM) Value Representation (VR) types +++**Implemented fix to workspace names.** +Enabled DICOM service to work with workspaces that have names beginning with a letter. -|Added support | Related information | -| : | :- | -|Regions | South Brazil and Central Canada. For more information about Azure regions and availability zones, see [Azure services that support availability zones](https://azure.microsoft.com/global-infrastructure/services/). 
| -|Extended Query tags |DateTime (DT) and Time (TM) Value Representation (VR) types | -|Bug fixes | Related information | -| : | :- | -|Implemented fix to workspace names. |Enabled DICOM service to work with workspaces that have names beginning with a letter. | ## September 2021 -### FHIR service +#### FHIR service -#### **Feature enhancements** -|Enhancements | Related information | +|Enhancements | | | :- | :- | |Added support for conditional patch | [Conditional patch](./././azure-api-for-fhir/fhir-rest-api-capabilities.md#patch-and-conditional-patch)| |Conditional patch | [#2163](https://github.com/microsoft/fhir-server/pull/2163) | |Added conditional patch audit event. | [#2213](https://github.com/microsoft/fhir-server/pull/2213) | -|Allow JSON patch in bundles | [JSON patch in bundles](./././azure-api-for-fhir/fhir-rest-api-capabilities.md#json-patch-in-bundles)| +|Allow JSON patch in bundles | | | :- | :-| |Allows for search history bundles with Patch requests. |[#2156](https://github.com/microsoft/fhir-server/pull/2156) | |Enabled JSON patch in bundles using Binary resources. |[#2143](https://github.com/microsoft/fhir-server/pull/2143) | |Added new audit event [OperationName subtypes](./././azure-api-for-fhir/enable-diagnostic-logging.md#audit-log-details)| [#2170](https://github.com/microsoft/fhir-server/pull/2170) | -| Running a reindex job | [Re-index improvements](./././fhir/how-to-run-a-reindex.md)| +| Running a reindex job | | | :- | :-| |Added [boundaries for reindex](./././azure-api-for-fhir/how-to-run-a-reindex.md#performance-considerations) parameters. |[#2103](https://github.com/microsoft/fhir-server/pull/2103)| |Updated error message for reindex parameter boundaries. |[#2109](https://github.com/microsoft/fhir-server/pull/2109)| |Added final reindex count check. 
|[#2099](https://github.com/microsoft/fhir-server/pull/2099)| -#### **Bug fixes** --|Bug fixes | Related information | +|Bug fixes | | | :- | :-- | | Wider catch for exceptions during applying patch | [#2192](https://github.com/microsoft/fhir-server/pull/2192)| |Fix history with PATCH in STU3 |[#2177](https://github.com/microsoft/fhir-server/pull/2177) | -|Custom search bugs | Related information | +|Custom search bugs | | | :- | :- | |Addresses the delete failure with Custom Search parameters |[#2133](https://github.com/microsoft/fhir-server/pull/2133) | |Added retry logic while Deleting Search parameter | [#2121](https://github.com/microsoft/fhir-server/pull/2121)| |Set max item count in search options in SearchParameterDefinitionManager |[#2141](https://github.com/microsoft/fhir-server/pull/2141) | |Better exception if there's a bad expression in a search parameter |[#2157](https://github.com/microsoft/fhir-server/pull/2157) | -|Resolved SQL batch reindex if one resource fails | Related information | +|Resolved SQL batch reindex if one resource fails | | | :- | :- | |Updates SQL batch reindex retry logic |[#2118](https://github.com/microsoft/fhir-server/pull/2118) | -|GitHub issues closed | Related information | +|GitHub issues closed | | | :- | :- | |Unclear error message for conditional create with no ID |[#2168](https://github.com/microsoft/fhir-server/issues/2168) | -### **DICOM service** +#### **DICOM service** -#### **Bug fixes** +**Implemented fix to resolve QIDO paging-ordering issues** | [#989](https://github.com/microsoft/dicom-server/pull/989) | -|Bug fixes | Related information | -| :- | :- | -|Implemented fix to resolve QIDO paging-ordering issues | [#989](https://github.com/microsoft/dicom-server/pull/989) | -| :- | :- | +#### **MedTech service** +++**MedTech service normalized improvements with calculations to support and enhance health data standardization.** ++See [Use Device mappings](./../healthcare-apis/iot/how-to-use-device-mappings.md) and [Calculated Content Templates](./../healthcare-apis/iot/how-to-use-calculatedcontenttemplate-mappings.md) -### **MedTech service** -#### **Bug fixes** -|Bug fixes | Related information | -| :- | :- | -| MedTech service normalized improvements with calculations to support and enhance health data standardization. | See [Use Device mappings](./../healthcare-apis/iot/how-to-use-device-mappings.md) and [Calculated Content Templates](./../healthcare-apis/iot/how-to-use-calculatedcontenttemplate-mappings.md) | ## Next steps + In this article, you learned about the features and enhancements made to Azure Health Data Services. For more information about the known issues with Azure Health Data Services, see >[!div class="nextstepaction"] |
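For readers who want to see what the conditional patch and JSON Patch support listed under September 2021 looks like in practice, here is a small hedged sketch. The endpoint, token, and search parameter are placeholders; the linked conditional patch documentation remains the authoritative reference.

```python
import requests

# Placeholder values -- substitute your FHIR endpoint and a valid access token.
FHIR_URL = "https://<your-fhir-endpoint>"
TOKEN = "<access-token>"

# A JSON Patch document: update the birthDate of whichever Patient matches the search below.
json_patch = [{"op": "replace", "path": "/birthDate", "value": "1974-12-25"}]

# Conditional patch selects the target resource by search parameters instead of by ID.
response = requests.patch(
    f"{FHIR_URL}/Patient",
    params={"identifier": "http://example.org/mrn|12345"},
    json=json_patch,
    headers={
        "Authorization": f"Bearer {TOKEN}",
        "Content-Type": "application/json-patch+json",
    },
)
print(response.status_code, response.reason)
```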
iot-central | Concepts Device Implementation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/concepts-device-implementation.md | The [Azure IoT device SDKs](#device-sdks) include support for the IoT Plug and P ### Device model -A device model is defined by using the [DTDL V2](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/dtdlv2.md) modeling language. This language lets you define: +A device model is defined by using the [DTDL V2](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/DTDL.v2.md) modeling language. This language lets you define: - The telemetry the device sends. The definition includes the name and data type of the telemetry. For example, a device sends temperature telemetry as a double. - The properties the device reports to IoT Central. A property definition includes its name and data type. For example, a device reports the state of a valve as a Boolean. |
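As a concrete companion to the telemetry and property examples in the row above (temperature as a double, valve state as a Boolean), here is a minimal DTDL V2 interface built as a Python dictionary and printed as JSON. The DTMI and names are invented for illustration.

```python
import json

# A minimal DTDL V2 interface: temperature telemetry (double) and a Boolean "valveOpen" property.
device_model = {
    "@context": "dtmi:dtdl:context;2",
    "@id": "dtmi:com:example:SimpleValveDevice;1",
    "@type": "Interface",
    "displayName": "Simple valve device",
    "contents": [
        {
            "@type": "Telemetry",
            "name": "temperature",
            "schema": "double",
        },
        {
            "@type": "Property",
            "name": "valveOpen",
            "schema": "boolean",
            "writable": False,
        },
    ],
}

print(json.dumps(device_model, indent=2))
```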
iot-central | Concepts Device Templates | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/concepts-device-templates.md | To learn more about editing a device model, see [Edit an existing device templat A solution developer can also export a JSON file from the device template that contains a complete device model or individual interface. A device developer can use this JSON document to understand how the device should communicate with the IoT Central application. -The JSON file that defines the device model uses the [Digital Twin Definition Language (DTDL) V2](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/dtdlv2.md). IoT Central expects the JSON file to contain the device model with the interfaces defined inline, rather than in separate files. To learn more, see [IoT Plug and Play modeling guide](../../iot-develop/concepts-modeling-guide.md). +The JSON file that defines the device model uses the [Digital Twin Definition Language (DTDL) V2](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/DTDL.v2.md). IoT Central expects the JSON file to contain the device model with the interfaces defined inline, rather than in separate files. To learn more, see [IoT Plug and Play modeling guide](../../iot-develop/concepts-modeling-guide.md). A typical IoT device is made up of: |
iot-central | Concepts Faq Apaas Paas | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/concepts-faq-apaas-paas.md | After the migration, devices aren't automatically deleted from the IoT Central a So that you can seamlessly migrate devices from your IoT Central applications to PaaS solution, follow these guidelines: -- The device must be an IoT Plug and Play device that uses a [Digital Twins Definition Language (DTDL) V2](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/dtdlv2.md) model. IoT Central requires all devices to have a DTDL model. These models simplify the interoperability between an IoT PaaS solution and IoT Central.+- The device must be an IoT Plug and Play device that uses a [Digital Twins Definition Language (DTDL) V2](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/DTDL.v2.md) model. IoT Central requires all devices to have a DTDL model. These models simplify the interoperability between an IoT PaaS solution and IoT Central. - The device must follow the [IoT Central data formats for telemetry, property, and commands](concepts-telemetry-properties-commands.md). |
iot-central | Concepts Telemetry Properties Commands | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/concepts-telemetry-properties-commands.md | Each example shows a snippet from the device model that defines the type and exa > [!NOTE] > IoT Central accepts any valid JSON but it can only be used for visualizations if it matches a definition in the device model. You can export data that doesn't match a definition, see [Export IoT data to cloud destinations using Blob Storage](howto-export-to-blob-storage.md). -The JSON file that defines the device model uses the [Digital Twin Definition Language (DTDL) V2](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/dtdlv2.md). +The JSON file that defines the device model uses the [Digital Twin Definition Language (DTDL) V2](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/DTDL.v2.md). For sample device code that shows some of these payloads in use, see the [Create and connect a client application to your Azure IoT Central application](tutorial-connect-device.md) tutorial. IoT Central lets you view the raw data that a device sends to an application. Th ## Telemetry -To learn more about the DTDL telemetry naming rules, see [DTDL > Telemetry](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/dtdlv2.md#telemetry). You can't start a telemetry name using the `_` character. +To learn more about the DTDL telemetry naming rules, see [DTDL > Telemetry](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/DTDL.v2.md#telemetry). You can't start a telemetry name using the `_` character. Don't create telemetry types with the following names. IoT Central uses these reserved names internally. If you try to use these names, IoT Central will ignore your data: The following snippet from a device model shows the definition of a `geopoint` t ``` > [!NOTE]-> The **geopoint** schema type is not part of the [Digital Twins Definition Language specification](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/dtdlv2.md). IoT Central currently supports the **geopoint** schema type and the **location** semantic type for backwards compatibility. +> The **geopoint** schema type is not part of the [Digital Twins Definition Language specification](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/DTDL.v2.md). IoT Central currently supports the **geopoint** schema type and the **location** semantic type for backwards compatibility. A device client should send the telemetry as JSON that looks like the following example. IoT Central displays the value as a pin on a map: A device client should send the state as JSON that looks like the following exam ## Properties -To learn more about the DTDL property naming rules, see [DTDL > Property](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/dtdlv2.md#property). You can't start a property name using the `_` character. +To learn more about the DTDL property naming rules, see [DTDL > Property](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/DTDL.v2.md#property). You can't start a property name using the `_` character. > [!NOTE] > The payload formats for properties applies to applications created on or after 07/14/2020. 
The following snippet from a device model shows the definition of a `geopoint` p ``` > [!NOTE]-> The **geopoint** schema type is not part of the [Digital Twins Definition Language specification](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/dtdlv2.md). IoT Central currently supports the **geopoint** schema type and the **location** semantic type for backwards compatibility. +> The **geopoint** schema type is not part of the [Digital Twins Definition Language specification](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/DTDL.v2.md). IoT Central currently supports the **geopoint** schema type and the **location** semantic type for backwards compatibility. A device client should send a JSON payload that looks like the following example as a reported property in the device twin: The device should send the following JSON payload to IoT Central after it proces ## Commands -To learn more about the DTDL command naming rules, see [DTDL > Command](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/dtdlv2.md#command). You can't start a command name using the `_` character. +To learn more about the DTDL command naming rules, see [DTDL > Command](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/DTDL.v2.md#command). You can't start a command name using the `_` character. If the command is defined in a component, the name of the command the device receives includes the component name. For example, if the command is called `getMaxMinReport` and the component is called `thermostat2`, the device receives a request to execute a command called `thermostat2*getMaxMinReport`. |
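The component-prefixed command name described at the end of this row (`thermostat2*getMaxMinReport`) can be split with a few lines of device-side code. This is only an illustrative sketch of the parsing logic, not code from the linked article.

```python
from typing import Optional, Tuple


def split_command_name(method_name: str) -> Tuple[Optional[str], str]:
    """Split an IoT Central command name into (component, command).

    Commands defined in a component arrive as '<component>*<command>';
    commands on the default component arrive without a prefix.
    """
    component, separator, command = method_name.partition("*")
    if not separator:
        return None, method_name
    return component, command


print(split_command_name("thermostat2*getMaxMinReport"))  # ('thermostat2', 'getMaxMinReport')
print(split_command_name("getMaxMinReport"))              # (None, 'getMaxMinReport')
```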
iot-central | Howto Configure Rules Advanced | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-configure-rules-advanced.md | The Azure IoT Central V3 connector for Power Automate and Azure Logic Apps lets - When a rule fires in your Azure IoT Central app, it can trigger a workflow in Power Automate or Azure Logic Apps. These workflows can run actions in other cloud services, such as Microsoft 365 or a third-party service. - An event in another cloud service, such as Microsoft 365, can trigger a workflow in Power Automate or Azure Logic Apps. These workflows can run actions or retrieve data from your IoT Central application.-- Azure IoT Central V3 connector aligns with the generally available [1.0 REST API](/rest/api/iotcentral/) surface. All of the connector actions support the [DTDLv2 format](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/dtdlv2.md) and support for DTDLv1 based models is being deprecated. For the latest information and details of recent updates, see the [Release notes](/connectors/azureiotcentral/#release-notes) for the current connector version.+- Azure IoT Central V3 connector aligns with the generally available [1.0 REST API](/rest/api/iotcentral/) surface. All of the connector actions support the [DTDLv2 format](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/DTDL.v2.md) and support for DTDLv1 based models is being deprecated. For the latest information and details of recent updates, see the [Release notes](/connectors/azureiotcentral/#release-notes) for the current connector version. ## Prerequisites |
iot-central | Howto Manage Dashboards With Rest Api | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-manage-dashboards-with-rest-api.md | Use the following request to create a dashboard. PUT https://{your app subdomain}.azureiotcentral.com/api/dashboards/{dashboardId}?api-version=2022-10-31-preview ``` -`dashboardId` - A unique [DTMI](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/dtdlv2.md#digital-twin-model-identifier) identifier for the dashboard. +`dashboardId` - A unique [DTMI](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/DTDL.v2.md#digital-twin-model-identifier) identifier for the dashboard. The request body has some required fields: |
iot-central | Howto Manage Device Templates With Rest Api | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-manage-device-templates-with-rest-api.md | To learn how to manage device templates by using the IoT Central UI, see [How to A device template contains a device model, cloud property definitions, and view definitions. The REST API lets you manage the device model and cloud property definitions. Use the UI to create and manage views. -The device model section of a device template specifies the capabilities of a device you want to connect to your application. Capabilities include telemetry, properties, and commands. The model is defined using [DTDL V2](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/dtdlv2.md). +The device model section of a device template specifies the capabilities of a device you want to connect to your application. Capabilities include telemetry, properties, and commands. The model is defined using [DTDL V2](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/DTDL.v2.md). ## Device templates REST API PUT https://{your app subdomain}/api/deviceTemplates/{deviceTemplateId}?api-vers ``` >[!NOTE]->Device template IDs follow the [DTDL](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/dtdlv2.md#digital-twin-model-identifier) naming convention, for example: `dtmi:contoso:mythermostattemplate;1` +>Device template IDs follow the [DTDL](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/DTDL.v2.md#digital-twin-model-identifier) naming convention, for example: `dtmi:contoso:mythermostattemplate;1` The following example shows a request body that adds a device template for a thermostat device. The `capabilityModel` includes temperature telemetry, two properties, and a command. The device template defines the `CustomerName` cloud property and customizes the `targetTemperature` property with `decimalPlaces`, `displayUnit`, `maxValue`, and `minValue`. The value of the device template `@id` must match the `deviceTemplateId` value in the URL. The value of the device template `@id` isn't the same as the value of the `capabilityModel` `@id` value. |
iot-central | Howto Set Up Template | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-set-up-template.md | You have several options to create device templates: - Import a device template from the [Azure Certified for IoT device catalog](https://aka.ms/iotdevcat). Optionally, customize the device template to your requirements in IoT Central. - When the device connects to IoT Central, have it send the model ID of the model it implements. IoT Central uses the model ID to retrieve the model from the model repository and to create a device template. Add any cloud properties and views your IoT Central application needs to the device template. - When the device connects to IoT Central, let IoT Central [autogenerate a device template](#autogenerate-a-device-template) definition from the data the device sends.-- Author a device model using the [Digital Twin Definition Language (DTDL) V2](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/dtdlv2.md). Manually import the device model into your IoT Central application. Then add the cloud properties and views your IoT Central application needs.+- Author a device model using the [Digital Twin Definition Language (DTDL) V2](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/DTDL.v2.md). Manually import the device model into your IoT Central application. Then add the cloud properties and views your IoT Central application needs. - You can also add device templates to an IoT Central application using the [How to use the IoT Central REST API to manage device templates](howto-manage-device-templates-with-rest-api.md) or the [CLI](howto-manage-iot-central-from-cli.md). > [!NOTE] |
iot-central | Howto Use Location Data | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-use-location-data.md | The following screenshot shows a device template with examples of a device prope :::image type="content" source="media/howto-use-location-data/location-device-template.png" alt-text="Screenshot showing location property definition in device template" lightbox="media/howto-use-location-data/location-device-template.png"::: -For reference, the [Digital Twins Definition Language (DTDL) V2](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/dtdlv2.md) definitions for these capabilities look like the following snippet: +For reference, the [Digital Twins Definition Language (DTDL) V2](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/DTDL.v2.md) definitions for these capabilities look like the following snippet: ```json { For reference, the [Digital Twins Definition Language (DTDL) V2](https://github. ``` > [!NOTE]-> The **geopoint** schema type is not part of the [DTDL specification](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/dtdlv2.md). IoT Central currently supports the **geopoint** schema type and the **location** semantic type for backwards compatibility. +> The **geopoint** schema type is not part of the [DTDL specification](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/DTDL.v2.md). IoT Central currently supports the **geopoint** schema type and the **location** semantic type for backwards compatibility. ## Send location data from a device |
iot-develop | Concepts Architecture | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/concepts-architecture.md | The following diagram shows the key elements of an IoT Plug and Play solution: ## Model repository -The [model repository](./concepts-model-repository.md) is a store for model and interface definitions. You define models and interfaces using the [Digital Twins Definition Language (DTDL) V2](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/dtdlv2.md). +The [model repository](./concepts-model-repository.md) is a store for model and interface definitions. You define models and interfaces using the [Digital Twins Definition Language (DTDL) V2](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/DTDL.v2.md). The web UI lets you manage the models and interfaces. |
iot-develop | Concepts Convention | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/concepts-convention.md | IoT Plug and Play devices should follow a set of conventions when they exchange A device can include [modules](../iot-hub/iot-hub-devguide-module-twins.md), or be implemented in an [IoT Edge module](../iot-edge/about-iot-edge.md) hosted by the IoT Edge runtime. -You describe the telemetry, properties, and commands that an IoT Plug and Play device implements with a [Digital Twins Definition Language (DTDL) V2](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/dtdlv2.md) _model_. There are two types of model referred to in this article: +You describe the telemetry, properties, and commands that an IoT Plug and Play device implements with a [Digital Twins Definition Language (DTDL) V2](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/DTDL.v2.md) _model_. There are two types of model referred to in this article: - **No component** - A model with no components. The model declares telemetry, properties, and commands as top-level elements in the contents section of the main interface. In the Azure IoT explorer tool, this model appears as a single _default component_. - **Multiple components** - A model composed of two or more interfaces. A main interface, which appears as the _default component_, with telemetry, properties, and commands. One or more interfaces declared as components with more telemetry, properties, and commands. On a device or module, multiple component interfaces use command names with the Now that you've learned about IoT Plug and Play conventions, here are some other resources: -- [Digital Twins Definition Language (DTDL) V2](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/dtdlv2.md)+- [Digital Twins Definition Language (DTDL) V2](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/DTDL.v2.md) - [C device SDK](https://github.com/Azure/azure-iot-sdk-c/) - [IoT REST API](/rest/api/iothub/device) - [IoT Plug and Play modeling guide](concepts-modeling-guide.md) |
iot-develop | Concepts Developer Guide Device | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/concepts-developer-guide-device.md | This guide describes the basic steps required to create a device, module, or IoT To build an IoT Plug and Play device, module, or IoT Edge module, follow these steps: 1. Ensure your device is using either the MQTT or MQTT over WebSockets protocol to connect to Azure IoT Hub.-1. Create a [Digital Twins Definition Language (DTDL) V2](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/dtdlv2.md) model to describe your device. To learn more, see [Understand components in IoT Plug and Play models](concepts-modeling-guide.md). +1. Create a [Digital Twins Definition Language (DTDL) V2](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/DTDL.v2.md) model to describe your device. To learn more, see [Understand components in IoT Plug and Play models](concepts-modeling-guide.md). 1. Update your device or module to announce the `model-id` as part of the device connection. 1. Implement telemetry, properties, and commands that follow the [IoT Plug and Play conventions](concepts-convention.md) Once your device or module implementation is ready, use the [Azure IoT explorer] Now that you've learned about IoT Plug and Play device development, here are some other resources: -- [Digital Twins Definition Language (DTDL) V2](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/dtdlv2.md)+- [Digital Twins Definition Language (DTDL) V2](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/DTDL.v2.md) - [C device SDK](https://github.com/Azure/azure-iot-sdk-c/) - [IoT REST API](/rest/api/iothub/device) - [Understand components in IoT Plug and Play models](concepts-modeling-guide.md) |
iot-develop | Concepts Developer Guide Service | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/concepts-developer-guide-service.md | The service SDKs let you access device information from a solution component suc Now that you've learned about device modeling, here are some more resources: -- [Digital Twins Definition Language (DTDL) V2](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/dtdlv2.md)+- [Digital Twins Definition Language (DTDL) V2](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/DTDL.v2.md) - [C device SDK](https://github.com/Azure/azure-iot-sdk-c/) - [IoT REST API](/rest/api/iothub/device) - [IoT Plug and Play modeling guide](concepts-modeling-guide.md) |
iot-develop | Concepts Digital Twin | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/concepts-digital-twin.md | -An IoT Plug and Play device implements a model described by the [Digital Twins Definition Language (DTDL) V2](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/dtdlv2.md) schema. A model describes the set of components, properties, commands, and telemetry messages that a particular device can have. +An IoT Plug and Play device implements a model described by the [Digital Twins Definition Language (DTDL) V2](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/DTDL.v2.md) schema. A model describes the set of components, properties, commands, and telemetry messages that a particular device can have. -IoT Plug and Play uses DTDL version 2. For more information about this version, see the [Digital Twins Definition Language (DTDL) V2](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/dtdlv2.md) specification on GitHub. +IoT Plug and Play uses DTDL version 2. For more information about this version, see the [Digital Twins Definition Language (DTDL) V2](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/DTDL.v2.md) specification on GitHub. > [!NOTE] > DTDL isn't exclusive to IoT Plug and Play. Other IoT services, such as [Azure Digital Twins](../digital-twins/overview.md), use it to represent entire environments such as buildings and energy networks. |
iot-develop | Concepts Model Parser | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/concepts-model-parser.md | -The Digital Twins Definition Language (DTDL) is described in the [DTDL Specification V2](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/dtdlv2.md). Users can use the _Digital Twins Model Parser_ NuGet package to validate and query a DTDL model. The DTDL model may be defined in multiple files. +The Digital Twins Definition Language (DTDL) is described in the [DTDL Specification V2](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/DTDL.v2.md). Users can use the _Digital Twins Model Parser_ NuGet package to validate and query a DTDL model. The DTDL model may be defined in multiple files. ## Install the DTDL model parser |
iot-develop | Concepts Model Repository | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/concepts-model-repository.md | -The device models repository (DMR) enables device builders to manage and share IoT Plug and Play device models. The device models are JSON LD documents defined using the [Digital Twins Modeling Language (DTDL) V2](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/dtdlv2.md). +The device models repository (DMR) enables device builders to manage and share IoT Plug and Play device models. The device models are JSON LD documents defined using the [Digital Twins Modeling Language (DTDL) V2](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/DTDL.v2.md). The DMR defines a pattern to store DTDL interfaces in a folder structure based on the device twin model identifier (DTMI). You can locate an interface in the DMR by converting the DTMI to a relative path. For example, the `dtmi:com:example:Thermostat;1` DTMI translates to `/dtmi/com/example/thermostat-1.json` and can be obtained from the public base URL `devicemodels.azure.com` at the URL [https://devicemodels.azure.com/dtmi/com/example/thermostat-1.json](https://devicemodels.azure.com/dtmi/com/example/thermostat-1.json). |
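The DTMI-to-path convention described in this row is straightforward to apply in code. The sketch below converts a DTMI to its repository path and downloads the interface from the public endpoint mentioned above; the conversion rule (lowercase the DTMI, turn `:` into `/` and `;` into `-`, then append `.json`) is inferred from the example in the row.

```python
import requests


def dtmi_to_path(dtmi: str) -> str:
    """Convert a DTMI such as 'dtmi:com:example:Thermostat;1'
    to a DMR path such as '/dtmi/com/example/thermostat-1.json'."""
    return "/" + dtmi.lower().replace(":", "/").replace(";", "-") + ".json"


dtmi = "dtmi:com:example:Thermostat;1"
url = "https://devicemodels.azure.com" + dtmi_to_path(dtmi)

model = requests.get(url).json()
print(url)
print(model["@id"], "-", model.get("displayName"))
```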
iot-develop | Howto Convert To Pnp | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/howto-convert-to-pnp.md | In summary, the sample implements the following capabilities: ## Design a model -Every IoT Plug and Play device has a model that describes the features and capabilities of the device. The model uses the [Digital Twin Definition Language (DTDL) V2](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/dtdlv2.md) to describe the device capabilities. +Every IoT Plug and Play device has a model that describes the features and capabilities of the device. The model uses the [Digital Twin Definition Language (DTDL) V2](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/DTDL.v2.md) to describe the device capabilities. For a simple model that maps the existing capabilities of your device, use the *Telemetry*, *Property*, and *Command* DTDL elements. |
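To complement the *Telemetry* and *Property* elements, here is what a simple *Command* element can look like in a DTDL V2 model, again written as a Python dictionary for illustration. The command name and payload schemas are assumptions for the example, not taken from the article.

```python
import json

# An illustrative DTDL V2 Command element with a request payload and a response.
command_element = {
    "@type": "Command",
    "name": "getMaxMinReport",
    "displayName": "Get max/min report",
    "request": {
        "name": "since",
        "displayName": "Since",
        "schema": "dateTime",
    },
    "response": {
        "name": "report",
        "schema": "string",
    },
}

# This element would be placed in the "contents" array of an Interface.
print(json.dumps(command_element, indent=2))
```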
iot-develop | Howto Manage Digital Twin | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/howto-manage-digital-twin.md | A name can be 1-64 characters long. **Property value** -The value must be a valid [DTDL V2 Property](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/dtdlv2.md#property). +The value must be a valid [DTDL V2 Property](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/DTDL.v2.md#property). -All primitive types are supported. Within complex types, enums, maps, and objects are supported. To learn more, see [DTDL V2 Schemas](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/dtdlv2.md#schemas). +All primitive types are supported. Within complex types, enums, maps, and objects are supported. To learn more, see [DTDL V2 Schemas](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/DTDL.v2.md#schema). Properties don't support array or any complex schema with an array. |
iot-develop | Overview Iot Plug And Play | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/overview-iot-plug-and-play.md | IoT Plug and Play enables solution builders to integrate IoT devices with their You can group these elements in interfaces to reuse across models to make collaboration easier and to speed up development. -To make IoT Plug and Play work with [Azure Digital Twins](../digital-twins/overview.md), you define models and interfaces using the [Digital Twins Definition Language (DTDL) V2](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/dtdlv2.md). IoT Plug and Play and the DTDL are open to the community, and Microsoft welcomes collaboration with customers, partners, and industry. Both are based on open W3C standards such as JSON-LD and RDF, which enables easier adoption across services and tooling. +To make IoT Plug and Play work with [Azure Digital Twins](../digital-twins/overview.md), you define models and interfaces using the [Digital Twins Definition Language (DTDL) V2](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/DTDL.v2.md). IoT Plug and Play and the DTDL are open to the community, and Microsoft welcomes collaboration with customers, partners, and industry. Both are based on open W3C standards such as JSON-LD and RDF, which enables easier adoption across services and tooling. There's no extra cost for using IoT Plug and Play and DTDL. Standard rates for [Azure IoT Hub](../iot-hub/about-iot-hub.md) and other Azure services remain the same. |
iot-dps | Tutorial Custom Hsm Enrollment Group X509 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/tutorial-custom-hsm-enrollment-group-x509.md | To add the root CA certificate to your DPS instance, follow these steps: :::image type="content" source="./media/tutorial-custom-hsm-enrollment-group-x509/verify-root-certificate.png" alt-text="Screenshot that shows the verified root C A certificate in the list of certificates."::: +## (Optional) Manual verification of root certificate +If you didn't choose to automatically verify the certificate during upload, you can manually prove possession: ++1. Select the new CA certificate. ++1. Select Generate Verification Code in the Certificate Details dialog. ++1. Create a certificate that contains the verification code. For example, if you're using the Bash script supplied by Microsoft, run `./certGen.sh create_verification_certificate "<verification code>"` to create a certificate named `verification-code.cert.pem`, replacing `<verification code>` with the previously generated verification code. For more information, you can download the [files](https://github.com/Azure/azure-iot-sdk-c/tree/main/tools/CACertificates) relevant to your system to a working folder and follow the instructions in the [Managing CA certificates readme](https://github.com/Azure/azure-iot-sdk-c/blob/main/tools/CACertificates/CACertificateOverview.md) to perform proof-of-possession on a CA certificate. + +1. Upload `verification-code.cert.pem` to your provisioning service in the Certificate Details dialog. ++1. Select Verify. + ## Update the certificate store on Windows-based devices On non-Windows devices, you can pass the certificate chain from the code as the certificate store. |
iot-hub | Iot Hub Devguide Messages Construct | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-devguide-messages-construct.md | The **iothub-connection-auth-method** property contains a JSON serialized object ```json {- "scope": "{ hub | device }", + "scope": "{ hub | device | module }", "type": "{ symkey | sas | x509 }", "issuer": "iothub" } The **iothub-connection-auth-method** property contains a JSON serialized object ## Next steps * For information about message size limits in IoT Hub, see [IoT Hub quotas and throttling](iot-hub-devguide-quotas-throttling.md).- * To learn how to create and read IoT Hub messages in various programming languages, see the [Quickstarts](../iot-develop/quickstart-send-telemetry-iot-hub.md?pivots=programming-language-nodejs).- * To learn about the structure of non-telemetry events generated by IoT Hub, see [IoT Hub non-telemetry event schemas](iot-hub-non-telemetry-event-schema.md). |
iot-hub | Iot Hub X509 Certificate Concepts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-x509-certificate-concepts.md | Digital signing can be used to determine whether the data has been modified in t ## Next steps -To learn more about the fields that make up an X.509 certificate, see [Understand X.509 public key certificates](tutorial-x509-certificates.md). +To learn more about the fields that make up an X.509 certificate, see [X.509 certificates](reference-x509-certificates.md). If you're already familiar with X.509 certificates, and you want to generate test versions that you can use to authenticate to your IoT hub, see the following articles: |
iot-hub | Reference X509 Certificates | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/reference-x509-certificates.md | + + Title: X.509 certificates | Microsoft Docs +description: Reference documentation containing information about X.509 certificates, including certificate fields, certificate extensions, and certificate formats. +++++ Last updated : 02/03/2022+++#Customer intent: As a developer, I want to be able to use X.509 certificates to authenticate devices to an IoT hub, and I need to know what file formats, fields, and other details are supported by Azure IoT Hub. +++# X.509 certificates ++X.509 certificates are digital documents that represent a user, computer, service, or device. They're issued by a certification authority (CA), subordinate CA, or registration authority and contain the public key of the certificate subject. They don't contain the subject's private key, which must be stored securely. Public key certificates are documented by [RFC 5280](https://tools.ietf.org/html/rfc5280). They're digitally signed and, in general, contain the following information: ++* Information about the certificate subject +* The public key that corresponds to the subject's private key +* Information about the issuing CA +* The supported encryption and/or digital signing algorithms +* Information to determine the revocation and validity status of the certificate ++## Certificate fields ++There are three incremental versions of the X.509 certificate standard, and each subsequent version added certificate fields to the standard: ++* Version 1 (v1), published in 1988, follows the initial X.509 standard for certificates. +* Version 2 (v2), published in 1993, adds two fields to the fields included in Version 1. +* Version 3 (v3), published in 2008, represents the current version of the X.509 standard. This version adds support for certificate extensions. ++This section is meant as a general reference for the certificate fields and certificate extensions available in X.509 certificates. For more information about certificate fields and certificate extensions, including data types, constraints, and other details, see the [RFC 5280](https://tools.ietf.org/html/rfc5280) specification. ++### Version 1 fields ++The following table describes Version 1 certificate fields for X.509 certificates. All of the fields included in this table are available in subsequent X.509 certificate versions. ++| Name | Description | +| | | +| [Version](https://www.rfc-editor.org/rfc/rfc5280#section-4.1.2.1) | An integer that identifies the version number of the certificate.| +| [Serial Number](https://www.rfc-editor.org/rfc/rfc5280#section-4.1.2.2) | An integer that represents the unique number for each certificate issued by a certificate authority (CA). | +| [Signature](https://www.rfc-editor.org/rfc/rfc5280#section-4.1.2.3) | The identifier for the cryptographic algorithm used by the CA to sign the certificate. The value includes both the identifier of the algorithm and any optional parameters used by that algorithm, if applicable. | +| [Issuer](https://www.rfc-editor.org/rfc/rfc5280#section-4.1.2.4) | The distinguished name (DN) of the certificate's issuing CA. | +| [Validity](https://www.rfc-editor.org/rfc/rfc5280#section-4.1.2.5) | The inclusive time period for which the certificate is considered valid. | +| [Subject](https://www.rfc-editor.org/rfc/rfc5280#section-4.1.2.6) | The distinguished name (DN) of the certificate subject. 
| +| [Subject Public Key Info](https://www.rfc-editor.org/rfc/rfc5280#section-4.1.2.7) | The public key owned by the certificate subject. | ++### Version 2 fields ++The following table describes the fields added for Version 2, containing information about the certificate issuer. These fields are, however, rarely used. All of the fields included in this table are available in subsequent X.509 certificate versions. ++| Name | Description | +| | | +| [Issuer Unique ID](https://www.rfc-editor.org/rfc/rfc5280#section-4.1.2.8) | A unique identifier that represents the issuing CA, as defined by the issuing CA. | +| [Subject Unique ID](https://www.rfc-editor.org/rfc/rfc5280#section-4.1.2.8) | A unique identifier that represents the certificate subject, as defined by the issuing CA. | ++### Version 3 fields ++The following table describes the field added for Version 3, representing a collection of X.509 certificate extensions. ++| Name | Description | +| | | +| [Extensions](https://www.rfc-editor.org/rfc/rfc5280#section-4.1.2.9) | A collection of standard and Internet-specific certificate extensions. For more information about the certificate extensions available to X.509 v3 certificates, see [Certificate extensions](#certificate-extensions). | ++## Certificate extensions ++Certificate extensions, introduced with Version 3, provide methods for associating more attributes with users or public keys and for managing relationships between certificate authorities. For more information about certificate extensions, see the [Certificate Extensions](https://www.rfc-editor.org/rfc/rfc5280#section-4.2) section of the [RFC 5280](https://tools.ietf.org/html/rfc5280) specification. ++### Standard extensions ++The extensions included in this section are defined as part of the X.509 standard, for use in the Internet public key infrastructure (PKI). ++| Name | Description | +| | | +| [Authority Key Identifier](https://www.rfc-editor.org/rfc/rfc5280#section-4.2.1.1) | An identifier that represents either the certificate subject and the serial number of the CA certificate that issued this certificate, or a hash of the public key of the issuing CA. | +| [Subject Key Identifier](https://www.rfc-editor.org/rfc/rfc5280#section-4.2.1.2) | A hash of the current certificate's public key. | +| [Key Usage](https://www.rfc-editor.org/rfc/rfc5280#section-4.2.1.3) | A bitmapped value that defines the services for which a certificate can be used. | +| [Private Key Usage Period](https://www.rfc-editor.org/rfc/rfc5280#section-4.2.1.3) | The validity period for the private key portion of a key pair. | +| [Certificate Policies](https://www.rfc-editor.org/rfc/rfc5280#section-4.2.1.4) | A collection of policy information, used to validate the certificate subject. | +| [Policy Mappings](https://www.rfc-editor.org/rfc/rfc5280#section-4.2.1.5) | A collection of policy mappings, each of which maps a policy in one organization to policy in another organization. | +| [Subject Alternative Name](https://www.rfc-editor.org/rfc/rfc5280#section-4.2.1.6) | A collection of alternate names for the subject. | +| [Issuer Alternative Name](https://www.rfc-editor.org/rfc/rfc5280#section-4.2.1.7) | A collection of alternate names for the issuing CA. | +| [Subject Directory Attributes](https://www.rfc-editor.org/rfc/rfc5280#section-4.2.1.8) | A collection of attributes from an X.500 or LDAP directory. 
| +| [Basic Constraints](https://www.rfc-editor.org/rfc/rfc5280#section-4.2.1.9) | A collection of constraints that allow the certificate to designate whether it's issued to a CA, or to a user, computer, device, or service. This extension also includes a path length constraint that limits the number of subordinate CAs that can exist. | +| [Name Constraints](https://www.rfc-editor.org/rfc/rfc5280#section-4.2.1.10) | A collection of constraints that designate which namespaces are allowed in a CA-issued certificate. | +| [Policy Constraints](https://www.rfc-editor.org/rfc/rfc5280#section-4.2.1.11) | A collection of constraints that can be used to prohibit policy mappings between CAs. | +| [Extended Key Usage](https://www.rfc-editor.org/rfc/rfc5280#section-4.2.1.12) | A collection of key purpose values that indicate how a certificate's public key can be used, beyond the purposes identified in the **Key Usage** extension. | +| [CRL Distribution Points](https://www.rfc-editor.org/rfc/rfc5280#section-4.2.1.13) | A collection of URLs where the base certificate revocation list (CRL) is published. | +| [Inhibit anyPolicy](https://www.rfc-editor.org/rfc/rfc5280#section-4.2.1.14) | Inhibits the use of the **All Issuance Policies** OID (2.5.29.32.0) in subordinate CA certificates +| [Freshest CRL](https://www.rfc-editor.org/rfc/rfc5280#section-4.2.1.15) | This extension, also known as the **Delta CRL Distribution Point**, contains one or more URLs where the issuing CA's delta CRL is published. | ++### Private Internet extensions ++The extensions included in this section are similar to standard extensions, and may be used to direct applications to online information about the issuing CA or certificate subject. ++| Name | Description | +| | | +| [Authority Information Access](https://www.rfc-editor.org/rfc/rfc5280#section-4.2.2.1) | A collection of entries that describe the format and location of additional information provided by the issuing CA. | +| [Subject Information Access](https://www.rfc-editor.org/rfc/rfc5280#section-4.2.2.2) | A collection of entries that describe the format and location of additional information provided by the certificate subject. | ++## Certificate formats ++Certificates can be saved in various formats. Azure IoT Hub authentication typically uses the Privacy-Enhanced Mail (PEM) and Personal Information Exchange (PFX) formats. The following table describes commonly used files and formats used to represent certificates. ++| Format | Description | +| | | +| Binary certificate | A raw form binary certificate using Distinguished Encoding Rules (DER) ASN.1 encoding. | +| ASCII PEM format | A PEM certificate (.pem) file contains a Base64-encoded certificate beginning with `--BEGIN CERTIFICATE--` and ending with `--END CERTIFICATE--`. One of the most common formats for X.509 certificates, PEM format is required by IoT Hub when uploading certain certificates, such as device certificates. | +| ASCII PEM key | Contains a Base64-encoded DER key, optionally with more metadata about the algorithm used for password protection. | +| PKCS #7 certificate | A format designed for the transport of signed or encrypted data. It can include the entire certificate chain. It's defined by [RFC 2315](https://tools.ietf.org/html/rfc2315). | +| PKCS #8 key | The format for a private key store. It's defined by [RFC 5208](https://tools.ietf.org/html/rfc5208). | +| PKCS #12 key and certificate | A complex format that can store and protect a key and the entire certificate chain. 
It's commonly used with a .p12 or .pfx extension. PKCS #12 is synonymous with the PFX format. It's defined by [RFC 7292](https://tools.ietf.org/html/rfc7292). | ++## For more information ++For more information about X.509 certificates and how they're used in IoT Hub, see the following articles: ++* [The layman's guide to X.509 certificate jargon](https://techcommunity.microsoft.com/t5/internet-of-things/the-layman-s-guide-to-x-509-certificate-jargon/ba-p/2203540) +* [Understand how X.509 CA certificates are used in IoT](./iot-hub-x509ca-concept.md) |
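To make the certificate fields and extensions listed in this new reference article more concrete, here is a minimal Python sketch that reads them from a PEM file with the `cryptography` package; the file path is a placeholder, not from the article.

```python
# Minimal sketch (not from the article): reading X.509 v1 fields and v3 extensions
# from a PEM-encoded certificate. Requires: pip install cryptography
from cryptography import x509

with open("device-cert.pem", "rb") as f:  # example path
    cert = x509.load_pem_x509_certificate(f.read())

print("Version:      ", cert.version)                     # v1 field
print("Serial number:", cert.serial_number)               # v1 field
print("Issuer:       ", cert.issuer.rfc4514_string())     # v1 field
print("Subject:      ", cert.subject.rfc4514_string())    # v1 field
print("Validity:     ", cert.not_valid_before, "to", cert.not_valid_after)

# v3 extensions, for example Basic Constraints and Subject Alternative Name
basic = cert.extensions.get_extension_for_class(x509.BasicConstraints).value
print("Is a CA:      ", basic.ca)
try:
    san = cert.extensions.get_extension_for_class(x509.SubjectAlternativeName).value
    print("SAN DNS names:", san.get_values_for_type(x509.DNSName))
except x509.ExtensionNotFound:
    print("No Subject Alternative Name extension present.")
```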
iot-hub | Tutorial X509 Certificates | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/tutorial-x509-certificates.md | - Title: Understand X.509 public key certificates for Azure IoT Hub| Microsoft Docs -description: Understand X.509 public key certificates for Azure IoT Hub ----- Previously updated : 12/30/2022---#Customer intent: As a developer, I want to be able to use X.509 certificates to authenticate devices to an IoT hub. ---# Understand X.509 public key certificates --X.509 certificates are digital documents that represent a user, computer, service, or device. They're issued by a certification authority (CA), subordinate CA, or registration authority and contain the public key of the certificate subject. They don't contain the subject's private key, which must be stored securely. Public key certificates are documented by [RFC 5280](https://tools.ietf.org/html/rfc5280). They're digitally signed and, in general, contain the following information: --* Information about the certificate subject -* The public key that corresponds to the subject's private key -* Information about the issuing CA -* The supported encryption and/or digital signing algorithms -* Information to determine the revocation and validity status of the certificate --## Certificate fields --Over time there have been three certificate versions. Each version adds fields to the one before. Version 3 is current and contains version 1 and version 2 fields in addition to version 3 fields. Version 1 defined the following fields: --* **Version**: A value (1, 2, or 3) that identifies the version number of the certificate -* **Serial Number**: A unique number for each certificate issued by a CA -* **CA Signature Algorithm**: Name of the algorithm the CA uses to sign the certificate contents -* **Issuer Name**: The distinguished name (DN) of the certificate's issuing CA -* **Validity Period**: The time period for which the certificate is considered valid -* **Subject Name**: Name of the entity represented by the certificate -* **Subject Public Key Info**: Public key owned by the certificate subject --Version 2 added the following fields containing information about the certificate issuer. These fields are, however, rarely used. --* **Issuer Unique ID**: A unique identifier for the issuing CA as defined by the CA -* **Subject Unique ID**: A unique identifier for the certificate subject as defined by the issuing CA --Version 3 certificates added the following extensions: --* **Authority Key Identifier**: This extension can be set to one of two values: - * The subject of the CA and serial number of the CA certificate that issued this certificate - * A hash of the public key of the CA that issued this certificate -* **Subject Key Identifier**: Hash of the current certificate's public key -* **Key Usage** Defines the service for which a certificate can be used. 
This extension can be set to one or more of the following values: - * **Digital Signature** - * **Non-Repudiation** - * **Key Encipherment** - * **Data Encipherment** - * **Key Agreement** - * **Key Cert Sign** - * **CRL Sign** - * **Encipher Only** - * **Decipher Only** -* **Private Key Usage Period**: Validity period for the private key portion of a key pair -* **Certificate Policies**: Policies used to validate the certificate subject -* **Policy Mappings**: Maps a policy in one organization to policy in another -* **Subject Alternative Name**: List of alternate names for the subject -* **Issuer Alternative Name**: List of alternate names for the issuing CA -* **Subject Dir Attribute**: Attributes from an X.500 or LDAP directory -* **Basic Constraints**: Allows the certificate to designate whether it's issued to a CA, or to a user, computer, device, or service. This extension also includes a path length constraint that limits the number of subordinate CAs that can exist. -* **Name Constraints**: Designates which namespaces are allowed in a CA-issued certificate -* **Policy Constraints**: Can be used to prohibit policy mappings between CAs -* **Extended Key Usage**: Indicates how a certificate's public key can be used beyond the purposes identified in the **Key Usage** extension -* **CRL Distribution Points**: Contains one or more URLs where the base certificate revocation list (CRL) is published -* **Inhibit anyPolicy**: Inhibits the use of the **All Issuance Policies** OID (2.5.29.32.0) in subordinate CA certificates -* **Freshest CRL**: Contains one or more URLs where the issuing CA's delta CRL is published -* **Authority Information Access**: Contains one or more URLs where the issuing CA certificate is published -* **Subject Information Access**: Contains information about how to retrieve more details for a certificate subject --## Certificate formats --Certificates can be saved in various formats. Azure IoT Hub authentication typically uses the Privacy-Enhanced Mail (PEM) and Personal Information Exchange (PFX) formats. --### Binary certificate --A raw form binary certificate using Distinguished Encoding Rules (DER) ASN.1 encoding. --### ASCII PEM format --A PEM certificate (.pem) file contains a Base64-encoded certificate beginning with `--BEGIN CERTIFICATE--` and ending with `--END CERTIFICATE--`. One of the most common formats for X.509 certificates, PEM format is required by IoT Hub when uploading certain certificates. --### ASCII PEM key --Contains a Base64-encoded DER key, optionally with more metadata about the algorithm used for password protection. --### PKCS #7 certificate --A format designed for the transport of signed or encrypted data. It's defined by [RFC 2315](https://tools.ietf.org/html/rfc2315). It can include the entire certificate chain. --### PKCS #8 key --The format for a private key store defined by [RFC 5208](https://tools.ietf.org/html/rfc5208). --### PKCS #12 key and certificate --A complex format that can store and protect a key and the entire certificate chain. It's commonly used with a .pfx extension. PKCS #12 is synonymous with the PFX format. 
--## For more information --For more information, see the following articles: --* [The layman's guide to X.509 certificate jargon](https://techcommunity.microsoft.com/t5/internet-of-things/the-layman-s-guide-to-x-509-certificate-jargon/ba-p/2203540) -* [Understand how X.509 CA certificates are used in IoT](./iot-hub-x509ca-concept.md) --## Next steps --If you're already familiar with X.509 certificates, and you want to generate test versions that you can use to authenticate to your IoT hub, see the following articles: --* [Tutorial: Use Microsoft-supplied scripts to create test certificates](tutorial-x509-scripts.md) -* [Tutorial: Use OpenSSL to create test certificates](tutorial-x509-openssl.md) -* [Tutorial: Use OpenSSL to create self-signed certificates](tutorial-x509-self-sign.md) --If you have a root CA certificate or subordinate CA certificate and you want to upload it to your IoT hub, you must verify that you own that certificate. For more information, see [Tutorial: Upload and verify a CA certificate to IoT Hub](tutorial-x509-prove-possession.md). |
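The certificate formats described above (DER, PEM, PKCS #12/PFX) can be converted with common tooling; here is a minimal Python sketch using the `cryptography` package, with example file names and an example PFX password that are not part of the article.

```python
# Minimal sketch (not from the article): re-encoding a PEM certificate as DER and
# unpacking a PKCS #12 (.pfx/.p12) bundle. Requires: pip install cryptography
from cryptography import x509
from cryptography.hazmat.primitives.serialization import Encoding, pkcs12

# PEM -> DER re-encoding of the same certificate
with open("device-cert.pem", "rb") as f:
    cert = x509.load_pem_x509_certificate(f.read())
with open("device-cert.der", "wb") as f:
    f.write(cert.public_bytes(Encoding.DER))

# Extract the private key and certificate chain from a PKCS #12 bundle
with open("device-cert.pfx", "rb") as f:
    key, leaf_cert, additional_certs = pkcs12.load_key_and_certificates(
        f.read(), b"pfx-password"  # example password
    )
print("Leaf subject:", leaf_cert.subject.rfc4514_string())
print("Chain length:", len(additional_certs))
```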
iot-hub | Tutorial X509 Introduction | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/tutorial-x509-introduction.md | Before starting any of the articles in this tutorial, you should be familiar wit - For an introduction to concepts that underlie the use of X.509 certificates, see [Understand public key cryptography and X.509 public key infrastructure](iot-hub-x509-certificate-concepts.md). -- For a quick review of the fields that can be present in an X.509 certificate, see the [Certificate fields](tutorial-x509-certificates.md#certificate-fields) section of [Understand X.509 public key certificates](tutorial-x509-certificates.md).+- For a quick review of the fields that can be present in an X.509 certificate, see the [Certificate fields](reference-x509-certificates.md#certificate-fields) section of [Understand X.509 public key certificates](reference-x509-certificates.md). ## X.509 certificate scenario paths Using a CA-signed certificate chain backed by a PKI to authenticate a device pro ## Next steps -To learn more about the fields that make up an X.509 certificate, see [Understand X.509 public key certificates](tutorial-x509-certificates.md). +To learn more about the fields that make up an X.509 certificate, see [X.509 certificates](reference-x509-certificates.md). If you're already familiar with X.509 certificates, and you want to generate test versions that you can use to authenticate to your IoT hub, see the following articles: |
load-balancer | Troubleshoot Rhc | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/troubleshoot-rhc.md | The below table describes the RHC logic used to determine the health state of yo | Resource health status | Description | | | | | Available | Your standard load balancer resource is healthy and available. |-| Degraded | Your standard load balancer has platform or user initiated events impacting performance. The Datapath Availability metric has reported less than 90% but greater than 25% health for at least two minutes. You will experience moderate to severe performance impact. -| Unavailable | Your standard load balancer resource is not healthy. The Datapath Availability metric has reported less the 25% health for at least two minutes. You will experience significant performance impact or lack of availability for inbound connectivity. There may be user or platform events causing unavailability. | -| Unknown | Resource health status for your standard load balancer resource has not been updated yet or has not received Data Path availability information for the last 10 minutes. This state should be transient and will reflect correct status as soon as data is received. | +| Degraded | Your standard load balancer has platform or user-initiated events impacting performance. The Datapath Availability metric has reported less than 90% but greater than 25% health for at least two minutes. You'll experience moderate to severe performance impact. +| Unavailable | Your standard load balancer resource isn't healthy. The Datapath Availability metric has reported less than 25% health for at least two minutes. You'll experience significant performance impact or lack of availability for inbound connectivity. There may be user or platform events causing unavailability. | +| Unknown | Resource health status for your standard load balancer resource hasn't been updated yet or hasn't received Data Path availability information for the last 10 minutes. This state should be transient and will reflect correct status as soon as data is received. | ## About the metrics we'll use-The two metrics to be used are *Data path availability* and *Health probe status* and it is important to understand their meaning to derive correct insights. +The two metrics to be used are *Data path availability* and *Health probe status*, and it's important to understand their meaning to derive correct insights. ## Data path availability-The data path availability metric is generated by a TCP ping every 25 seconds on all frontend ports that have load-balancing and inbound NAT rules configured. This TCP ping will then be routed to any of the healthy (probed up) backend instances.
If the service receives a response to the ping, it's considered a success and the sum of the metric will be iterated once, if it fails it won't. The count of this metric is 1/100 of the total TCP pings per sample period. Thus, we want to consider the average, which will show the average of sum/count for the time period. The data path availability metric aggregated by average thus gives us a percentage success rate for TCP pings on your frontend IP:port for each of your load-balancing and inbound NAT rules. ## Health probe status-The health probe status metric is generated by a ping of the protocol defined in the health probe. This ping is sent to each instance in the backend pool and on the port defined in the health probe. For HTTP and HTTPS probes, a successful ping requires an HTTP 200 OK response whereas with TCP probes any response is considered successful. The consecutive successes or failures of each probe then determines whether a backend instance is healthy and able to receive traffic for the load-balancing rules to which the backend pool is assigned. Similar to data path availability we use the average aggregation, which tells us the average successful/total pings during the sampling interval. This health probe status value indicates the backend health in isolation from your load balancer by probing your backend instances without sending traffic through the frontend. +The health probe status metric is generated by a ping of the protocol defined in the health probe. This ping is sent to each instance in the backend pool and on the port defined in the health probe. For HTTP and HTTPS probes, a successful ping requires an HTTP 200 OK response whereas with TCP probes any response is considered successful. The consecutive successes or failures of each probe determine the health of the backend instance and whether the assigned backend pool is able to receive traffic. Similar to data path availability we use the average aggregation, which tells us the average successful/total pings during the sampling interval. This health probe status value indicates the backend health in isolation from your load balancer by probing your backend instances without sending traffic through the frontend. >[!IMPORTANT] >Health probe status is sampled on a one minute basis. This can lead to minor fluctuations in an otherwise steady value. For example, if there are two backend instances, one probed up and one probed down, the health probe service may capture 7 samples for the healthy instance and 6 for the unhealthy instance. This will lead to a previously steady value of 50 showing as 46.15 for a one minute interval. The health probe status metric is generated by a ping of the protocol defined in ## Diagnose degraded and unavailable load balancers As outlined in the [resource health article](load-balancer-standard-diagnostics.md#resource-health-status), a degraded load balancer is one that shows between 25% and 90% data path availability, and an unavailable load balancer is one with less than 25% data path availability, over a two-minute period. These same steps can be taken to investigate the failure you see in any health probe status or data path availability alerts you've configured. We'll explore the case where we've checked our resource health and found our load balancer to be unavailable with a data path availability of 0% - our service is down. -First, we go to the detailed metrics view of our load balancer insights blade. 
You can do this via your load balancer resource blade or the link in your resource health message. Next we navigate to the Frontend and Backend availability tab and review a thirty-minute window of the time period when the degraded or unavailable state occurred. If we see our data path availability has been 0%, we know there's an issue preventing traffic for all of our load-balancing and inbound NAT rules and can see how long this impact has lasted. +First, we go to the detailed metrics view of our load balancer insights page in the Azure portal. You can do this via your load balancer resource page or the link in your resource health message. Next we navigate to the Frontend and Backend availability tab and review a thirty-minute window of the time period when the degraded or unavailable state occurred. If we see our data path availability has been 0%, we know there's an issue preventing traffic for all of our load-balancing and inbound NAT rules and can see how long this impact has lasted. -The next place we need to look is our health probe status metric to determine whether our data path is unavailable is because we have no healthy backend instances to serve traffic. If we have at least one healthy backend instance for all of our load-balancing and inbound rules, we know it is not our configuration causing our data paths to be unavailable. This scenario indicates an Azure platform issue, while rare, do not fret if you find these as an automated alert is sent to our team to rapidly resolve all platform issues. +The next place we need to look is our health probe status metric to determine whether our data path is unavailable because we have no healthy backend instances to serve traffic. If we have at least one healthy backend instance for all of our load-balancing and inbound rules, we know it isn't our configuration causing our data paths to be unavailable. This scenario indicates an Azure platform issue. While platform issues are rare, an automated alert is sent to our team to rapidly resolve all platform issues. ## Diagnose health probe failures Let's say we check our health probe status and find out that all instances are showing as unhealthy. This finding explains why our data path is unavailable as traffic has nowhere to go. We should then go through the following checklist to rule out common configuration errors:-* Check the CPU utilization for your resources to check if your instances are in fact healthy - * You can check this +* Check the CPU utilization for your resources to determine if they are under high load. + * You can check this by viewing the resource's Percentage CPU metric via the Metrics page. Learn how to [Troubleshoot high-CPU issues for Azure virtual machines](/troubleshoot/azure/virtual-machines/troubleshoot-high-cpu-issues-azure-windows-vm). * If using an HTTP or HTTPS probe check if the application is healthy and responsive- * Validate this is functional by directly accessing the applications through the private IP address or instance-level public IP address associated with your backend instance + * Validate that the application is functional by directly accessing the applications through the private IP address or instance-level public IP address associated with your backend instance * Review the Network Security Groups applied to our backend resources.
Ensure that there are no rules of a higher priority than AllowAzureLoadBalancerInBound that will block the health probe * You can do this by visiting the Networking blade of your backend VMs or Virtual Machine Scale Sets * If you find this NSG issue is the case, move the existing Allow rule or create a new high priority rule to allow AzureLoadBalancer traffic-* Check your OS. Ensure your VMs are listening on the probe port and review their OS firewall rules to ensure they are not blocking the probe traffic originating from IP address `168.63.129.16` +* Check your OS. Ensure your VMs are listening on the probe port and review their OS firewall rules to ensure they aren't blocking the probe traffic originating from IP address `168.63.129.16` * You can check listening ports by running `netstat -a` from a Windows command prompt or `netstat -l` from a Linux terminal * Don't place a firewall NVA VM in the backend pool of the load balancer, use [user-defined routes](../virtual-network/virtual-networks-udr-overview.md#user-defined) to route traffic to backend instances through the firewall * Ensure you're using the right protocol, if using HTTP to probe a port listening for a non-HTTP application the probe will fail |
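The two metrics discussed in this row can also be pulled programmatically; here is a minimal Python sketch using the `azure-monitor-query` client. It assumes `VipAvailability` and `DipAvailability` are the metric IDs behind *Data path availability* and *Health probe status*, and the load balancer resource ID is a placeholder.

```python
# Minimal sketch (assumed metric IDs and placeholder resource ID).
# Requires: pip install azure-monitor-query azure-identity
from datetime import timedelta
from azure.identity import DefaultAzureCredential
from azure.monitor.query import MetricsQueryClient, MetricAggregationType

lb_id = (
    "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/"
    "Microsoft.Network/loadBalancers/<lb-name>"
)
client = MetricsQueryClient(DefaultAzureCredential())

response = client.query_resource(
    lb_id,
    metric_names=["VipAvailability", "DipAvailability"],
    timespan=timedelta(minutes=30),           # the 30-minute window discussed above
    granularity=timedelta(minutes=1),
    aggregations=[MetricAggregationType.AVERAGE],
)

for metric in response.metrics:
    for series in metric.timeseries:
        for point in series.data:
            print(metric.name, point.timestamp, point.average)
```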
machine-learning | How To Attach Kubernetes Anywhere | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-attach-kubernetes-anywhere.md | Train model in cloud, deploy model on-premises | Cloud | Make use of cloud compu ## KubernetesCompute and legacy AksCompute -With AzureML CLI/Python SDK v1, you can deploy models on AKS using AksCompute target. Both KubernetesCompute target and AksCompute target support AKS integration, however they support it differently. Following table shows key differences. +With AzureML CLI/Python SDK v1, you can deploy models on AKS using AksCompute target. Both KubernetesCompute target and AksCompute target support AKS integration, however they support it differently. The following table shows their key differences: |Capabilities |AKS integration with AksCompute (legacy) |AKS integration with KubernetesCompute| |--|--|--| With AzureML CLI/Python SDK v1, you can deploy models on AKS using AksCompute ta |Batch inference | No | Yes | |Real-time inference new features | No new features development | Active roadmap | -With these key differences and overall AzureML evolves to use SDK/CLI v2, AzureML recommends you to use Kubernetes compute target to deploy models if you decide to use AKS for model deployment. +Given these key differences, and the overall AzureML evolution toward SDK/CLI v2, AzureML recommends that you use the Kubernetes compute target to deploy models if you decide to use AKS for model deployment. ## Next steps For any AzureML example, you only need to update the compute target name to your * Explore model deployment with online endpoint samples with CLI v2 - [https://github.com/Azure/azureml-examples/tree/main/cli/endpoints/online/kubernetes](https://github.com/Azure/azureml-examples/tree/main/cli/endpoints/online/kubernetes) * Explore batch endpoint samples with CLI v2 - [https://github.com/Azure/azureml-examples/tree/main/cli/endpoints/batch](https://github.com/Azure/azureml-examples/tree/main/cli/endpoints/batch) * Explore training job samples with SDK v2 -[https://github.com/Azure/azureml-examples/tree/main/sdk/python/jobs](https://github.com/Azure/azureml-examples/tree/main/sdk/python/jobs)-* Explore model deployment with online endpoint samples with SDK v2 -[https://github.com/Azure/azureml-examples/tree/main/sdk/python/endpoints/online/kubernetes](https://github.com/Azure/azureml-examples/tree/main/sdk/python/endpoints/online/kubernetes) +* Explore model deployment with online endpoint samples with SDK v2 -[https://github.com/Azure/azureml-examples/tree/main/sdk/python/endpoints/online/kubernetes](https://github.com/Azure/azureml-examples/tree/main/sdk/python/endpoints/online/kubernetes) |
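For the "update the compute target name" point in this row, here is a minimal SDK v2 sketch that points a command job at a Kubernetes compute target; the workspace IDs, compute name, environment, and script path are placeholders, not values from the article.

```python
# Minimal sketch (placeholder workspace IDs, compute name, environment, and code path).
# Requires: pip install azure-ai-ml azure-identity
from azure.ai.ml import MLClient, command
from azure.identity import DefaultAzureCredential

ml_client = MLClient(
    DefaultAzureCredential(),
    subscription_id="<sub-id>",
    resource_group_name="<rg>",
    workspace_name="<workspace>",
)

job = command(
    code="./src",                                          # folder containing train.py
    command="python train.py",
    environment="AzureML-sklearn-1.0-ubuntu20.04-py38-cpu@latest",
    compute="k8s-compute",                                 # your Kubernetes compute target name
    display_name="train-on-kubernetes",
)
ml_client.jobs.create_or_update(job)
```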
machine-learning | How To Inference Server Http | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-inference-server-http.md | Score: POST 127.0.0.1:<port>/score <logs> ``` -For example, when you launch the server followed the (end-to-end example)[#end-to-end-example]: +For example, when you launch the server after following the [end-to-end example](#end-to-end-example): ``` Azure ML Inferencing HTTP server v0.8.0 |
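To exercise the `POST 127.0.0.1:<port>/score` route shown in this row, here is a minimal Python sketch using `requests`; the port and request body are placeholders that depend on your server startup output and your `score.py`.

```python
# Minimal sketch (placeholder port and payload): calling the local scoring route.
import requests

port = 5001  # replace with the port the inference server reports at startup
payload = {"data": [[1.0, 2.0, 3.0, 4.0]]}  # shape depends on your score.py

response = requests.post(f"http://127.0.0.1:{port}/score", json=payload, timeout=10)
response.raise_for_status()
print(response.json())
```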
machine-learning | How To Secure Training Vnet | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-secure-training-vnet.md | In this article you learn how to secure the following training compute resources ## Compute instance/cluster with no public IP > [!IMPORTANT]-> If you have been using compute instances or compute clusters configured for no public IP without opting-in to the preview, you will need to delete and recreate them after January 20 (when the feature is generally available). +> If you have been using compute instances or compute clusters configured for no public IP without opting-in to the preview, you will need to delete and recreate them after January 20, 2023 (when the feature is generally available). > > If you were previously using the preview of no public IP, you may also need to modify what traffic you allow inbound and outbound, as the requirements have changed for general availability: > * Outbound requirements - Two additional outbound, which are only used for the management of compute instances and clusters. The destination of these service tags are owned by Microsoft: |
machine-learning | Quickstart Spark Data Wrangling | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/quickstart-spark-data-wrangling.md | + + Title: "Quickstart: Interactive Data Wrangling with Apache Spark (preview)" ++description: Learn how to perform interactive data wrangling with Apache Spark in Azure Machine Learning ++++++ Last updated : 02/06/2023+#Customer intent: As a Full Stack ML Pro, I want to perform interactive data wrangling in Azure Machine Learning, with Apache Spark. +++# Quickstart: Interactive Data Wrangling with Apache Spark in Azure Machine Learning (preview) ++++To handle interactive Azure Machine Learning notebook data wrangling, Azure Machine Learning integration, with Azure Synapse Analytics (preview), provides easy access to the Apache Spark framework. This access allows for Azure Machine Learning Notebook interactive data wrangling. ++In this quickstart guide, you'll learn how to perform interactive data wrangling using Azure Machine Learning Managed (Automatic) Synapse Spark compute, Azure Data Lake Storage (ADLS) Gen 2 storage account, and user identity passthrough. ++## Prerequisites +- An Azure subscription; if you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free) before you begin. +- An Azure Machine Learning workspace. See [Create workspace resources](./quickstart-create-resources.md). +- An Azure Data Lake Storage (ADLS) Gen 2 storage account. See [Create an Azure Data Lake Storage (ADLS) Gen 2 storage account](../storage/blobs/create-data-lake-storage-account.md). +- To enable this feature: + 1. Navigate to the Azure Machine Learning studio UI + 2. In the icon section at the top right of the screen, select **Manage preview features** (megaphone icon) + 3. In the **Managed preview feature** panel, toggle the **Run notebooks and jobs on managed Spark** feature to **on** + :::image type="content" source="media/quickstart-spark-data-wrangling/how-to-enable-managed-spark-preview.png" lightbox="media/quickstart-spark-data-wrangling/how-to-enable-managed-spark-preview.png" alt-text="Screenshot showing the option to enable the Managed Spark preview."::: ++## Add role assignments in Azure storage accounts ++We must ensure that the input and output data paths are accessible, before we start interactive data wrangling. To enable read and write access, assign **Contributor** and **Storage Blob Data Contributor** roles to the user identity of the logged-in user. ++To assign appropriate roles to the user identity: ++1. In the Microsoft Azure portal, navigate to the Azure Data Lake Storage (ADLS) Gen 2 storage account page +1. Select **Access Control (IAM)** from the left panel +1. Select **Add role assignment** ++ :::image type="content" source="media/quickstart-spark-data-wrangling/storage-account-add-role-assignment.png" lightbox="media/quickstart-spark-data-wrangling/storage-account-add-role-assignment.png" alt-text="Screenshot showing the Azure access keys screen."::: ++1. Find and select role **Storage Blob Data Contributor** +1. Select **Next** ++ :::image type="content" source="media/quickstart-spark-data-wrangling/add-role-assignment-choose-role.png" lightbox="media/quickstart-spark-data-wrangling/add-role-assignment-choose-role.png" alt-text="Screenshot showing the Azure add role assignment screen."::: ++1. Select **User, group, or service principal**. +1. Select **+ Select members**. +1. Search for the user identity below **Select** +1. 
Select the user identity from the list, so that it shows under **Selected members** +1. Select the appropriate user identity +1. Select **Next** ++ :::image type="content" source="media/quickstart-spark-data-wrangling/add-role-assignment-choose-members.png" lightbox="media/quickstart-spark-data-wrangling/add-role-assignment-choose-members.png" alt-text="Screenshot showing the Azure add role assignment screen Members tab."::: ++1. Select **Review + Assign** ++ :::image type="content" source="media/quickstart-spark-data-wrangling/add-role-assignment-review-and-assign.png" lightbox="media/quickstart-spark-data-wrangling/add-role-assignment-review-and-assign.png" alt-text="Screenshot showing the Azure add role assignment screen review and assign tab."::: +1. Repeat steps 2-13 for **Contributor** role assignment. ++Once the user identity has the appropriate roles assigned, data in the Azure storage account should become accessible. ++## Managed (Automatic) Spark compute in Azure Machine Learning Notebooks ++A Managed (Automatic) Spark compute is available in Azure Machine Learning Notebooks by default. To access it in a notebook, start in the **Compute** selection menu, and select **AzureML Spark Compute** under **Azure Machine Learning Spark**. +++## Interactive data wrangling with Titanic data ++> [!TIP] +> Data wrangling with a Managed (Automatic) Spark compute, and user identity passthrough for data access in a Azure Data Lake Storage (ADLS) Gen 2 storage account, both require the lowest number of configuration steps. ++The data wrangling code shown here uses the `titanic.csv` file, available [here](https://github.com/Azure/azureml-examples/blob/main/sdk/python/jobs/spark/data/titanic.csv). Upload this file to a container created in the Azure Data Lake Storage (ADLS) Gen 2 storage account. This Python code snippet shows interactive data wrangling with an Azure Machine Learning Managed (Automatic) Spark compute, user identity passthrough, and an input/output data URI, in format `abfss://<FILE_SYSTEM_NAME>@<STORAGE_ACCOUNT_NAME>.dfs.core.windows.net/<PATH_TO_DATA>`: ++```python +import pyspark.pandas as pd +from pyspark.ml.feature import Imputer ++df = pd.read_csv( + "abfss://<FILE_SYSTEM_NAME>@<STORAGE_ACCOUNT_NAME>.dfs.core.windows.net/data/titanic.csv", + index_col="PassengerId", +) +imputer = Imputer(inputCols=["Age"], outputCol="Age").setStrategy( + "mean" +) # Replace missing values in Age column with the mean value +df.fillna( + value={"Cabin": "None"}, inplace=True +) # Fill Cabin column with value "None" if missing +df.dropna(inplace=True) # Drop the rows which still have any missing value +df.to_csv( + "abfss://<FILE_SYSTEM_NAME>@<STORAGE_ACCOUNT_NAME>.dfs.core.windows.net/data/wrangled", + index_col="PassengerId", +) +``` ++> [!NOTE] +> Only the Spark runtime version 3.2 supports `pyspark.pandas`, used in this Python code sample. 
+++## Next steps +- [Apache Spark in Azure Machine Learning (preview)](./apache-spark-azure-ml-concepts.md) +- [Attach and manage a Synapse Spark pool in Azure Machine Learning (preview)](./how-to-manage-synapse-spark-pool.md) +- [Interactive Data Wrangling with Apache Spark in Azure Machine Learning (preview)](./interactive-data-wrangling-with-apache-spark-azure-ml.md) +- [Submit Spark jobs in Azure Machine Learning (preview)](./how-to-submit-spark-jobs.md) +- [Code samples for Spark jobs using Azure Machine Learning CLI](https://github.com/Azure/azureml-examples/tree/main/cli/jobs/spark) +- [Code samples for Spark jobs using Azure Machine Learning Python SDK](https://github.com/Azure/azureml-examples/tree/main/sdk/python/jobs/spark) |
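The quickstart sample in this row constructs an `Imputer` but never applies it; here is a minimal sketch of how it could be fit and applied by converting between pandas-on-Spark and Spark DataFrames. It assumes the `Age` column is numeric, Spark 3.2 or later, and the same placeholder storage URI as the sample.

```python
# Minimal sketch (not from the quickstart): fitting and applying the Imputer
# from the sample above. Assumes 'Age' is numeric and Spark 3.2+.
import pyspark.pandas as pd
from pyspark.ml.feature import Imputer

df = pd.read_csv(
    "abfss://<FILE_SYSTEM_NAME>@<STORAGE_ACCOUNT_NAME>.dfs.core.windows.net/data/titanic.csv",
    index_col="PassengerId",
)

sdf = df.to_spark(index_col="PassengerId")           # pandas-on-Spark -> Spark DataFrame
imputer = Imputer(inputCols=["Age"], outputCols=["Age"], strategy="mean")
sdf = imputer.fit(sdf).transform(sdf)                # replace missing Age with the mean
df = sdf.pandas_api(index_col="PassengerId")         # back to pandas-on-Spark

df.fillna(value={"Cabin": "None"}, inplace=True)     # as in the original sample
df.dropna(inplace=True)
```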
machine-learning | How To Secure Training Vnet | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-secure-training-vnet.md | For more information on using Azure Databricks in a virtual network, see [Deploy ## Compute instance/cluster with no public IP > [!IMPORTANT]-> If you have been using compute instances or compute clusters configured for no public IP without opting-in to the preview, you will need to delete and recreate them after January 20 (when the feature is generally available). +> If you have been using compute instances or compute clusters configured for no public IP without opting-in to the preview, you will need to delete and recreate them after January 20, 2023 (when the feature is generally available). > > If you were previously using the preview of no public IP, you may also need to modify what traffic you allow inbound and outbound, as the requirements have changed for general availability: > * Outbound requirements - Two additional outbound, which are only used for the management of compute instances and clusters. The destination of these service tags are owned by Microsoft: |
marketplace | Commercial Marketplace Lead Management Instructions Marketo | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/partner-center-portal/commercial-marketplace-lead-management-instructions-marketo.md | Title: Lead management in Marketo - Microsoft commercial marketplace description: Learn how to use a Marketo CRM system to manage leads from Microsoft AppSource and Azure Marketplace. -+ Last updated 06/08/2022 Last updated 06/08/2022 # Use Marketo to manage commercial marketplace leads > [!IMPORTANT]-> The marketo connector is not currently working due to a change in the Marketo platform. Use Leads from the Referrals workspace. +> The Marketo connector has been restored. Update your configurations to receive leads as shown in this article. This article describes how to set up your Marketo CRM system to process sales leads from your offers in Microsoft AppSource and Azure Marketplace. This article describes how to set up your Marketo CRM system to process sales le 1. Select **Design Studio**. -  + :::image type="content" source="./media/commercial-marketplace-lead-management-instructions-marketo/marketo-1.png" alt-text="Screenshot showing Marketo Design Studio."::: -1. Select **New Form**. +1. Select **New Form**. -  + :::image type="content" source="./media/commercial-marketplace-lead-management-instructions-marketo/marketo-2.png" alt-text="Screenshot showing Marketo Design Studio New Form."::: -1. Fill in the required fields in the **New Form** dialog box, and then select **Create**. +1. Fill in the required fields in the **New Form** dialog box, and then select **Create**. -  + :::image type="content" source="./media/commercial-marketplace-lead-management-instructions-marketo/marketo-3.png" alt-text="Screenshot showing Marketo Design Studio New Form creation."::: + +1. Ensure that the fields mappings are setup correctly. Here are the list of fields that the connector needs to be setup on the form. -1. On the **Field Details** page, select **Finish**. + > [!NOTE] + > The field with name "Lead Source" is expected to be configured in the form. It can be mapped to the **SourceSystemName** system field in Marketo or a custom field. -  + :::image type="content" source="./media/commercial-marketplace-lead-management-instructions-marketo/marketo-field-details.png" alt-text="Screenshot showing Marketo new form details."::: -1. Approve and close. +1. On the **Field Details** page, select **Finish**. ++ :::image type="content" source="./media/commercial-marketplace-lead-management-instructions-marketo/marketo-4.png" alt-text="Screenshot showing finishing the Marketo creation form."::: ++1. Approve and close. 1. On the **MarketplaceLeadBackend** tab, select **Embed Code**. -  + :::image type="content" source="./media/commercial-marketplace-lead-management-instructions-marketo/marketo-6.png" alt-text="Screenshot showing the Marketo Embed Code form."::: 1. Marketo Embed Code displays code similar to the following example. - ```html - <form id="mktoForm_1179"></form> - <script>MktoForms2.loadForm("("//app-ys12.marketo.com", "123-PQR-789", 1179);</script> - ``` + ```html + <form id="mktoForm_1179"></form> + <script>MktoForms2.loadForm("("//app-ys12.marketo.com", "123-PQR-789", 1179);</script> + ``` 1. Copy the values for the following fields shown in the Embed Code form. You'll use these values to configure your offer to receive leads in the next step. Use the next example as a guide for getting the IDs you need from the Marketo Embed Code example. 
- - Server ID = **ys12** - - Munchkin ID = **123-PQR-789** - - Form ID = **1179** + - Munchkin ID = **123-PQR-789** + - Form ID = **1179** ++ The following is another way to figure out these values: ++ - Get your subscription's Munchkin ID by going to your **Admin** > **Munchkin** menu in the **Munchkin Account ID** field, or from the first part of your Marketo REST API host subdomain: `https://{Munchkin ID}.mktorest.com`. + - Form ID is the ID of the Embed Code form you created in step 7 to route leads from the marketplace. ++## Obtain a API access from your Marketo Admin - Another way to figure out these values: +1. See this [Marketo article on getting API access](https://aka.ms/marketo-api), specifically a **ClientID** and **Client Secret** needed for the new Marketo configuration. Follow the step-by-step guide to create an API-only user and a Launchpoint connection for the Partner Center lead management service. - - Server ID is found in the URL of your Marketo instance, for example, `serverID.marketo.com`. - - Get your subscription's Munchkin ID by going to your **Admin** > **Munchkin** menu in the **Munchkin Account ID** field, or from the first part of your Marketo REST API host subdomain: `https://{Munchkin ID}.mktorest.com`. - - Form ID is the ID of the Embed Code form you created in step 7 to route leads from the marketplace. +1. Ensure that the **Custom service created** indicates Partner Center as shown below. ++ :::image type="content" source="./media/commercial-marketplace-lead-management-instructions-marketo/marketo-new-service.png" alt-text="Screenshot showing Marketo API new service form"::: ++1. Once you click the View details link for the new service created, you can copy the **Client ID** and **Client secret** for use in the Partner center connector configuration. ++ :::image type="content" source="./media/commercial-marketplace-lead-management-instructions-marketo/marketo-admin-installed-services.png" alt-text="Screenshot showing the Marketo admin installed services."::: ++ :::image type="content" source="./media/commercial-marketplace-lead-management-instructions-marketo/marketo-api-access-details.png" alt-text="Screenshot showing the Marketo API access details."::: ## Configure your offer to send leads to Marketo When you're ready to configure the lead management information for your offer in the publishing portal, follow these steps. -1. Sign in to [Partner Center](https://go.microsoft.com/fwlink/?linkid=2165290). +1. Sign in to [Partner Center](https://partner.microsoft.com/dashboard/home) and select **Marketplace offers**. 1. Select your offer, and go to the **Offer setup** tab. 1. Under the **Customer leads** section, select **Connect**. - :::image type="content" source="./media/commercial-marketplace-lead-management-instructions-marketo/customer-leads.png" alt-text="Customer leads"::: + :::image type="content" source="./media/commercial-marketplace-lead-management-instructions-marketo/customer-leads.png" alt-text="Screenshot showing the Partner Center customer leads page."::: 1. On the **Connection details** pop-up window, select **Marketo** for the **Lead destination**. -  --1. Provide the **Server ID**, **Munchkin account ID**, and **Form ID**. 
+ :::image type="content" source="./media/commercial-marketplace-lead-management-instructions-marketo/choose-lead-destination.png" alt-text="Screenshot showing the Partner Center customer lead destination."::: - > [!NOTE] - > You must finish configuring the rest of the offer and publish it before you can receive leads for the offer. +1. Provide the **Munchkin ID**, **Form ID**, **Client ID** and **Client Secret** fields. -1. Under **Contact email**, enter email addresses for people in your company who should receive email notifications when a new lead is received. You can provide multiple email addresses by separating them with a semicolon. + > [!NOTE] + > You must finish configuring the rest of the offer and publish it before you can receive leads for the offer. 1. Select **OK**. To make sure you've successfully connected to a lead destination, select **Validate**. If successful, you'll have a test lead in the lead destination. -  + :::image type="content" source="./media/commercial-marketplace-lead-management-instructions-marketo/marketo-connection-details.png" alt-text="Screenshot showing the Partner Center connection details."::: |
migrate | Tutorial Discover Vmware | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-discover-vmware.md | Before you begin this tutorial, check that you have these prerequisites in place Requirement | Details | -**vCenter Server/ESXi host** | You need a server running vCenter Server version 6.7, 6.5, 6.0, or 5.5.<br /><br /> Servers must be hosted on an ESXi host running version 5.5 or later.<br /><br /> On the vCenter Server, allow inbound connections on TCP port 443 so that the appliance can collect configuration and performance metadata.<br /><br /> The appliance connects to vCenter Server on port 443 by default. If the server running vCenter Server listens on a different port, you can modify the port when you provide the vCenter Server details in the appliance configuration manager.<br /><br /> On the ESXi hosts, make sure that inbound access is allowed on TCP port 443 for discovery of installed applications and for agentless dependency analysis on servers. +**vCenter Server/ESXi host** | You need a server running vCenter Server version 7.0, 6.7, 6.5, 6.0, or 5.5.<br /><br /> Servers must be hosted on an ESXi host running version 5.5 or later.<br /><br /> On the vCenter Server, allow inbound connections on TCP port 443 so that the appliance can collect configuration and performance metadata.<br /><br /> The appliance connects to vCenter Server on port 443 by default. If the server running vCenter Server listens on a different port, you can modify the port when you provide the vCenter Server details in the appliance configuration manager.<br /><br /> On the ESXi hosts, make sure that inbound access is allowed on TCP port 443 for discovery of installed applications and for agentless dependency analysis on servers. **Azure Migrate appliance** | vCenter Server must have these resources to allocate to a server that hosts the Azure Migrate appliance:<br /><br /> - 32 GB of RAM, 8 vCPUs, and approximately 80 GB of disk storage.<br /><br /> - An external virtual switch and internet access on the appliance server, directly or via a proxy. **Servers** | All Windows and Linux OS versions are supported for discovery of configuration and performance metadata. <br /><br /> For application discovery on servers, all Windows and Linux OS versions are supported. Check the [OS versions supported for agentless dependency analysis](migrate-support-matrix-vmware.md#dependency-analysis-requirements-agentless).<br /><br /> For discovery of installed applications and for agentless dependency analysis, VMware Tools (version 10.2.1 or later) must be installed and running on servers. Windows servers must have PowerShell version 2.0 or later installed.<br /><br /> To discover SQL Server instances and databases, check [supported SQL Server and Windows OS versions and editions](migrate-support-matrix-vmware.md#sql-server-instance-and-database-discovery-requirements) and Windows authentication mechanisms.<br /><br /> To discover ASP.NET web apps running on IIS web server, check [supported Windows OS and IIS versions](migrate-support-matrix-vmware.md#web-apps-discovery-requirements).<br /><br /> To discover Java web apps running on Apache Tomcat web server, check [supported Linux OS and Tomcat versions](migrate-support-matrix-vmware.md#web-apps-discovery-requirements). |
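Since this row's prerequisites hinge on TCP port 443 being reachable on vCenter Server and the ESXi hosts, here is a minimal Python sketch for a quick connectivity check from the machine that will host the Azure Migrate appliance; the host names are placeholders.

```python
# Minimal sketch (placeholder host names): verifying TCP 443 reachability.
import socket

hosts = ["vcenter.contoso.local", "esxi01.contoso.local"]

for host in hosts:
    try:
        with socket.create_connection((host, 443), timeout=5):
            print(f"{host}:443 reachable")
    except OSError as err:
        print(f"{host}:443 NOT reachable: {err}")
```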
mysql | Concepts Service Tiers Storage | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/concepts-service-tiers-storage.md | Remember that storage once auto-scaled up, cannot be scaled down. Azure Database for MySQL - Flexible Server supports the provisioning of additional IOPS. This feature enables you to provision additional IOPS above the complimentary IOPS limit. Using this feature, you can increase or decrease the number of IOPS provisioned based on your workload requirements at any time. -The minimum IOPS is 360 across all compute sizes and the maximum IOPS is determined by the selected compute size. To learn more about the maximum IOPS per compute size refer to the [table].(#compute-tiers-size-and-server-types) +The minimum IOPS is 360 across all compute sizes and the maximum IOPS is determined by the selected compute size. To learn more about the maximum IOPS per compute size refer to the [table](#service-tiers-size-and-server-types). The maximum IOPS is dependent on the maximum available IOPS per compute size. Refer to the column *Max uncached disk throughput: IOPS/MBps* in the [B-series](../../virtual-machines/sizes-b-series-burstable.md), [Ddsv4-series](../../virtual-machines/ddv4-ddsv4-series.md), and [Edsv4-series](../../virtual-machines/edv4-edsv4-series.md)/ [Edsv5-series](../../virtual-machines/edv5-edsv5-series.md) documentation. If you would like to optimize server cost, you can consider following tips: ## Next steps - Learn how to [create a MySQL server in the portal](quickstart-create-server-portal.md).-- Learn about [service limitations](concepts-limitations.md).+- Learn about [service limitations](concepts-limitations.md). |
openshift | Support Lifecycle | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/support-lifecycle.md | Azure Red Hat OpenShift is built from specific releases of OCP. This article cov ## Red Hat OpenShift versions -Red Hat OpenShift Container Platform uses semantic versioning. Semantic versioning uses different levels of version numbers to specify different levels of versioning. The following table illustrates the different parts of a semantic version number, in this case using the example version number 4.4.11. +Red Hat OpenShift Container Platform uses semantic versioning. Semantic versioning uses different levels of version numbers to specify different levels of versioning. The following table illustrates the different parts of a semantic version number, in this case using the example version number 4.10.3. |Major version (x)|Minor version (y)|Patch (z)| |-|-|-|-|4|4|11| +|4|10|3| Each number in the version indicates general compatibility with the previous version: Each number in the version indicates general compatibility with the previous ver * **Minor version**: Released approximately every three months. Minor version upgrades can include feature additions, enhancements, deprecations, removals, bug fixes, security enhancements, and other improvements. * **Patches**: Typically released each week, or as needed. Patch version upgrades can include bug fixes, security enhancements, and other improvements. -Customers should aim to run the latest minor release of the major version they're running. For example, if your production cluster is on 4.4, and 4.5 is the latest generally available minor version for the 4 series, you should upgrade to 4.5 as soon as you can. +Customers should aim to run the latest minor release of the major version they're running. For example, if your production cluster is on 4.9, and 4.10 is the latest generally available minor version for the 4 series, you should upgrade to 4.10 as soon as you can. ### Upgrade channels -Upgrade channels are tied to a minor version of Red Hat OpenShift Container Platform (OCP). For instance, OCP 4.4 upgrade channels will never include an upgrade to a 4.5 release. Upgrade channels control only release selection and don't impact the version of the cluster. +Upgrade channels are tied to a minor version of Red Hat OpenShift Container Platform (OCP). For instance, OCP 4.9 upgrade channels will never include an upgrade to a 4.10 release. Upgrade channels control only release selection and don't impact the version of the cluster. -Azure Red Hat OpenShift 4 supports stable channels only. For example: stable-4.4. +Azure Red Hat OpenShift 4 supports stable channels only. For example: stable-4.9. -You can use the stable-4.5 channel to upgrade from a previous minor version of Azure Red Hat OpenShift. Clusters upgraded using fast, prerelease, and candidate channels won't be supported. +You can use the stable-4.10 channel to upgrade from a previous minor version of Azure Red Hat OpenShift. Clusters upgraded using fast, prerelease, and candidate channels won't be supported. If you change to a channel that doesn't include your current release, an alert displays and no updates can be recommended. However, you can safely change back to your original channel at any point. If available in a stable upgrade channel, newer minor releases (N+1, N+2) availa Critical patch updates are applied to clusters automatically by Azure Red Hat OpenShift Site Reliability Engineers (SRE). 
Customers that wish to install patch updates in advance are free to do so. -For example, if Azure Red Hat OpenShift introduces 4.5.z today, support is provided for the following versions: +For example, if Azure Red Hat OpenShift introduces 4.10.z today, support is provided for the following versions: |New minor version|Supported version list| |-|-|-|4.5.z|4.5.z, 4.4.z| +|4.10.z|4.10.z, 4.9.z| -".z" is representative of patch versions. If available in a stable upgrade channel, customers may also upgrade to 4.6.z. +".z" is representative of patch versions. If available in a stable upgrade channel, customers may also upgrade to 4.9.z. -When a new minor version is introduced, the oldest minor version is deprecated and removed. For example, say the current supported version list is 4.5.z and 4.4.z. When Azure Red Hat OpenShift releases 4.6.z, the 4.4.z release will be removed and will be out of support within 30 days. +When a new minor version is introduced, the oldest minor version is deprecated and removed. For example, say the current supported version list is 4.10.z and 4.9.z. When Azure Red Hat OpenShift releases 4.11.z, the 4.9.z release will be removed and will be out of support within 30 days. > [!NOTE] > Please note that if customers are running an unsupported Red Hat OpenShift version, they may be asked to upgrade when requesting support for the cluster. Clusters running unsupported Red Hat OpenShift releases are not covered by the Azure Red Hat OpenShift SLA. Specific patch releases may be skipped, or rollout may be accelerated depending ## Azure portal and CLI versions -When you deploy an Azure Red Hat OpenShift cluster in the portal or with the Azure CLI, the cluster is defaulted to the latest (N) minor version and latest critical patch. For example, if Azure Red Hat OpenShift supports 4.5.z and 4.4.z, the default version for new installations is 4.5.z. Customers that wish to use the latest upstream OCP minor version (N+1, N+2) can upgrade their cluster at any time to any release available in the stable upgrade channels. +When you deploy an Azure Red Hat OpenShift cluster in the portal or with the Azure CLI, the cluster is defaulted to the latest (N) minor version and latest critical patch. For example, if Azure Red Hat OpenShift supports 4.10.z and 4.9.z, the default version for new installations is 4.10.z. Customers that wish to use the latest upstream OCP minor version (N+1, N+2) can upgrade their cluster at any time to any release available in the stable upgrade channels. ## Azure Red Hat OpenShift release calendar See the following guide for the [past Red Hat OpenShift Container Platform (upst If you're on the N-2 version or older, it means you are outside of support and will be asked to upgrade to continue receiving support. When your upgrade from version N-2 to N-1 succeeds, you're back within support. Upgrading from version N-3 version or older to a supported version can be challenging, and in some cases not possible. We recommend you keep your cluster on the latest OpenShift version to avoid potential upgrade issues. For example:-* If the oldest supported Azure Red Hat OpenShift version is 4.4.z and you are on 4.3.z or older, you are outside of support. -* When the upgrade from 4.3.z to 4.4.z or higher succeeds, you're back within our support policies. +* If the oldest supported Azure Red Hat OpenShift version is 4.9.z and you are on 4.8.z or older, you are outside of support. 
+* When the upgrade from 4.8.z to 4.9.z or higher succeeds, you're back within our support policies. Reverting your cluster to a previous version, or a rollback, isn't supported. Only upgrading to a newer version is supported. |
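To make the support window in this entry concrete, here is a small illustrative Python sketch of the N / N-1 rule it describes. This is not part of the Azure Red Hat OpenShift tooling; the function name and inputs are hypothetical.

```python
# Illustrative only: encodes the N / N-1 support rule described in the entry above.
# The version string and latest GA minor version are hypothetical inputs.

def support_status(cluster_version: str, latest_minor: int) -> str:
    """Classify a 4.y.z cluster version against the latest GA minor (N)."""
    major, minor, _patch = (int(part) for part in cluster_version.split("."))
    if major != 4:
        return "unsupported major version"
    if minor >= latest_minor - 1:
        # N or N-1 (and anything newer available in a stable channel)
        return "supported"
    if minor == latest_minor - 2:
        # N-2: upgrading to N-1 brings the cluster back into support
        return f"out of support; upgrade to 4.{latest_minor - 1} to regain support"
    return "out of support; upgrading from N-3 or older may be challenging"

print(support_status("4.9.3", latest_minor=10))   # supported
print(support_status("4.8.11", latest_minor=10))  # out of support; upgrade to 4.9 ...
```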
postgresql | Concepts Monitoring | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-monitoring.md | Introducing Enhanced Metrics for Azure Database for PostgreSQL Flexible Server t #### List of enhanced metrics -##### `Activity` +##### Activity -|Display Name |Metric ID |Unit |Description |Dimension |Default enabled | -|--|-|-|||| +|Display Name |Metric ID |Unit |Description |Dimension |Default enabled | +||-|-|||| |**Sessions By State** (Preview) |sessions_by_state |Count |Overall state of the backends |State |No | |**Sessions By WaitEventType** (Preview)|sessions_by_wait_event_type |Count |Sessions by the type of event for which the backend is waiting |Wait Event Type|No | |**Oldest Backend** (Preview) |oldest_backend_time_sec |Seconds|The age in seconds of the oldest backend (irrespective of the state) |N/a |No | Introducing Enhanced Metrics for Azure Database for PostgreSQL Flexible Server t |**Oldest xmin** (Preview) |oldest_backend_xmin |Count |The actual value of the oldest xmin. If xmin is not increasing, it indicates there are some long-running transactions that can potentially hold dead tuples from being removed |N/a |No | |**Oldest xmin Age** (Preview) |oldest_backend_xmin_age |Count |Age in units of the oldest xmin. It indicates how many transactions have passed since the oldest xmin |N/a |No | --##### `Database` +##### Database |Display Name |Metric ID |Unit |Description |Dimension |Default enabled | |-|-|--|-|-|| Introducing Enhanced Metrics for Azure Database for PostgreSQL Flexible Server t |**Tuples Returned** (Preview) |tup_returned |Count|Number of rows returned by queries in this database |Database Name|No | |**Tuples Updated** (Preview) |tup_updated |Count|Number of rows updated by queries in this database |Database Name|No | -##### `Logical Replication` +##### Logical Replication |Display Name |Metric ID |Unit |Description |Dimension|Default enabled | |-|-|--|||| |**Max Logical Replication Lag** (Preview)|logical_replication_delay_in_bytes|Bytes|Maximum lag across all logical replication slots|N/a |Yes | -##### `Replication` +##### Replication |Display Name |Metric ID |Unit |Description |Dimension|Default enabled | |--|-|-|--||| Introducing Enhanced Metrics for Azure Database for PostgreSQL Flexible Server t |**Read Replica Lag** (Preview) |physical_replication_delay_in_seconds|Seconds|Read Replica lag in seconds |N/a |Yes | -##### `Saturation` +##### Saturation |Display Name |Metric ID |Unit |Description |Dimension|Default enabled | ||-|-|--||| |**Disk Bandwidth Consumed Percentage**|disk_bandwidth_consumed_percentage|Percent|Percentage of data disk bandwidth consumed per minute|N/a |Yes | |**Disk IOPS Consumed Percentage** |disk_iops_consumed_percentage |Percent|Percentage of data disk I/Os consumed per minute |N/a |Yes | -##### `Traffic` +##### Traffic |Display Name|Metric ID |Unit |Description |Dimension|Default enabled | |-||--|||| Introducing Enhanced Metrics for Azure Database for PostgreSQL Flexible Server t ^ **Max Connections** here represents the configured value for the _max_connections_ server parameter, and this metric is polled every 30 minutes. +## Autovacuum metrics ++Autovacuum metrics can be used to monitor and tune autovacuum performance for Azure Database for PostgreSQL Flexible Server. Each metric is emitted at a **30-minute** frequency, and has up to **93 days** of retention. 
Customers can configure alerts on the metrics and can also access the new metrics dimensions to split and filter the metrics data on database name. ++#### Enabling Autovacuum metrics +* Autovacuum metrics are disabled by default +* To enable these metrics, please turn ON the server parameter `metrics.autovacuum_diagnostics`. + * This parameter is dynamic, so changing it doesn't require an instance restart. ++#### Autovacuum metrics ++|Display Name |Metric ID |Unit |Description |Dimension |Default enabled| +|-|-|--|--||| +|**Analyze Counter User Tables** (Preview) |analyze_count_user_tables |Count|Number of times user only tables have been manually analyzed in this database |DatabaseName|No | +|**AutoAnalyze Counter User Tables** (Preview) |autoanalyze_count_user_tables |Count|Number of times user only tables have been analyzed by the autovacuum daemon in this database |DatabaseName|No | +|**AutoVacuum Counter User Tables** (Preview) |autovacuum_count_user_tables |Count|Number of times user only tables have been vacuumed by the autovacuum daemon in this database |DatabaseName|No | +|**Estimated Dead Rows User Tables** (Preview) |n_dead_tup_user_tables |Count|Estimated number of dead rows for user only tables in this database |DatabaseName|No | +|**Estimated Live Rows User Tables** (Preview) |n_live_tup_user_tables |Count|Estimated number of live rows for user only tables in this database |DatabaseName|No | +|**Estimated Modifications User Tables** (Preview)|n_mod_since_analyze_user_tables|Count|Estimated number of rows modified since user only tables were last analyzed |DatabaseName|No | +|**User Tables Analyzed** (Preview) |tables_analyzed_user_tables |Count|Number of user only tables that have been analyzed in this database |DatabaseName|No | +|**User Tables AutoAnalyzed** (Preview) |tables_autoanalyzed_user_tables|Count|Number of user only tables that have been analyzed by the autovacuum daemon in this database |DatabaseName|No | +|**User Tables AutoVacuumed** (Preview) |tables_autovacuumed_user_tables|Count|Number of user only tables that have been vacuumed by the autovacuum daemon in this database |DatabaseName|No | +|**User Tables Counter** (Preview) |tables_counter_user_tables |Count|Number of user only tables in this database |DatabaseName|No | +|**User Tables Vacuumed** (Preview) |tables_vacuumed_user_tables |Count|Number of user only tables that have been vacuumed in this database |DatabaseName|No | +|**Vacuum Counter User Tables** (Preview) |vacuum_count_user_tables |Count|Number of times user only tables have been manually vacuumed in this database (not counting VACUUM FULL)|DatabaseName|No | ++ -#### Applying Filters and Splitting on enhanced metrics +#### Applying filters and splitting on metrics with dimensions In the above list of metrics, some of the metrics have dimensions, such as `database name` and `state`. [Filtering](../../azure-monitor/essentials/metrics-charts.md#filters) and [Splitting](../../azure-monitor/essentials/metrics-charts.md#apply-splitting) are allowed for the metrics that have dimensions. These features show how various metric segments ("dimension values") affect the overall value of the metric. You can use them to identify possible outliers. |
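The metrics listed in this entry can also be read programmatically. The following is a minimal sketch using the `azure-monitor-query` Python package, assuming a placeholder flexible-server resource ID; the `State eq '*'` filter is what splits the result per dimension value. Verify the exact client arguments against the SDK reference before relying on this.

```python
# Sketch only: assumes the azure-monitor-query and azure-identity packages and a
# hypothetical flexible-server resource ID; adjust names and values before use.
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import MetricsQueryClient, MetricAggregationType

resource_id = (
    "/subscriptions/<subscription-id>/resourceGroups/<resource-group>"
    "/providers/Microsoft.DBforPostgreSQL/flexibleServers/<server-name>"
)

client = MetricsQueryClient(DefaultAzureCredential())

# sessions_by_state is one of the Activity metrics listed above; filtering on its
# State dimension returns one time series per backend state.
response = client.query_resource(
    resource_id,
    metric_names=["sessions_by_state"],
    timespan=timedelta(hours=1),
    granularity=timedelta(minutes=5),
    aggregations=[MetricAggregationType.AVERAGE],
    filter="State eq '*'",
)

for metric in response.metrics:
    for series in metric.timeseries:
        # Dimension keys are typically returned lower-cased.
        state = series.metadata_values.get("state")
        latest = series.data[-1] if series.data else None
        print(metric.name, state, getattr(latest, "average", None))
```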
purview | Microsoft Purview Connector Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/microsoft-purview-connector-overview.md | The table below shows the supported capabilities for each data source. Select th || [Hive Metastore Database](register-scan-hive-metastore-source.md) | [Yes](register-scan-hive-metastore-source.md#register) | No | [Yes*](register-scan-hive-metastore-source.md#lineage) | No| No | || [MongoDB](register-scan-mongodb.md) | [Yes](register-scan-mongodb.md#register) | No | No | No | No | || [MySQL](register-scan-mysql.md) | [Yes](register-scan-mysql.md#register) | No | [Yes](register-scan-mysql.md#lineage) | No | No |-|| [Oracle](register-scan-oracle-source.md) | [Yes](register-scan-oracle-source.md#register)| [Yes](register-scan-oracle-source.md#scan) | [Yes*](register-scan-oracle-source.md#lineage) | No| No | +|| [Oracle](register-scan-oracle-source.md) | [Yes](register-scan-oracle-source.md#register)| [Yes](register-scan-oracle-source.md#scan) | [Yes*](register-scan-oracle-source.md#lineage) | No| No | || [PostgreSQL](register-scan-postgresql.md) | [Yes](register-scan-postgresql.md#register) | No | [Yes](register-scan-postgresql.md#lineage) | No | No | || [SAP Business Warehouse](register-scan-sap-bw.md) | [Yes](register-scan-sap-bw.md#register) | No | No | No | No | || [SAP HANA](register-scan-sap-hana.md) | [Yes](register-scan-sap-hana.md#register) | No | No | No | No |-|| [Snowflake](register-scan-snowflake.md) | [Yes](register-scan-snowflake.md#register) | No | [Yes](register-scan-snowflake.md#lineage) | No | No | +|| [Snowflake](register-scan-snowflake.md) | [Yes](register-scan-snowflake.md#register) | [Yes](register-scan-snowflake.md#scan) | [Yes](register-scan-snowflake.md#lineage) | No | No | || [SQL Server](register-scan-on-premises-sql-server.md)| [Yes](register-scan-on-premises-sql-server.md#register) |[Yes](register-scan-on-premises-sql-server.md#scan) | No* | No| No | || [SQL Server on Azure-Arc](register-scan-azure-arc-enabled-sql-server.md)| [Yes](register-scan-azure-arc-enabled-sql-server.md#register) | [Yes](register-scan-azure-arc-enabled-sql-server.md#scan) | No* |[Yes](register-scan-azure-arc-enabled-sql-server.md#access-policy) | No | || [Teradata](register-scan-teradata-source.md)| [Yes](register-scan-teradata-source.md#register)| [Yes](register-scan-teradata-source.md#scan)| [Yes*](register-scan-teradata-source.md#lineage) | No| No | |
purview | Register Scan Snowflake | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-snowflake.md | This article outlines how to register Snowflake, and how to authenticate and int |**Metadata Extraction**| **Full Scan** |**Incremental Scan**|**Scoped Scan**|**Classification**|**Access Policy**|**Lineage**|**Data Sharing**| |||||||||-| [Yes](#register)| [Yes](#scan)| No | [Yes](#scan) | No | No| [Yes](#lineage) | No| +| [Yes](#register)| [Yes](#scan)| No | [Yes](#scan) | [Yes](#scan) | No| [Yes](#lineage) | No| When scanning Snowflake source, Microsoft Purview supports: To create and run a new scan, follow these steps: 1. Select **Continue**. +1. Select a **scan rule set** for classification. You can choose between the system default, existing custom rule sets, or [create a new rule set](create-a-scan-rule-set.md) inline. Check the [Classification](apply-classifications.md) article to learn more. + 1. Choose your **scan trigger**. You can set up a schedule or run the scan once. 1. Review your scan and select **Save and Run**. |
reliability | Migrate App Service Environment | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/migrate-app-service-environment.md | This guide describes how to migrate an App Service Environment from non-availabi Azure App Service Environment can be deployed across [availability zones (AZ)](../reliability/availability-zones-overview.md) to help you achieve resiliency and reliability for your business-critical workloads. This architecture is also known as zone redundancy. -When you configure to be zone redundant, the platform automatically spreads the instances of the Azure App Service plan across all three zones in the selected region. This means that the minimum App Service Plan instance count will always be three. If you specify a capacity larger than three, and the number of instances is divisible by three, the instances are spread evenly. Otherwise, instance counts beyond 3*N are spread across the remaining one or two zones. +When you configure to be zone redundant, the platform automatically spreads the instances of the Azure App Service plan across three zones in the selected region. This means that the minimum App Service Plan instance count will always be three. If you specify a capacity larger than three, and the number of instances is divisible by three, the instances are spread evenly. Otherwise, instance counts beyond 3*N are spread across the remaining one or two zones. ## Prerequisites If you want your App Service Environment to use availability zones, redeploy you Traffic is routed to all of your available App Service instances. In the case when a zone goes down, the App Service platform will detect lost instances and automatically attempt to find new replacement instances and spread traffic as needed. If you have [autoscale](../app-service/manage-scale-up.md) configured, and if it decides more instances are needed, autoscale will also issue a request to App Service to add more instances. Note that [autoscale behavior is independent of App Service platform behavior](../azure-monitor/autoscale/autoscale-overview.md) and that your autoscale instance count specification doesn't need to be a multiple of three. It's also important to note there's no guarantee that requests for additional instances in a zone-down scenario will succeed since back filling lost instances occurs on a best-effort basis. The recommended solution is to create and configure your App Service plans to account for losing a zone as described in the next section. -Applications that are deployed in an App Service Environment that has availability zones enabled will continue to run and serve traffic even if other zones in the same region suffer an outage. However it's possible that non-runtime behaviors including App Service plan scaling, application creation, application configuration, and application publishing may still be impacted from an outage in other Availability Zones. Zone redundancy for App Service Environments only ensures continued uptime for deployed applications. +Applications that are deployed in an App Service Environment that has availability zones enabled will continue to run and serve traffic if a single zone becomes unavailable. However it's possible that non-runtime behaviors including App Service plan scaling, application creation, application configuration, and application publishing may still be impacted from an outage in other Availability Zones. Zone redundancy for App Service Environments only ensures continued uptime for deployed applications. 
When the App Service platform allocates instances to a zone redundant App Service plan, it uses [best effort zone balancing offered by the underlying Azure Virtual Machine Scale Sets](../virtual-machine-scale-sets/virtual-machine-scale-sets-use-availability-zones.md#zone-balancing). An App Service plan will be "balanced" if each zone has either the same number of VMs, or +/- one VM in all of the other zones used by the App Service plan. There's a minimum charge of nine App Service plan instances in a zone redundant ## Next steps > [!div class="nextstepaction"]-> [Azure services and regions that support availability zones](availability-zones-service-support.md) +> [Azure services and regions that support availability zones](availability-zones-service-support.md) |
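The zone-spreading rule described in this entry (instance counts divisible by three are spread evenly; any remainder lands in one or two zones) can be illustrated with a short sketch. This is purely explanatory and not an Azure API; the function is hypothetical.

```python
# Illustrative only: models the zone-spreading rule described above for a
# zone-redundant App Service plan (three zones, minimum of three instances).

def spread_instances(capacity: int, zones: int = 3) -> list[int]:
    if capacity < zones:
        raise ValueError("A zone-redundant plan needs at least three instances.")
    base, remainder = divmod(capacity, zones)
    # Each zone gets the same base count; the remainder (0, 1, or 2 instances)
    # goes to one or two zones, so zone counts differ by at most one VM.
    return [base + (1 if zone < remainder else 0) for zone in range(zones)]

print(spread_instances(9))   # [3, 3, 3] -> spread evenly
print(spread_instances(10))  # [4, 3, 3] -> one zone gets the extra instance
print(spread_instances(11))  # [4, 4, 3]
```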
remote-rendering | Configure Model Conversion | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/remote-rendering/how-tos/conversion/configure-model-conversion.md | The `none` mode has the least runtime overhead and also slightly better loading ### Physics parameters -* `generateCollisionMesh` - If you need support for [spatial queries](../../overview/features/spatial-queries.md) on a model, this option has to be enabled. In the worst case, the creation of a collision mesh can double the conversion time. Models with collision meshes take longer to load and when using a `dynamic` scene graph, they also have a higher runtime performance overhead. For overall optimal performance, you should disable this option on all models on which you don't need spatial queries. +* `generateCollisionMesh` - If you need support for [spatial queries](../../overview/features/spatial-queries.md) on a model, this option has to be enabled. Collision mesh generation does not add any extra conversion time and also does not increase the output file size. Furthermore, the loading time and runtime cost of a model with collision meshes is only insignificantly higher. +Accordingly this flag can be left to default (enabled) unless there are strong reasons to exclude a model from spatial queries. ### Unlit materials The `none` mode has the least runtime overhead and also slightly better loading ### Coordinate system overriding -* `axis` - To override coordinate system unit-vectors. Default values are `["+x", "+y", "+z"]`. In theory, the FBX format has a header where those vectors are defined and the conversion uses that information to transform the scene. The glTF format also defines a fixed coordinate system. In practice, some assets either have incorrect information in their header or were saved with a different coordinate system convention. This option allows you to override the coordinate system to compensate. For example: `"axis" : ["+x", "+z", "-y"]` will exchange the Z-axis and the Y-axis and keep coordinate system handedness by inverting the Y-axis direction. +* `axis` - To override coordinate system unit-vectors. Default values are `["+x", "+y", "+z"]`. In theory, the FBX format has a header where those vectors are defined and the conversion uses that information to transform the scene. The glTF format also defines a fixed coordinate system. In practice, some assets either have incorrect information in their header or were saved with a different coordinate system convention. This option allows you to override the coordinate system to compensate. For example: `"axis" : ["+x", "+z", "-y"]` will exchange the Z-axis and the Y-axis and keep coordinate system handed-ness by inverting the Y-axis direction. ### Node meta data The memory footprints of the formats are as follows: Assume you have a photogrammetry model, which has lighting baked into the textures. All that is needed to render the model are :::no-loc text="vertex"::: positions and texture coordinates. -By default the converter has to assume that you may want to use PBR materials on a model at some time, so it will generate `normal`, `tangent`, and `binormal` data for you. Consequently, the per vertex memory usage is `position` (12 bytes) + `texcoord0` (8 bytes) + `normal` (4 bytes) + `tangent` (4 bytes) + `binormal` (4 byte) = 32 bytes. Larger models of this type can easily have many millions of :::no-loc text="vertices"::: resulting in models that can take up multiple gigabytes of memory. 
Such large amounts of data will affect performance and you may even run out of memory. +By default the converter has to assume that you may want to use PBR materials on a model at some time, so it will generate `normal`, `tangent`, and `binormal` data for you. So, the per vertex memory usage is `position` (12 bytes) + `texcoord0` (8 bytes) + `normal` (4 bytes) + `tangent` (4 bytes) + `binormal` (4 byte) = 32 bytes. Larger models of this type can easily have many millions of :::no-loc text="vertices"::: resulting in models that can take up multiple gigabytes of memory. Such large amounts of data will affect performance and you may even run out of memory. Knowing that you never need dynamic lighting on the model, and knowing that all texture coordinates are in `[0; 1]` range, you can set `normal`, `tangent`, and `binormal` to `NONE` and `texcoord0` to half precision (`16_16_FLOAT`), resulting in only 16 bytes per :::no-loc text="vertex":::. Cutting the mesh data in half enables you to load larger models and potentially improves performance. The properties that do have an effect on point cloud conversion are: * `axis` - same meaning as for triangular meshes. Default values are `["+x", "+y", "+z"]`, however most point cloud data will be rotated compared to renderer's own coordinate system. To compensate, in most cases `["+x", "+z", "-y"]` fixes the rotation. * `gammaToLinearVertex` - similar to triangular meshes, this flag indicates whether point colors should be converted from gamma space to linear space. Default value for point cloud formats (E57, PLY, LAS, LAZ and XYZ) is true. -* `generateCollisionMesh` - similar to triangular meshes, this flag needs to be enabled to support [spatial queries](../../overview/features/spatial-queries.md). But unlike for triangular meshes, this flag doesn't incurs longer conversion times, larger output file sizes, or longer runtime loading times. So disabling this flag can't be considered an optimization. +* `generateCollisionMesh` - similar to triangular meshes, this flag needs to be enabled to support [spatial queries](../../overview/features/spatial-queries.md). ## Memory optimizations There are certain classes of use cases that qualify for specific optimizations. * These types of scenes tend to be static, meaning they don't need movable parts. Accordingly, the `sceneGraphMode` can be set to `static` or even `none`, which improves runtime performance. With `static` mode, the scene's root node can still be moved, rotated, and scaled, for example to dynamically switch between 1:1 scale (for first person view) and a table top view. -* When you need to move parts around, that typically also means that you need support for raycasts or other [spatial queries](../../overview/features/spatial-queries.md), so that you can pick those parts in the first place. On the other hand, if you don't intend to move something around, chances are high that you also don't need it to participate in spatial queries and therefore can turn off the `generateCollisionMesh` flag. This switch has significant impact on conversion times, loading times, and also runtime per-frame update costs. - * If the application doesn't use [cut planes](../../overview/features/cut-planes.md), the `opaqueMaterialDefaultSidedness` flag should be turned off. The performance gain is typically 20%-30%. Cut planes can still be used, but there won't be back-faces when looking into the inner parts of objects, which looks counter-intuitive. 
For more information, see [:::no-loc text="single sided"::: rendering](../../overview/features/single-sided-rendering.md). ### Use case: Photogrammetry models |
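Several of the conversion options discussed in this entry (`sceneGraphMode`, `generateCollisionMesh`, `opaqueMaterialDefaultSidedness`, `axis`, and the vertex-format overrides) are set in the model's conversion settings file. The sketch below writes such a file from Python; the `<modelName>.ConversionSettings.json` naming, the `SingleSided` value, and the nesting of the vertex overrides are assumptions to check against the conversion settings schema.

```python
# Sketch only: the file name convention and the nesting of the vertex section are
# assumptions; check the Azure Remote Rendering conversion settings schema.
import json

settings = {
    # Static scene graph: parts can't be moved individually, but the whole model
    # can still be transformed (see the photogrammetry/table-top use case above).
    "sceneGraphMode": "static",
    # Keep collision meshes so spatial queries keep working (default is enabled).
    "generateCollisionMesh": True,
    # Only worthwhile when cut planes aren't used; typical gain is 20%-30%.
    "opaqueMaterialDefaultSidedness": "SingleSided",
    # Compensate for a source asset authored with a different coordinate convention.
    "axis": ["+x", "+z", "-y"],
    # Slim vertex layout for baked-lighting photogrammetry content.
    "vertex": {
        "normal": "NONE",
        "tangent": "NONE",
        "binormal": "NONE",
        "texcoord0": "16_16_FLOAT",
    },
}

# Assumed convention: place the settings file next to the input model.
with open("MyModel.ConversionSettings.json", "w") as settings_file:
    json.dump(settings, settings_file, indent=2)
```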
remote-rendering | Spatial Queries | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/remote-rendering/overview/features/spatial-queries.md | -All spatial queries are evaluated on the server. Accordingly, they're asynchronous operations and results will arrive with a delay that depends on your network latency. Since every spatial query generates network traffic, be careful not to do too many at once. --## Collision meshes --For triangular meshes, spatial queries are powered by the [Havok Physics](https://www.havok.com/products/havok-physics) engine and require a dedicated collision mesh to be present. By default, [model conversion](../../how-tos/conversion/model-conversion.md) generates collision meshes. If you don't require spatial queries on a complex model, consider disabling collision mesh generation in the [conversion options](../../how-tos/conversion/configure-model-conversion.md), as it has an impact in multiple ways: --* [Model conversion](../../how-tos/conversion/model-conversion.md) will take considerably longer. -* Converted model file sizes are noticeably larger, impacting download speed. -* Runtime loading times are longer. -* Runtime CPU memory consumption is higher. -* There's a slight runtime performance overhead for every model instance. --For point clouds, none of these drawbacks apply. +All spatial queries are evaluated on the server. Accordingly, the queries are asynchronous operations and results will arrive with a delay that depends on your network latency. ## Ray casts A *ray cast* is a spatial query where the runtime checks which objects are intersected by a ray, starting at a given position and pointing into a certain direction. As an optimization, a maximum ray distance is also given, to not search for objects that are too far away.+Although doing hundreds of ray casts each frame is computationally feasible on the server side, each query also generates network traffic, so the number of queries per frame should be kept as low as possible. + ```cs async void CastRay(RenderingSession session) void CastRay(ApiHandle<RenderingSession> session) } ``` - There are three hit collection modes: * **`Closest`:** In this mode, only the closest hit will be reported. A Hit has the following properties: * **`HitPosition`:** The world space position where the ray intersected the object. * **`HitNormal`:** The world space surface normal of the mesh at the position of the intersection. * **`DistanceToHit`:** The distance from the ray starting position to the hit.-* **`HitType`:** What was hit by the ray: `TriangleFrontFace`, `TriangleBackFace` or `Point`. By default, [ARR renders double sided](single-sided-rendering.md#prerequisites) so the triangles the user sees are not necessarily front facing. If you want to differentiate between `TriangleFrontFace` and `TriangleBackFace` in your code, make sure your models are authored with correct face directions first. +* **`HitType`:** What was hit by the ray: `TriangleFrontFace`, `TriangleBackFace` or `Point`. By default, [ARR renders double sided](single-sided-rendering.md#prerequisites) so the triangles the user sees aren't necessarily front facing. If you want to differentiate between `TriangleFrontFace` and `TriangleBackFace` in your code, make sure your models are authored with correct face directions first. ## Spatial queries -A *spatial query* allows for the runtime to check which [MeshComponent](../../concepts/meshes.md#meshcomponent) are intersected by a world-space axis-aligned bounding box (AABB). 
This check is very performant as the individual check is performed based on each mesh part's bounds in the scene, not on an individual triangle basis. As an optimization, a maximum number of hit mesh components can be provided.\ -While such a query can be run manually on the client side, for large scenes it will be much faster for the server to compute this. +A *spatial query* allows for the runtime to check which [MeshComponents](../../concepts/meshes.md#meshcomponent) intersect with a user defined volume. This check is performant as the individual check is performed based on each mesh part's bounds in the scene, not on an individual triangle basis. As an optimization, a maximum number of hit mesh components can be provided.\ +While such a query can be run manually on the client side, for large scenes it can be orders of magnitude faster for the server to compute this. ++The following example code shows how to do queries against an axis aligned bounding box (AABB). Variants of the query also allow for oriented bounding box volumes (`SpatialQueryObbAsync`) and sphere volumes (`SpatialQuerySphereAsync`). ```cs async void QueryAABB(RenderingSession session) { // Query all mesh components in a 2x2x2m cube.- SpatialQuery query = new SpatialQuery(); + SpatialQueryAabb query = new SpatialQueryAabb(); query.Bounds = new Microsoft.Azure.RemoteRendering.Bounds(new Double3(-1, -1, -1), new Double3(1, 1, 1)); query.MaxResults = 100; - SpatialQueryResult result = await session.Connection.SpatialQueryAsync(query); + SpatialQueryResult result = await session.Connection.SpatialQueryAabbAsync(query); foreach (MeshComponent meshComponent in result.Overlaps) { Entity owner = meshComponent.Owner; async void QueryAABB(RenderingSession session) void QueryAABB(ApiHandle<RenderingSession> session) { // Query all mesh components in a 2x2x2m cube.- SpatialQuery query; + SpatialQueryAabb query; query.Bounds.Min = {-1, -1, -1}; query.Bounds.Max = {1, 1, 1}; query.MaxResults = 100; - session->Connection()->SpatialQueryAsync(query, [](Status status, ApiHandle<SpatialQueryResult> result) + session->Connection()->SpatialQueryAabbAsync(query, [](Status status, ApiHandle<SpatialQueryResult> result) { if (status == Status::OK) { void QueryAABB(ApiHandle<RenderingSession> session) ## API documentation -* [C# RenderingConnection.RayCastQueryAsync()](/dotnet/api/microsoft.azure.remoterendering.renderingconnection.raycastqueryasync) -* [C++ RenderingConnection::RayCastQueryAsync()](/cpp/api/remote-rendering/renderingconnection#raycastqueryasync) +* [C# RenderingConnection.RayCastQueryAabbAsync()](/dotnet/api/microsoft.azure.remoterendering.renderingconnection.raycastqueryaabbasync) +* [C# RenderingConnection.RayCastQueryObbAsync()](/dotnet/api/microsoft.azure.remoterendering.renderingconnection.raycastqueryobbasync) +* [C# RenderingConnection.RayCastQuerySphereAsync()](/dotnet/api/microsoft.azure.remoterendering.renderingconnection.raycastquerysphereasync) +* [C++ RenderingConnection::RayCastQueryAabbAsync()](/cpp/api/remote-rendering/renderingconnection#raycastqueryaabbasync) +* [C++ RenderingConnection::RayCastQueryObbAsync()](/cpp/api/remote-rendering/renderingconnection#raycastqueryobbasync) +* [C++ RenderingConnection::RayCastQuerySphereAsync()](/cpp/api/remote-rendering/renderingconnection#raycastquerysphereasync) ## Next steps |
search | Cognitive Search Custom Skill Form | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/cognitive-search-custom-skill-form.md | - Title: 'Form Recognizer custom skill (C#)'- -description: Learn how to create a Form Recognizer custom skill using C# and Visual Studio. ------ Previously updated : 12/01/2022---# Example: Create a Form Recognizer custom skill --In this Azure Cognitive Search skillset example, you'll learn how to create a Form Recognizer custom skill using C# and Visual Studio. Form Recognizer analyzes documents and extracts key/value pairs and table data. By wrapping Form Recognizer into the [custom skill interface](cognitive-search-custom-skill-interface.md), you can add this capability as a step in an end-to-end enrichment pipeline. The pipeline can then load the documents and do other transformations. --## Prerequisites --- [Visual Studio 2019](https://visualstudio.microsoft.com/downloads/) (any edition).-- At least five forms of the same type. You can use sample data provided with this guide.--## Create a Form Recognizer resource ---## Train your model --You'll need to train a Form Recognizer model with your input forms before you use this skill. Follow the [cURL quickstart](/azure/applied-ai-services/form-recognizer/how-to-guides/v2-1-sdk-rest-api) to learn how to train a model. You can use the sample forms provided in that quickstart, or you can use your own data. Once the model is trained, copy its ID value to a secure location. --## Set up the custom skill --This tutorial uses the [AnalyzeForm](https://github.com/Azure-Samples/azure-search-power-skills/tree/main/Vision/AnalyzeForm) project in the [Azure Search Power Skills](https://github.com/Azure-Samples/azure-search-power-skills) GitHub repository. Clone this repository to your local machine and navigate to **Vision/AnalyzeForm/** to access the project. Then open _AnalyzeForm.csproj_ in Visual Studio. This project creates an Azure Function resource that fulfills the [custom skill interface](cognitive-search-custom-skill-interface.md) and can be used for Azure Cognitive Search enrichment. It takes form documents as inputs, and it outputs (as text) the key/value pairs that you specify. --First, add project-level environment variables. Locate the **AnalyzeForm** project on the left pane, right-click it and select **Properties**. In the **Properties** window, click the **Debug** tab and then find the **Environment variables** field. Click **Add** to add the following variables: -* `FORMS_RECOGNIZER_ENDPOINT_URL` with the value set to your endpoint URL. -* `FORMS_RECOGNIZER_API_KEY` with the value set to your subscription key. -* `FORMS_RECOGNIZER_MODEL_ID` with the value set to the ID of the model you trained. -* `FORMS_RECOGNIZER_RETRY_DELAY` with the value set to 1000. This value is the time in milliseconds that the program will wait before retrying the query. -* `FORMS_RECOGNIZER_MAX_ATTEMPTS` with the value set to 100. This value is the number of times the program will query the service while attempting to get a successful response. --Next, open _AnalyzeForm.cs_ and find the `fieldMappings` variable, which references the *field-mappings.json* file. This file (and the variable that references it) defines the list of keys you want to extract from your forms and a custom label for each key. 
For example, a value of `{ "Address:", "address" }, { "Invoice For:", "recipient" }` means the script will only save the values for the detected `Address:` and `Invoice For:` fields, and it will label those values with `"address"` and `"recipient"`, respectively. --Finally, note the `contentType` variable. This script runs the given Form Recognizer model on remote documents that are referenced by URL, so the content type is `application/json`. If you want to analyze local files by including their byte streams in the HTTP requests, you'll need to change the `contentType` to the appropriate [MIME type](https://developer.mozilla.org/docs/Web/HTTP/Basics_of_HTTP/MIME_types/Complete_list_of_MIME_types) for your file. --## Test the function from Visual Studio --After you've edited your project, save it and set the **AnalyzeForm** project as the startup project in Visual Studio (if it isn't set already). Then press **F5** to run the function in your local environment. Use a REST service like [Postman](https://www.postman.com/) to call the function. --### HTTP request --You'll make the following request to call the function. --```HTTP -POST https://localhost:7071/api/analyze-form -``` --### Request body --Start with the request body template below. --```json -{ - "values": [ - { - "recordId": "record1", - "data": { - "formUrl": "<your-form-url>", - "formSasToken": "<your-sas-token>" - } - } - ] -} -``` --Here you'll need to provide the URL of a form that has the same type as the forms you trained with. For testing purposes, you can use one of your training forms. If you followed the cURL quickstart, your forms will be located in an Azure Blob Storage account. Open Azure Storage Explorer, locate a form file, right-click it, and select **Get Shared Access Signature**. The next dialog window will provide a URL and SAS token. Enter these strings in the `"formUrl"` and `"formSasToken"` fields of your request body, respectively. --> [!div class="mx-imgBorder"] ->  --If you want to analyze a remote document that isn't in Azure Blob Storage, paste its URL in the `"formUrl"` field and leave the `"formSasToken"` field blank. --> [!NOTE] -> When the skill is integrated in a skillset, the URL and token will be provided by Cognitive Search. --### Response --You should see a response similar to the following example: --```json -{ - "values": [ - { - "recordId": "record1", - "data": { - "address": "1111 8th st. Bellevue, WA 99501 ", - "recipient": "Southridge Video 1060 Main St. Atlanta, GA 65024 " - }, - "errors": null, - "warnings": null - } - ] -} -``` --## Publish the function to Azure --When you're satisfied with the function behavior, you can publish it. --1. In the **Solution Explorer** in Visual Studio, right-click the project and select **Publish**. Choose **Create New** > **Publish**. -1. If you haven't already connected Visual Studio to your Azure account, select **Add an account....** -1. Follow the on-screen prompts. Specify a unique name for your app service, the Azure subscription, the resource group, the hosting plan, and the storage account you want to use. You can create a new resource group, a new hosting plan, and a new storage account if you don't already have these. When you're finished, select **Create**. -1. After the deployment is complete, notice the Site URL. This URL is the address of your function app in Azure. Save it to a temporary location. -1. In the [Azure portal](https://portal.azure.com), navigate to the Resource Group, and look for the `AnalyzeForm` Function you published. 
Under the **Manage** section, you should see Host Keys. Copy the *default* host key and save it to a temporary location. --## Connect to your pipeline --To use this skill in a Cognitive Search pipeline, you'll need to add a skill definition to your skillset. The following JSON block is a sample skill definition (you should update the inputs and outputs to reflect your particular scenario and skillset environment). Replace `AzureFunctionEndpointUrl` with your function URL, and replace `AzureFunctionDefaultHostKey` with your host key. --```json -{ - "description":"Skillset that invokes the Form Recognizer custom skill", - "skills":[ - "[... your existing skills go here]", - { - "@odata.type":"#Microsoft.Skills.Custom.WebApiSkill", - "name":"formrecognizer", - "description":"Extracts fields from a form using a pre-trained form recognition model", - "uri":"[AzureFunctionEndpointUrl]/api/analyze-form?code=[AzureFunctionDefaultHostKey]", - "httpMethod":"POST", - "timeout":"PT30S", - "context":"/document", - "batchSize":1, - "inputs":[ - { - "name":"formUrl", - "source":"/document/metadata_storage_path" - }, - { - "name":"formSasToken", - "source":"/document/metadata_storage_sas_token" - } - ], - "outputs":[ - { - "name":"address", - "targetName":"address" - }, - { - "name":"recipient", - "targetName":"recipient" - } - ] - } - ] -} -``` --## Next steps --In this guide, you created a custom skill from the Azure Form Recognizer service. To learn more about custom skills, see the following resources. --* [Azure Search Power Skills: a repository of custom skills](https://github.com/Azure-Samples/azure-search-power-skills) -* [Add a custom skill to an AI enrichment pipeline](cognitive-search-custom-skill-interface.md) -* [Define a skillset](cognitive-search-defining-skillset.md) -* [Create a skillset (REST)](/rest/api/searchservice/create-skillset) -* [Map enriched fields](cognitive-search-output-field-mapping.md) |
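For reference, the local test request documented in the removed article above can also be issued with a few lines of Python instead of a REST client. This is a sketch that assumes the function is running locally at the documented endpoint and uses placeholder values for the form URL and SAS token.

```python
# Sketch only: placeholder form URL and SAS token; assumes the AnalyzeForm function
# is running locally at the endpoint documented above.
import requests

payload = {
    "values": [
        {
            "recordId": "record1",
            "data": {
                "formUrl": "<your-form-url>",
                "formSasToken": "<your-sas-token>",
            },
        }
    ]
}

# verify=False only because the local Functions host may serve a self-signed
# certificate; never disable verification against a production endpoint.
response = requests.post(
    "https://localhost:7071/api/analyze-form",
    json=payload,
    verify=False,
)
response.raise_for_status()
print(response.json())
```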
search | Cognitive Search Custom Skill Python | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/cognitive-search-custom-skill-python.md | - Title: 'Custom skill example (Python)'- -description: For Python developers, learn the tools and techniques for building a custom skill using Azure Functions and Visual Studio Code. Custom skills contain user-defined models or logic that you can add to a skillset for AI-enriched indexing in Azure Cognitive Search. ----- Previously updated : 08/22/2022----# Example: Create a custom skill using Python --In this Azure Cognitive Search skillset example, you'll learn how to create a web API custom skill using Python and Visual Studio Code. The example uses an [Azure Function](https://azure.microsoft.com/services/functions/) that implements the [custom skill interface](cognitive-search-custom-skill-interface.md). --The custom skill is simple by design (it concatenates two strings) so that you can focus on the pattern. Once you succeed with a simple skill, you can branch out with more complex scenarios. --## Prerequisites --+ Review the [custom skill interface](cognitive-search-custom-skill-interface.md) to review the inputs and outputs that a custom skill should implement. --+ Set up your environment. We followed [Quickstart: Create a function in Azure with Python using Visual Studio Code](/azure/python/tutorial-vs-code-serverless-python-01) to set up serverless Azure Function using Visual Studio Code and Python extensions. The quickstart leads you through installation of the following tools and components: -- + [Python 3.75 or later](https://www.python.org/downloads/release/python-375/) - + [Visual Studio Code](https://code.visualstudio.com/) - + [Python extension for Visual Studio Code](https://marketplace.visualstudio.com/items?itemName=ms-python.python) - + [Azure Functions Core Tools](../azure-functions/functions-run-local.md#v2) - + [Azure Functions extension for Visual Studio Code](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurefunctions) --## Create an Azure Function --This example uses an Azure Function to demonstrate the concept of hosting a web API, but other approaches are possible. As long as you meet the [interface requirements for a cognitive skill](cognitive-search-custom-skill-interface.md), the approach you take is immaterial. Azure Functions, however, make it easy to create a custom skill. --### Create a project for the function --The Azure Functions project template in Visual Studio Code creates a local project that can be published to a function app in Azure. A function app lets you group functions as a logical unit for management, deployment, and sharing of resources. --1. In Visual Studio Code, press F1 to open the command palette. In the command palette, search for and select `Azure Functions: Create new project...`. -1. Choose a directory location for your project workspace and choose **Select**. Don't use a project folder that is already part of another workspace. -1. Select a language for your function app project. For this tutorial, select **Python**. -1. Select the Python version (version 3.7.5 is supported by Azure Functions). -1. Select a template for your project's first function. Select **HTTP trigger** to create an HTTP triggered function in the new function app. -1. Provide a function name. In this case, let's use **Concatenator** -1. Select **Function** as the Authorization level. 
You'll use a [function access key](../azure-functions/functions-bindings-http-webhook-trigger.md#authorization-keys) to call the function's HTTP endpoint. -1. Specify how you would like to open your project. For this step, select **Add to workspace** to create the function app in the current workspace. --Visual Studio Code creates the function app project in a new workspace. This project contains the [host.json](../azure-functions/functions-host-json.md) and [local.settings.json](../azure-functions/functions-develop-local.md#local-settings-file) configuration files, plus any language-specific project files. --A new HTTP triggered function is also created in the **Concatenator** folder of the function app project. Inside it there will be a file called "\_\_init__.py", with this content: --```py -import logging --import azure.functions as func ---def main(req: func.HttpRequest) -> func.HttpResponse: - logging.info('Python HTTP trigger function processed a request.') -- name = req.params.get('name') - if not name: - try: - req_body = req.get_json() - except ValueError: - pass - else: - name = req_body.get('name') -- if name: - return func.HttpResponse(f"Hello {name}!") - else: - return func.HttpResponse( - "Please pass a name on the query string or in the request body", - status_code=400 - ) --``` --Now let's modify that code to follow the [custom skill interface](cognitive-search-custom-skill-interface.md)). Replace the default code with the following content: --```py -import logging -import azure.functions as func -import json --def main(req: func.HttpRequest) -> func.HttpResponse: - logging.info('Python HTTP trigger function processed a request.') -- try: - body = json.dumps(req.get_json()) - except ValueError: - return func.HttpResponse( - "Invalid body", - status_code=400 - ) - - if body: - result = compose_response(body) - return func.HttpResponse(result, mimetype="application/json") - else: - return func.HttpResponse( - "Invalid body", - status_code=400 - ) ---def compose_response(json_data): - values = json.loads(json_data)['values'] - - # Prepare the Output before the loop - results = {} - results["values"] = [] - - for value in values: - output_record = transform_value(value) - if output_record != None: - results["values"].append(output_record) - return json.dumps(results, ensure_ascii=False) --## Perform an operation on a record -def transform_value(value): - try: - recordId = value['recordId'] - except AssertionError as error: - return None -- # Validate the inputs - try: - assert ('data' in value), "'data' field is required." - data = value['data'] - assert ('text1' in data), "'text1' field is required in 'data' object." - assert ('text2' in data), "'text2' field is required in 'data' object." - except AssertionError as error: - return ( - { - "recordId": recordId, - "errors": [ { "message": "Error:" + error.args[0] } ] - }) -- try: - concatenated_string = value['data']['text1'] + " " + value['data']['text2'] - # Here you could do something more interesting with the inputs -- except: - return ( - { - "recordId": recordId, - "errors": [ { "message": "Could not complete operation for record." } ] - }) -- return ({ - "recordId": recordId, - "data": { - "text": concatenated_string - } - }) -``` --The **transform_value** method performs an operation on a single record. You can modify the method to meet your specific needs. Remember to do any necessary input validation and to return any errors and warnings if the operation can't be completed. 
--### Debug your code locally --Visual Studio Code makes it easy to debug the code. Press 'F5' or go to the **Debug** menu and select **Start Debugging**. --You can set any breakpoints on the code by hitting 'F9' on the line of interest. --Once you started debugging, your function will run locally. You can use a tool like Postman or Fiddler to issue the request to localhost. Note the location of your local endpoint on the Terminal window. --## Create a function app in Azure --When you're satisfied with the function behavior, you can publish it. So far you've been working locally. In this section, you'll create a function app in Azure and then deploy the local project to the app you created. --### Create the app from Visual Studio Code --1. In Visual Studio Code, press F1 to open the command palette. In the command palette, search for and select **Create Function App in Azure**. --1. If you have multiple active subscriptions, select the subscription for this app. --1. Enter a globally unique name for the function app. Type a name that is valid for a URL. --1. Select a runtime stack and choose the language version on which you've been running locally. --1. Select a location for your app. If possible, choose the same region that also hosts your search service. --It takes a few minutes to create the app. When it's ready, you'll see the new app under **Resources** and **Function App** of the active subscription. --### Deploy to Azure --1. Still in Visual Studio Code, press F1 to open the command palette. In the command palette, search for and select **Deploy to Function App...**. --1. Select the function app you created. --1. Confirm that you want to continue, and then select **Deploy**. You can monitor the deployment status in the output window. --1. Switch to the [Azure portal](https://portal.azure.com), navigate to **All Resources**. Search for the function app you deployed using the globally unique name you provided in a previous step. -- > [!TIP] - > You can also right-click the function app in Visual Studio Code and select **Open in Portal**. --1. In the portal, on the left, select **Functions**, and then select the function you created. --1. In the function's overview page, select **Get Function URL** in the command bar the top. This will allow you to copy the URL to call the function. -- :::image type="content" source="media/cognitive-search-custom-skill-python/get-function-url.png" alt-text="Screenshot of the Get Function URL command in Azure portal." border="true"::: --## Test the function in Azure --Using the default host key and URL that you copied, test your function from within Azure portal. --1. On the left, under Developer, select **Code + Test**. --1. Select **Test/Run** in the command bar. --1. For input, use **Post**, the default key, and then paste in the request body: -- ```json - { - "values": [ - { - "recordId": "e1", - "data": - { - "text1": "Hello", - "text2": "World" - } - }, - { - "recordId": "e2", - "data": "This is an invalid input" - } - ] - } - ``` --1. Select **Run**. -- :::image type="content" source="media/cognitive-search-custom-skill-python/test-run-function.png" alt-text="Screenshot of the input specification." border="true"::: --This example should produce the same result you saw previously when running the function in the local environment. --## Add to a skillset --Now that you have a new custom skill, you can add it to your skillset. 
The example below shows you how to call the skill to concatenate the Title and the Author of the document into a single field, which we call merged_title_author. --Replace `[your-function-url-here]` with the URL of your new Azure Function. --```json -{ - "skills": [ - "[... other existing skills in the skillset are here]", - { - "@odata.type": "#Microsoft.Skills.Custom.WebApiSkill", - "description": "Our new search custom skill", - "uri": "https://[your-function-url-here]", - "context": "/document/merged_content/organizations/*", - "inputs": [ - { - "name": "text1", - "source": "/document/metadata_title" - }, - { - "name": "text2", - "source": "/document/metadata_author" - }, - ], - "outputs": [ - { - "name": "text", - "targetName": "merged_title_author" - } - ] - } - ] -} -``` --Remember to add an "outputFieldMapping" in the indexer definition to send "merged_title_author" to a "fullname" field in the search index. --```json -"outputFieldMappings": [ - { - "sourceFieldName": "/document/content/merged_title_author", - "targetFieldName": "fullname" - } -] -``` --## Next steps --Congratulations! You've created your first custom skill. Now you can follow the same pattern to add your own custom functionality. Select the following links to learn more. --+ [Power Skills: a repository of custom skills](https://github.com/Azure-Samples/azure-search-power-skills) -+ [Add a custom skill to an AI enrichment pipeline](cognitive-search-custom-skill-interface.md) -+ [How to define a skillset](cognitive-search-defining-skillset.md) -+ [Create Skillset (REST)](/rest/api/searchservice/create-skillset) -+ [How to map enriched fields](cognitive-search-output-field-mapping.md) |
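The portal test payload shown in the removed article above can also be sent to the locally running function. A minimal sketch with the `requests` library follows; the `http://localhost:7071/api/Concatenator` route is the Azure Functions default for an HTTP trigger named `Concatenator` and is an assumption if the route was customized.

```python
# Sketch only: assumes the default local route for an HTTP-triggered function
# named "Concatenator"; adjust the URL if your route or port differs.
import requests

payload = {
    "values": [
        {"recordId": "e1", "data": {"text1": "Hello", "text2": "World"}},
        {"recordId": "e2", "data": "This is an invalid input"},
    ]
}

response = requests.post("http://localhost:7071/api/Concatenator", json=payload)
response.raise_for_status()

for record in response.json()["values"]:
    # The first record should come back with {"text": "Hello World"}; the second
    # should return an errors entry instead of data.
    print(record["recordId"], record.get("data"), record.get("errors"))
```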
search | Cognitive Search Tutorial Aml Custom Skill | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/cognitive-search-tutorial-aml-custom-skill.md | - Title: "Example: Create and deploy a custom skill with Azure Machine Learning"- -description: This example demonstrates how to use Azure Machine Learning to build and deploy a custom skill for Azure Cognitive Search's AI enrichment pipeline. ----- Previously updated : 09/25/2020---# Example: Build and deploy a custom skill with Azure Machine Learning --In this example, you will use the [hotel reviews dataset](https://www.kaggle.com/datafiniti/hotel-reviews) (distributed under the Creative Commons license [CC BY-NC-SA 4.0](https://creativecommons.org/licenses/by-nc-s) using Azure Machine Learning to extract aspect-based sentiment from the reviews. This allows for the assignment of positive and negative sentiment within the same review to be correctly ascribed to identified entities like staff, room, lobby, or pool. --To train the aspect-based sentiment model in Azure Machine Learning, you will be using the [nlp recipes repository](https://github.com/microsoft/nlp-recipes/tree/master/examples/sentiment_analysis/absa). The model will then be deployed as an endpoint on an Azure Kubernetes cluster. Once deployed, the endpoint is added to the enrichment pipeline as an AML skill for use by the Cognitive Search service. --There are two datasets provided. If you wish to train the model yourself, the hotel_reviews_1000.csv file is required. Prefer to skip the training step? Download the hotel_reviews_100.csv. --> [!div class="checklist"] -> * Create an Azure Cognitive Search instance -> * Create an Azure Machine Learning workspace (the search service and workspace should be in the same subscription) -> * Train and deploy a model to an Azure Kubernetes cluster -> * Link an AI enrichment pipeline to the deployed model -> * Ingest output from deployed model as a custom skill --> [!IMPORTANT] -> This skill is in public preview under [supplemental terms of use](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). The [preview REST API](/rest/api/searchservice/index-preview) supports this skill. --## Prerequisites --* Azure subscription - get a [free subscription](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). -* [Cognitive Search service](./search-get-started-arm.md) -* [Cognitive Services resource](../cognitive-services/cognitive-services-apis-create-account.md?tabs=multiservice%2cwindows) -* [Azure Storage account](../storage/common/storage-account-create.md?tabs=azure-portal&toc=%2fazure%2fstorage%2fblobs%2ftoc.json) -* [Azure Machine Learning workspace](../machine-learning/how-to-manage-workspace.md) --## Setup --* Clone or download the contents of [the sample repository](https://github.com/Azure-Samples/azure-search-python-samples/tree/master/AzureML-Custom-Skill). -* Extract contents if the download is a zip file. Make sure the files are read-write. -* While setting up the Azure accounts and services, copy the names and keys to an easily accessed text file. The names and keys will be added to the first cell in the notebook where variables for accessing the Azure services are defined. 
-* If you are unfamiliar with Azure Machine Learning and its requirements, you will want to review these documents before getting started: - * [Configure a development environment for Azure Machine Learning](../machine-learning/how-to-configure-environment.md) - * [Create and manage Azure Machine Learning workspaces in the Azure portal](../machine-learning/how-to-manage-workspace.md) - * When configuring the development environment for Azure Machine Learning, consider using the [cloud-based compute instance](../machine-learning/v1/how-to-configure-environment-v1.md) for speed and ease in getting started. -* Upload the dataset file to a container in the storage account. The larger file is necessary if you wish to perform the training step in the notebook. If you prefer to skip the training step, the smaller file is recommended. --## Open notebook and connect to Azure services --1. Put all of the required information for the variables that will allow access to the Azure services inside the first cell and run the cell. -1. Running the second cell will confirm that you have connected to the search service for your subscription. -1. Sections 1.1 - 1.5 will create the search service datastore, skillset, index, and indexer. --At this point you can choose to skip the steps to create the training data set and experiment in Azure Machine Learning and skip directly to registering the two models that are provided in the models folder of the GitHub repo. If you skip these steps, in the notebook you will then skip to section 3.5, Write scoring script. This will save time; the data download and upload steps can take up to 30 minutes to complete. --## Creating and training the models --Section 2 has six cells that download the glove embeddings file from the nlp recipes repository. After downloading, the file is then uploaded to the Azure Machine Learning data store. The .zip file is about 2G and it will take some time to perform these tasks. Once uploaded, training data is then extracted and now you are ready to move on to section 3. --## Train the aspect based sentiment model and deploy your endpoint --Section 3 of the notebook will train the models that were created in section 2, register those models and deploy them as an endpoint in an Azure Kubernetes cluster. If you are unfamiliar with Azure Kubernetes, it is highly recommended that you review the following articles before attempting to create an inference cluster: --* [Azure Kubernetes service overview](../aks/intro-kubernetes.md) -* [Kubernetes core concepts for Azure Kubernetes Service (AKS)](../aks/concepts-clusters-workloads.md) -* [Quotas, virtual machine size restrictions, and region availability in Azure Kubernetes Service (AKS)](../aks/quotas-skus-regions.md) --Creating and deploying the inference cluster can take up to 30 minutes. Testing the web service before moving on to the final steps, updating your skillset and running the indexer, is recommended. --## Update the skillset --Section 4 in the notebook has four cells that update the skillset and indexer. Alternatively, you can use the portal to select and apply the new skill to the skillset and then run the indexer to update the search service. --In the portal, go to Skillset and select the Skillset Definition (JSON) link. The portal will display the JSON of your skillset that was created in the first cells of the notebook. To the right of the display there is a dropdown menu where you can select the skill definition template. Select the Azure Machine Learning (AML) template. 
provide the name of the Azure ML workspace and the endpoint for the model deployed to the inference cluster. The template will be updated with the endpoint uri and key. --> :::image type="content" source="media/cognitive-search-aml-skill/portal-aml-skillset-definition.png" alt-text="Skillset definition template"::: --Copy the skillset template from the window and paste it into the skillset definition on the left. Edit the template to provide the missing values for: --* Name -* Description -* Context -* 'inputs' name and source -* 'outputs' name and targetName --Save the skillset. --After saving the skillset, go to the indexer and select the Indexer Definition (JSON) link. The portal will display the JSON of the indexer that was created in the first cells of the notebook. The output field mappings will need to be updated with additional field mappings to ensure that the indexer can handle and pass them correctly. Save the changes and then select Run. --## Clean up resources --When you're working in your own subscription, it's a good idea at the end of a project to identify whether you still need the resources you created. Resources left running can cost you money. You can delete resources individually or delete the resource group to delete the entire set of resources. --You can find and manage resources in the portal, using the **All resources** or **Resource groups** link in the left-navigation pane. --If you are using a free service, remember that you are limited to three indexes, indexers, and data sources. You can delete individual items in the portal to stay under the limit. --## Next steps --> [!div class="nextstepaction"] -> [Review the custom skill web api](./cognitive-search-custom-skill-web-api.md) -> [Learn more about adding custom skills to the enrichment pipeline](./cognitive-search-custom-skill-interface.md) |
search | Cognitive Search Tutorial Aml Designer Custom Skill | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/cognitive-search-tutorial-aml-designer-custom-skill.md | - Title: "Example: Create and deploy a custom skill with Azure Machine Learning designer"- -description: This example demonstrates how to use Azure Machine Learning designer to build and deploy a custom AML skill for Azure Cognitive Search's AI enrichment pipeline. ----- Previously updated : 04/16/2021---# Example: Build and deploy a custom skill with Azure Machine Learning designer --[Azure Machine Learning designer](../machine-learning/concept-designer.md) is an easy-to-use interactive canvas for creating machine learning models for tasks like regression and classification. Invoking the model created by the designer in a Cognitive Search enrichment pipeline requires a few additional steps. In this example, you create a simple regression model to predict the price of an automobile and invoke the inferencing endpoint as an AML skill. --Follow the [Regression - Automobile Price Prediction (Advanced)](https://github.com/Azure/MachineLearningDesigner/blob/master/articles/samples/regression-automobile-price-prediction-compare-algorithms.md) tutorial on the [example pipelines & datasets](../machine-learning/concept-designer.md) documentation page to create a model that predicts the price of an automobile from its features. --> [!IMPORTANT] -> Deploying the model following the real-time inferencing process will result in a valid endpoint, but not one that you can use with the AML skill in Cognitive Search. --## Register model and download assets --Once you have a trained model, [register the trained model](../machine-learning/v1/how-to-deploy-model-designer.md) and follow the steps to download all the files in the `trained_model_outputs` folder, or download only the `score.py` and `conda_env.yml` files from the model's artifacts page. You'll edit the scoring script before the model is deployed as a real-time inferencing endpoint. ---## Edit the scoring script for use with Cognitive Search --Cognitive Search enrichment pipelines work on a single document and generate a request that contains the inputs for a single prediction. The downloaded `score.py` accepts a list of records and returns a list of predictions as a serialized JSON string. You'll make two changes to `score.py`: --* Edit the script to work with a single input record, not a list -* Edit the script to return a JSON object with a single property, the predicted price. --Open the downloaded `score.py` and edit the `run(data)` function. The function is currently set up to expect the following input, as described in the model's `_samples.json` file.
--```json -[ - { - "symboling": 2, - "make": "mitsubishi", - "fuel-type": "gas", - "aspiration": "std", - "num-of-doors": "two", - "body-style": "hatchback", - "drive-wheels": "fwd", - "engine-location": "front", - "wheel-base": 93.7, - "length": 157.3, - "width": 64.4, - "height": 50.8, - "curb-weight": 1944, - "engine-type": "ohc", - "num-of-cylinders": "four", - "engine-size": 92, - "fuel-system": "2bbl", - "bore": 2.97, - "stroke": 3.23, - "compression-ratio": 9.4, - "horsepower": 68.0, - "peak-rpm": 5500.0, - "city-mpg": 31, - "highway-mpg": 38, - "price": 6189.0 - }, - { - "symboling": 0, - "make": "toyota", - "fuel-type": "gas", - "aspiration": "std", - "num-of-doors": "four", - "body-style": "wagon", - "drive-wheels": "fwd", - "engine-location": "front", - "wheel-base": 95.7, - "length": 169.7, - "width": 63.6, - "height": 59.1, - "curb-weight": 2280, - "engine-type": "ohc", - "num-of-cylinders": "four", - "engine-size": 92, - "fuel-system": "2bbl", - "bore": 3.05, - "stroke": 3.03, - "compression-ratio": 9.0, - "horsepower": 62.0, - "peak-rpm": 4800.0, - "city-mpg": 31, - "highway-mpg": 37, - "price": 6918.0 - }, - { - "symboling": 1, - "make": "honda", - "fuel-type": "gas", - "aspiration": "std", - "num-of-doors": "two", - "body-style": "sedan", - "drive-wheels": "fwd", - "engine-location": "front", - "wheel-base": 96.5, - "length": 169.1, - "width": 66.0, - "height": 51.0, - "curb-weight": 2293, - "engine-type": "ohc", - "num-of-cylinders": "four", - "engine-size": 110, - "fuel-system": "2bbl", - "bore": 3.15, - "stroke": 3.58, - "compression-ratio": 9.1, - "horsepower": 100.0, - "peak-rpm": 5500.0, - "city-mpg": 25, - "highway-mpg": 31, - "price": 10345.0 - } -] -``` --Your changes will ensure that the model can accept the input generated by Cognitive Search during indexing, which is a single record. --```json -{ - "symboling": 2, - "make": "mitsubishi", - "fuel-type": "gas", - "aspiration": "std", - "num-of-doors": "two", - "body-style": "hatchback", - "drive-wheels": "fwd", - "engine-location": "front", - "wheel-base": 93.7, - "length": 157.3, - "width": 64.4, - "height": 50.8, - "curb-weight": 1944, - "engine-type": "ohc", - "num-of-cylinders": "four", - "engine-size": 92, - "fuel-system": "2bbl", - "bore": 2.97, - "stroke": 3.23, - "compression-ratio": 9.4, - "horsepower": 68.0, - "peak-rpm": 5500.0, - "city-mpg": 31, - "highway-mpg": 38, - "price": 6189.0 -} -``` --Replace lines 27 through 30 with -```python -- for key, val in data.items(): - input_entry[key].append(decode_nan(val)) -``` -You will also need to edit the output that the script generates from a string to a JSON object. Edit the return statement (line 37) in the original file to: -```python - output = result.data_frame.values.tolist() - return { - "predicted_price": output[0][-1] - } -``` --Here is the updated `run` function with the changes in input format and the predicted output that will accept a single record as an input and return a JSON object with the predicted price. 
--```python -def run(data): - data = json.loads(data) - input_entry = defaultdict(list) - # data is now a JSON object not a list of JSON objects - for key, val in data.items(): - input_entry[key].append(decode_nan(val)) -- data_frame_directory = create_dfd_from_dict(input_entry, schema_data) - score_module = ScoreModelModule() - result, = score_module.run( - learner=model, - test_data=DataTable.from_dfd(data_frame_directory), - append_or_result_only=True) - #return json.dumps({"result": result.data_frame.values.tolist()}) - output = result.data_frame.values.tolist() - # return the last column of the first row of the dataframe - return { - "predicted_price": output[0][-1] - } -``` -## Register and deploy the model --With your changes saved, you can now register the model in the portal. Select **Register model** and provide it with a valid name. Choose `Other` for Model Framework, `Custom` for Framework Name, and `1.0` for Framework Version. Select the `Upload folder` option and select the folder with the updated `score.py` and `conda_env.yaml`. --Select the model, and then select the `Deploy` action. The deployment step assumes you have an AKS inferencing cluster provisioned. Container instances are currently not supported in Cognitive Search. - 1. Provide a valid endpoint name -2. Select the compute type of `Azure Kubernetes Service` -3. Select the compute name for your inference cluster -4. Toggle `enable authentication` to on -5. Select `Key-based authentication` for the type -6. Select the updated `score.py` for `entry script file` -7. Select the `conda_env.yaml` for `conda dependencies file` -8. Select the deploy button to deploy your new endpoint. --## Integrate with Cognitive Search --To integrate the newly created endpoint with Cognitive Search: -1. Add a JSON file containing a single automobile record to a blob container -2. Configure an AI enrichment pipeline using the [import data workflow](cognitive-search-quickstart-blob.md). Be sure to select `JSON` as the `parsing mode` -3. On the `Add Enrichments` tab, select a single skill `Extract people names` as a placeholder. -4. Add a new field to the index called `predicted_price` of type `Edm.Double`, and set the Retrievable property to true. -5. Complete the import data process --### Add the AML Skill to the skillset --From the list of skillsets, select the skillset you created. You'll now edit the skillset to replace the people identification skill with the AML skill to predict prices. -On the Skillset Definition (JSON) tab, select `Azure Machine Learning (AML)` from the skills dropdown. Select the workspace. For the AML skill to discover your endpoint, the workspace and search service need to be in the same Azure subscription. -Select the endpoint that you created earlier in the tutorial. -Validate that the skill is populated with the URI and authentication information as configured when you deployed the endpoint. Copy the skill template and replace the skill in the skillset. -Edit the skill to: -1. Set the name to a valid name -2. Add a description -3. Set degreeOfParallelism to 1 -4. Set the context to `/document` -5. Set the inputs to all the required inputs; see the sample skill definition below -6. Set the outputs to capture the predicted price returned.
--```json -{ - "@odata.type": "#Microsoft.Skills.Custom.AmlSkill", - "name": "AMLdemo", - "description": "AML Designer demo", - "context": "/document", - "uri": "Your AML endpoint", - "key": "Your AML endpoint key", - "resourceId": null, - "region": null, - "timeout": "PT30S", - "degreeOfParallelism": 1, - "inputs": [ - { - "name": "symboling", - "source": "/document/symboling" - }, - { - "name": "make", - "source": "/document/make" - }, - { - "name": "fuel-type", - "source": "/document/fuel-type" - }, - { - "name": "aspiration", - "source": "/document/aspiration" - }, - { - "name": "num-of-doors", - "source": "/document/num-of-doors" - }, - { - "name": "body-style", - "source": "/document/body-style" - }, - { - "name": "drive-wheels", - "source": "/document/drive-wheels" - }, - { - "name": "engine-location", - "source": "/document/engine-location" - }, - { - "name": "wheel-base", - "source": "/document/wheel-base" - }, - { - "name": "length", - "source": "/document/length" - }, - { - "name": "width", - "source": "/document/width" - }, - { - "name": "height", - "source": "/document/height" - }, - { - "name": "curb-weight", - "source": "/document/curb-weight" - }, - { - "name": "engine-type", - "source": "/document/engine-type" - }, - { - "name": "num-of-cylinders", - "source": "/document/num-of-cylinders" - }, - { - "name": "engine-size", - "source": "/document/engine-size" - }, - { - "name": "fuel-system", - "source": "/document/fuel-system" - }, - { - "name": "bore", - "source": "/document/bore" - }, - { - "name": "stroke", - "source": "/document/stroke" - }, - { - "name": "compression-ratio", - "source": "/document/compression-ratio" - }, - { - "name": "horsepower", - "source": "/document/horsepower" - }, - { - "name": "peak-rpm", - "source": "/document/peak-rpm" - }, - { - "name": "city-mpg", - "source": "/document/city-mpg" - }, - { - "name": "highway-mpg", - "source": "/document/highway-mpg" - }, - { - "name": "price", - "source": "/document/price" - } - ], - "outputs": [ - { - "name": "predicted_price", - "targetName": "predicted_price" - } - ] - } -``` -### Update the indexer output field mappings --The indexer output field mappings determine what enrichments are saved to the index. Replace the output field mappings section of the indexer with the snippet below: --```json -"outputFieldMappings": [ - { - "sourceFieldName": "/document/predicted_price", - "targetFieldName": "predicted_price" - } - ] -``` --You can now run your indexer and validate that the `predicted_price` property is populated in the index with the result from your AML skill output. --## Next steps --> [!div class="nextstepaction"] -> [Review the custom skill web api](./cognitive-search-custom-skill-web-api.md) --> [Learn more about adding custom skills to the enrichment pipeline](./cognitive-search-custom-skill-interface.md) --> [Learn more about the AML skill](./cognitive-search-tutorial-aml-custom-skill.md) |
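Before wiring the endpoint into a skillset, it can help to call it directly and confirm that it returns the single-record shape described in this entry. The TypeScript sketch below is an illustrative test, not part of the tutorial; the scoring URI, the key, and the assumption that the key is passed as a bearer token (the usual pattern for key-based authentication on AKS web services) should be checked against your deployment.

```typescript
// Illustrative sketch: send one automobile record to the deployed scoring endpoint
// and verify the response contains predicted_price. URI and key are placeholders.
const scoringUri = "<scoring-uri-shown-on-the-endpoint-page>";
const endpointKey = "<aml-endpoint-key>";

const record = {
  symboling: 2, make: "mitsubishi", "fuel-type": "gas", aspiration: "std",
  "num-of-doors": "two", "body-style": "hatchback", "drive-wheels": "fwd",
  "engine-location": "front", "wheel-base": 93.7, length: 157.3, width: 64.4,
  height: 50.8, "curb-weight": 1944, "engine-type": "ohc",
  "num-of-cylinders": "four", "engine-size": 92, "fuel-system": "2bbl",
  bore: 2.97, stroke: 3.23, "compression-ratio": 9.4, horsepower: 68.0,
  "peak-rpm": 5500.0, "city-mpg": 31, "highway-mpg": 38, price: 6189.0,
};

async function testEndpoint(): Promise<void> {
  const res = await fetch(scoringUri, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${endpointKey}`,   // assumption: key-based auth on the AKS web service
    },
    body: JSON.stringify(record),
  });
  const result = await res.json();
  console.log(result);   // expected shape after the score.py edits: { predicted_price: <number> }
}

testEndpoint().catch(console.error);
```

If the response already has the `{ "predicted_price": ... }` shape, the AML skill's output mapping in the skillset should work without further changes.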
search | Samples Python | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/samples-python.md | Code samples from the Cognitive Search team demonstrate features and workflows. | [quickstart](https://github.com/Azure-Samples/azure-search-python-samples/tree/master/Quickstart/v11) | Source code for [Quickstart: Create a search index in Python](search-get-started-python.md). This article covers the basic workflow for creating, loading, and querying a search index using sample data. | | [search-website](https://github.com/Azure-Samples/azure-search-python-samples/tree/main/search-website-functions-v4) | Source code for [Tutorial: Add search to web apps](tutorial-python-overview.md). Demonstrates an end-to-end search app that includes a rich client plus components for hosting the app and handling search requests.| | [tutorial-ai-enrichment](https://github.com/Azure-Samples/azure-search-python-samples/tree/master/Tutorial-AI-Enrichment) | Source code for [Tutorial: Use Python and AI to generate searchable content from Azure blobs](cognitive-search-tutorial-blob-python.md). This article shows how to create a blob indexer with a cognitive skillset, where the skillset creates and transforms raw content to make it searchable or consumable. |-| [AzureML-Custom-Skill](https://github.com/Azure-Samples/azure-search-python-samples/tree/master/AzureML-Custom-Skill) | Source code for [Example: Create a custom skill using Python](cognitive-search-custom-skill-python.md). This article demonstrates indexer and skillset integration with deep learning models in Azure Machine Learning. | > [!TIP] > Try the [Samples browser](/samples/browse/?languages=python&products=azure-cognitive-search) to search for Microsoft code samples in GitHub, filtered by product, service, and language. |
search | Search Api Preview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-api-preview.md | Preview features that transition to general availability are removed from this l | [**speller**](cognitive-search-aml-skill.md) | Query | Optional spelling correction on query term inputs for simple, full, and semantic queries. | [Search Preview REST API](/rest/api/searchservice/preview-api/search-documents), API versions 2021-04-30-Preview or 2020-06-30-Preview, and Search Explorer (portal). | | [**Normalizers**](search-normalizers.md) | Query | Normalizers provide simple text pre-processing: consistent casing, accent removal, and ASCII folding, without invoking the full text analysis chain.| Use [Search Documents](/rest/api/searchservice/preview-api/search-documents), API versions 2021-04-30-Preview or 2020-06-30-Preview.| | [**featuresMode parameter**](/rest/api/searchservice/preview-api/search-documents#query-parameters) | Relevance (scoring) | Relevance score expansion to include details: per field similarity score, per field term frequency, and per field number of unique tokens matched. You can consume these data points in [custom scoring solutions](https://github.com/Azure-Samples/search-ranking-tutorial). | Add this query parameter using [Search Documents](/rest/api/searchservice/preview-api/search-documents), API versions 2021-04-30-Preview, 2020-06-30-Preview, or 2019-05-06-Preview. |-| [**Azure Machine Learning (AML) skill**](cognitive-search-aml-skill.md) | AI enrichment (skills) | A new skill type to integrate an inferencing endpoint from Azure Machine Learning. Get started with [this tutorial](cognitive-search-tutorial-aml-custom-skill.md). | Use [Search Preview REST API](/rest/api/searchservice/), API versions 2021-04-30-Preview, 2020-06-30-Preview, or 2019-05-06-Preview. Also available in the portal, in skillset design, assuming Cognitive Search and Azure ML services are deployed in the same subscription. | +| [**Azure Machine Learning (AML) skill**](cognitive-search-aml-skill.md) | AI enrichment (skills) | A new skill type to integrate an inferencing endpoint from Azure Machine Learning. | Use [Search Preview REST API](/rest/api/searchservice/), API versions 2021-04-30-Preview, 2020-06-30-Preview, or 2019-05-06-Preview. Also available in the portal, in skillset design, assuming Cognitive Search and Azure ML services are deployed in the same subscription. | | [**Incremental enrichment**](cognitive-search-incremental-indexing-conceptual.md) | AI enrichment (skills) | Adds caching to an enrichment pipeline, allowing you to reuse existing output if a targeted modification, such as an update to a skillset or another object, doesn't change the content. Caching applies only to enriched documents produced by a skillset.| Add this configuration setting using [Create or Update Indexer Preview REST API](/rest/api/searchservice/create-indexer), API versions 2021-04-30-Preview, 2020-06-30-Preview, or 2019-05-06-Preview. | | [**moreLikeThis**](search-more-like-this.md) | Query | Finds documents that are relevant to a specific document. This feature has been in earlier previews. | Add this query parameter in [Search Documents Preview REST API](/rest/api/searchservice/search-documents) calls, with API versions 2021-04-30-Preview, 2020-06-30-Preview, 2019-05-06-Preview, 2016-09-01-Preview, or 2017-11-11-Preview. | |
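As a rough illustration of how a few of the preview query features in this table combine in one request, here's a hedged TypeScript sketch that posts a search with `speller`, `queryLanguage`, and `featuresMode` set. The service, index, key, and the specific values (`lexicon`, `en-us`, `enabled`) are assumptions to verify against the preview REST reference for the API version you use.

```typescript
// Illustrative sketch only: a preview search request combining speller and
// featuresMode. Service, index, key, and parameter values are placeholders/assumptions.
const service = "https://<your-search-service>.search.windows.net";
const apiKey = "<query-api-key>";

async function previewQuery(): Promise<void> {
  const res = await fetch(
    `${service}/indexes/<your-index>/docs/search?api-version=2021-04-30-Preview`,
    {
      method: "POST",
      headers: { "Content-Type": "application/json", "api-key": apiKey },
      body: JSON.stringify({
        search: "histroic hotel",        // misspelled on purpose to exercise the speller
        queryLanguage: "en-us",          // assumption: required when speller is set
        speller: "lexicon",              // optional spelling correction on query terms
        featuresMode: "enabled",         // adds per-field scoring detail to results
        top: 5,
      }),
    }
  );
  const body = await res.json();
  console.log(body.value);
}

previewQuery().catch(console.error);
```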
sentinel | Connect Microsoft Purview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/connect-microsoft-purview.md | To disconnect the Azure Information Protection connector: 1. In the **Data connectors** blade, in the search bar, type *Azure Information Protection*. 1. Select **Azure Information Protection**. 1. Below the connector description, select **Open connector page**.-1. Under **Configuration**, select **Disconnect**. +1. Under **Configuration**, select **Connect Azure Information Protection logs**. +1. Select the configured workspace and select **Ok**. ## Known issues and limitations In this article, you learned how to set up the Microsoft Purview Information Pro - Learn how to [get visibility into your data, and potential threats](get-visibility.md). - Get started [detecting threats with Microsoft Sentinel](detect-threats-built-in.md).-- [Use workbooks](monitor-your-data.md) to monitor your data.+- [Use workbooks](monitor-your-data.md) to monitor your data. |
sentinel | Detect Threats Custom | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/detect-threats-custom.md | If you see that your query would trigger too many or too frequent alerts, you ca - **Group all events into a single alert** (the default setting). The rule generates a single alert every time it runs, as long as the query returns more results than the specified **alert threshold** above. The alert includes a summary of all the events returned in the results. - - **Trigger an alert for each event**. The rule generates a unique alert for each event returned by the query. This is useful if you want events to be displayed individually, or if you want to group them by certain parameters - by user, hostname, or something else. You can define these parameters in the query. + - **Trigger an alert for each event**. The rule generates a unique alert for each event returned by the query. This is useful if you want events to be displayed individually, or if you want to group them by certain parameters—by user, hostname, or something else. You can define these parameters in the query. Currently the number of alerts a rule can generate is capped at 150. If in a particular rule, **Event grouping** is set to **Trigger an alert for each event**, and the rule's query returns more than 150 events, each of the first 149 events will generate a unique alert, and the 150th alert will summarize the entire set of returned events. In other words, the 150th alert is what would have been generated under the **Group all events into a single alert** option. In the **Alert grouping** section, if you want a single incident to be generated | **Group alerts into a single incident if all the entities match** | Alerts are grouped together if they share identical values for each of the mapped entities (defined in the [Set rule logic](#define-the-rule-query-logic-and-configure-settings) tab above). This is the recommended setting. | | **Group all alerts triggered by this rule into a single incident** | All the alerts generated by this rule are grouped together even if they share no identical values. | | **Group alerts into a single incident if the selected entities and details match** | Alerts are grouped together if they share identical values for all of the mapped entities, alert details, and custom details selected from the respective drop-down lists.<br><br>You might want to use this setting if, for example, you want to create separate incidents based on the source or target IP addresses, or if you want to group alerts that match a specific entity and severity.<br><br>**Note**: When you select this option, you must have at least one entity type or field selected for the rule. Otherwise, the rule validation will fail and the rule won't be created. |- | - **Re-open closed matching incidents**: If an incident has been resolved and closed, and later on another alert is generated that should belong to that incident, set this setting to **Enabled** if you want the closed incident re-opened, and leave as **Disabled** if you want the alert to create a new incident. > [!NOTE]- > **Up to 150 alerts** can be grouped into a single incident. If more than 150 alerts are generated by a rule that groups them into a single incident, a new incident will be generated with the same incident details as the original, and the excess alerts will be grouped into the new incident. + > + > **Up to 150 alerts** can be grouped into a single incident. 
+ > - The incident will only be created after all the alerts have been generated. All of the alerts will be added to the incident immediately upon its creation. + > + > - If more than 150 alerts are generated by a rule that groups them into a single incident, a new incident will be generated with the same incident details as the original, and the excess alerts will be grouped into the new incident. ## Set automated responses and create the rule -1. In the **Automated responses** tab, you can set automation based on the alert or alerts generated by this analytics rule, or based on the incident created by the alerts. +In the **Automated responses** tab, you can use [automation rules](automate-incident-handling-with-automation-rules.md) to set automated responses to occur at any of three types of occasions: +- When an alert is generated by this analytics rule. +- When an incident is created with alerts generated by this analytics rule. +- When an incident is updated with alerts generated by this analytics rule. + +The grid displayed under **Automation rules** shows the automation rules that already apply to this analytics rule (by virtue of it meeting the conditions defined in those rules). You can edit any of these by selecting the ellipsis at the end of each row. Or, you can [create a new automation rule](create-manage-use-automation-rules.md). - - For alert-based automation, select from the drop-down list under **Alert automation** any playbooks you want to run automatically when an alert is generated. +Use automation rules to perform basic triage, assignment, [workflow](incident-tasks.md), and closing of incidents. - - For incident-based automation, the grid displayed under **Incident automation** shows the automation rules that already apply to this analytics rule (by virtue of it meeting the conditions defined in those rules). You can edit any of these by selecting the ellipsis at the end of each row. Or, you can [create a new automation rule](create-manage-use-automation-rules.md). - - You can call playbooks (those based on the **incident trigger**) from these automation rules, as well as automate triage, assignment, and closing. +Automate more complex tasks and invoke responses from remote systems to remediate threats by calling playbooks from these automation rules. You can do this for incidents as well as for individual alerts. - - For more information and instructions on creating playbooks and automation rules, see┬á[Automate threat responses](tutorial-respond-threats-playbook.md#automate-threat-responses). +- For more information and instructions on creating playbooks and automation rules, see┬á[Automate threat responses](tutorial-respond-threats-playbook.md#automate-threat-responses). - - For more information about when to use the **alert trigger** or the **incident trigger**, see [Use triggers and actions in Microsoft Sentinel playbooks](playbook-triggers-actions.md#microsoft-sentinel-triggers-summary). +- For more information about when to use the **incident created trigger**, the **incident updated trigger**, or the **alert created trigger**, see [Use triggers and actions in Microsoft Sentinel playbooks](playbook-triggers-actions.md#microsoft-sentinel-triggers-summary). :::image type="content" source="media/tutorial-detect-threats-custom/automated-response-tab.png" alt-text="Define the automated response settings"::: -1. Select **Review and create** to review all the settings for your new analytics rule. When the "Validation passed" message appears, select **Create**. 
+- Under **Alert automation (classic)** at the bottom of the screen, you'll see any playbooks you've configured to run automatically when an alert is generated using the old method. If you still have any of these, you should instead create an automation rule based on the **alert created trigger** and invoke the playbook from there. ++Select **Review and create** to review all the settings for your new analytics rule. When the "Validation passed" message appears, select **Create**. - :::image type="content" source="media/tutorial-detect-threats-custom/review-and-create-tab.png" alt-text="Review all settings and create the rule"::: ## View the rule and its output |
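For readers who configure rules programmatically rather than in the portal, the sketch below shows roughly how the event-grouping and alert-grouping settings described in this entry map onto a scheduled rule created through the Azure management REST API. It is illustrative only: the API version, the property names (`eventGroupingSettings`, `incidentConfiguration.groupingConfiguration`), and all IDs and token handling are assumptions to confirm against the Microsoft.SecurityInsights reference before use.

```typescript
// Illustrative sketch only: create a scheduled analytics rule with
// "alert per event" grouping and alert-to-incident grouping enabled.
// IDs, token, API version, and property names are assumptions to verify.
const subscription = "<subscription-id>";
const resourceGroup = "<resource-group>";
const workspace = "<sentinel-workspace>";
const ruleId = "<new-rule-guid>";
const armToken = "<bearer-token-for-management.azure.com>";

const url =
  `https://management.azure.com/subscriptions/${subscription}` +
  `/resourceGroups/${resourceGroup}/providers/Microsoft.OperationalInsights` +
  `/workspaces/${workspace}/providers/Microsoft.SecurityInsights` +
  `/alertRules/${ruleId}?api-version=2022-11-01`;

const rule = {
  kind: "Scheduled",
  properties: {
    displayName: "Suspicious sign-ins (per-event alerts)",   // hypothetical rule
    enabled: true,
    severity: "Medium",
    query: "SigninLogs | where ResultType != 0",
    queryFrequency: "PT1H",
    queryPeriod: "PT1H",
    triggerOperator: "GreaterThan",
    triggerThreshold: 0,
    suppressionEnabled: false,
    suppressionDuration: "PT1H",
    // "Trigger an alert for each event" from the Event grouping setting above.
    eventGroupingSettings: { aggregationKind: "AlertPerResult" },
    incidentConfiguration: {
      createIncident: true,
      // Group matching alerts into a single incident, as described above.
      groupingConfiguration: {
        enabled: true,
        matchingMethod: "AllEntities",
        reopenClosedIncident: false,
        lookbackDuration: "PT5H",
      },
    },
  },
};

async function createRule(): Promise<void> {
  const res = await fetch(url, {
    method: "PUT",
    headers: { "Content-Type": "application/json", Authorization: `Bearer ${armToken}` },
    body: JSON.stringify(rule),
  });
  console.log(res.status);
}

createRule().catch(console.error);
```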
sentinel | Playbook Triggers Actions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/playbook-triggers-actions.md | Though the Microsoft Sentinel connector can be used in a variety of ways, the co | Trigger (full name in Logic Apps Designer) | When to use it | Known limitations | | -- | -- | -| **Microsoft Sentinel incident (Preview)** | Recommended for most incident automation scenarios.<br><br>The playbook receives incident objects, including entities and alerts. Using this trigger allows the playbook to be attached to an **Automation rule**, so it can be triggered when an incident is created (and now, updated as well) in Microsoft Sentinel, and all the [benefits of automation rules](./automate-incident-handling-with-automation-rules.md) can be applied to the incident. | Playbooks with this trigger do not support alert grouping, meaning they will receive only the first alert sent with each incident. +| **Microsoft Sentinel incident (Preview)** | Recommended for most incident automation scenarios.<br><br>The playbook receives incident objects, including entities and alerts. Using this trigger allows the playbook to be attached to an **Automation rule**, so it can be triggered when an incident is created (and now, updated as well) in Microsoft Sentinel, and all the [benefits of automation rules](./automate-incident-handling-with-automation-rules.md) can be applied to the incident. | Playbooks with this trigger do not support alert grouping, meaning they will receive only the first alert sent with each incident.<br><br>**UPDATE**: As of February 2023, alert grouping is supported for this trigger. | **Microsoft Sentinel alert (Preview)** | Advisable for playbooks that need to be run on alerts manually from the Microsoft Sentinel portal, or for **scheduled** analytics rules that don't generate incidents for their alerts. | This trigger cannot be used to automate responses for alerts generated by **Microsoft security** analytics rules.<br><br>Playbooks using this trigger cannot be called by **automation rules**. | | **Microsoft Sentinel entity (Preview)** | To be used for playbooks that need to be run manually on specific entities from an investigation or threat hunting context. | Playbooks using this trigger cannot be called by **automation rules**. | |
sentinel | Whats New | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/whats-new.md | See these [important announcements](#announcements) about recent changes to feat ## February 2023 +- [New behavior for alert grouping in analytics rules](#new-behavior-for-alert-grouping-in-analytics-rules) (in [Announcements](#announcements) section below) - [Microsoft 365 Defender data connector is now generally available](#microsoft-365-defender-data-connector-is-now-generally-available) - [Advanced scheduling for analytics rules (Preview)](#advanced-scheduling-for-analytics-rules-preview) To give you more flexibility in scheduling your analytics rule execution times a [Learn more about advanced scheduling](detect-threats-custom.md#query-scheduling-and-alert-threshold). + ## January 2023 - [New incident investigation experience (Preview)](#new-incident-investigation-experience-preview) A [new version of the Microsoft Sentinel Logstash plugin](connect-logstash-data- ## Announcements +- [New behavior for alert grouping in analytics rules](#new-behavior-for-alert-grouping-in-analytics-rules) - [Microsoft 365 Defender now integrates Azure Active Directory Identity Protection (AADIP)](#microsoft-365-defender-now-integrates-azure-active-directory-identity-protection-aadip) - [Account enrichment fields removed from Azure AD Identity Protection connector](#account-enrichment-fields-removed-from-azure-ad-identity-protection-connector) - [Name fields removed from UEBA UserPeerAnalytics table](#name-fields-removed-from-ueba-userpeeranalytics-table) +### New behavior for alert grouping in analytics rules ++Starting **February 6, 2023** and continuing through the end of February, Microsoft Sentinel is rolling out a change in the way that incidents are created from analytics rules with certain event and alert grouping settings, and also the way that such incidents are updated by automation rules. This change is being made in order to produce incidents with more complete information and to simplify automation triggered by the creating and updating of incidents. ++The affected analytics rules are those with both of the following two settings: +- **Event grouping** is set to **Trigger an alert for each event** (sometimes referred to as "alert per row" or "alert per result"). +- **Alert grouping** is enabled, in any one of the [three possible configurations](detect-threats-custom.md#alert-grouping). ++#### The problem ++Rules with these two settings generate unique alerts for each event (result) returned by the query. These alerts are then all grouped together into a single incident (or a small number of incidents, depending on the alert grouping configuration choice). ++The problem is that the incident is created as soon as the first alert is generated, so at that point the incident contains only the first alert. The remaining alerts are joined to the incident, one after the other, as they are generated. So you end up with a *single running of an analytics rule* resulting in: +- One incident creation event *and* +- Up to 149 incident update events ++These circumstances result in unpredictable behavior when evaluating the conditions defined in automation rules or populating the incident schema in playbooks: ++- **Incorrect evaluation of an incident's conditions by automation rules:** ++ Automation rules on this incident will run immediately on its creation, even with just the one alert included. 
So the automation rule will only consider the incident's status as containing the first alert, even though other alerts are being created nearly simultaneously (by the same running of the analytics rule) and will continue being added while the automation rule is running. So you end up with a situation where the automation rule's evaluation of the incident is incomplete and likely incorrect. ++ If there are automation rules defined to run when the incident is *updated*, they will run again and again as each subsequent alert is added to the incident (even though the alerts were all generated by the same running of the analytics rule). So you'll have alerts being added and automation rules running, each time possibly incorrectly evaluating the conditions of the incident. ++ Automation rules' conditions might ignore entities that only later become part of the incident but weren't included in the first alert/creation of the incident. ++ In these cases, incorrect evaluation of an incident's condition may cause automation rules to run when they shouldn't, or not to run when they should. The result of this would be that the wrong actions would be taken on an incident, or that the right actions would not be taken. ++- **Information in later alerts being unavailable to playbooks run on the incident:** ++ When an automation rule calls a playbook, it passes the incident's detailed information to the playbook. Because of the behavior mentioned above, a playbook might only receive the details (entities, custom details, and so on) of the first alert in an incident, but not those from subsequent alerts. This means that the playbook's actions would not have access to all the information in the incident. ++#### The solution ++Going forward, instead of creating the incident as soon as the first alert is generated, Microsoft Sentinel will wait until a single running of an analytics rule has generated all of its alerts, and then create the incident, adding all the alerts to it at once. So instead of an incident creation event and a whole bunch of incident update events, you have only the incident creation event. ++Now, automation rules that run on the creation of an incident can evaluate the complete incident with all of its alerts (as well as entities and custom details) and its most updated properties, and any playbooks that run will similarly have the complete details of the incident. ++++The following table describes the change in the incident creation and automation behaviors: ++| When incident created/updated with multiple alerts | Before the change | After the change | +| -- | -- | -- | +| **Automation rule** conditions are evaluated based on... | The first alert generated by the current running of the analytics rule. | All alerts and entities resulting from the current running of the analytics rule. | +| **Playbook input** includes... | - Alerts list containing only the first alert of the incident.<br>- Entities list containing only entities from the first alert of the incident. | - Alerts list containing all the alerts triggered by this rule execution and grouped to this incident.<br>- Entities list containing the entities from all the alerts triggered by this rule execution and grouped to this incident. | +| **SecurityIncident** table in Log Analytics shows... | - One row for *incident created* with one alert.<br>- Multiple events of *alert added*. | One row for *incident created* only after all alerts triggered by this rule execution have been added and grouped to this incident. 
| + ### Microsoft 365 Defender now integrates Azure Active Directory Identity Protection (AADIP) As of **October 24, 2022**, [Microsoft 365 Defender](/microsoft-365/security/defender/) integrates [Azure Active Directory Identity Protection (AADIP)](../active-directory/identity-protection/index.yml) alerts and incidents. Customers can choose between three levels of integration: |
storage | Access Tiers Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/access-tiers-overview.md | Changing the access tier for a blob when versioning is enabled, or if the blob h - [Set a blob's access tier](access-tiers-online-manage.md) - [Archive a blob](archive-blob.md) - [Optimize costs by automatically managing the data lifecycle](lifecycle-management-overview.md)+- [Best practices for using blob access tiers](access-tiers-best-practices.md) |
storage | Lifecycle Management Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/lifecycle-management-overview.md | Unfortunately, there's no way to track the time at which the policy will be exec - [Configure a lifecycle management policy](lifecycle-management-policy-configure.md) - [Hot, Cool, and Archive access tiers for blob data](access-tiers-overview.md) - [Manage and find data on Azure Blob Storage with blob index](storage-manage-find-blobs.md)+- [Best practices for using blob access tiers](access-tiers-best-practices.md) |
storage | Storage Blob Container Create Javascript | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-create-javascript.md | -The [sample code snippets](https://github.com/Azure-Samples/AzureStorageSnippets/tree/master/blobs/howto/JavaScript/NodeJS-v12/dev-guide) are available in GitHub as runnable Node.js files. - ## Name a container A container name must be a valid DNS name, as it forms part of the unique URI used to address the container or its blobs. Follow these rules when naming a container: A root container, with the specific name `$root`, enables you to reference a blo The root container must be explicitly created or deleted. It isn't created by default as part of service creation. The same code displayed in the previous section can create the root. The container name is `$root`. -## See also +## Resources ++To learn more about creating a container using the Azure Blob Storage client library for JavaScript, see the following resources. ++### REST API operations ++The Azure SDK for JavaScript contains libraries that build on top of the Azure REST API, allowing you to interact with REST API operations through familiar JavaScript paradigms. The client library methods for creating a container use the following REST API operation: ++- [Create Container](/rest/api/storageservices/create-container) (REST API) ++### Code samples ++- [View code samples from this article (GitHub)](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/JavaScript/NodeJS-v12/dev-guide/create-container.js) -- [Get started with Azure Blob Storage and JavaScript](storage-blob-javascript-get-started.md)-- [Create Container operation](/rest/api/storageservices/create-container)-- [Delete Container operation](/rest/api/storageservices/delete-container) |
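This entry points to the Create Container REST operation that the client library calls under the hood. As a quick, non-authoritative illustration, here's a minimal TypeScript sketch that creates a container with the same `@azure/storage-blob` package; the connection string and container name are placeholders.

```typescript
import { BlobServiceClient } from "@azure/storage-blob";

// Minimal sketch: create a container with the JavaScript client library.
// The connection string and container name are placeholders.
const client = BlobServiceClient.fromConnectionString(
  process.env.AZURE_STORAGE_CONNECTION_STRING!
);

async function createContainer(containerName: string): Promise<void> {
  // Container names must be valid DNS names (lowercase letters, numbers, hyphens).
  const { containerClient, containerCreateResponse } =
    await client.createContainer(containerName);
  console.log(`Created ${containerClient.containerName}:`, containerCreateResponse.requestId);
}

createContainer("sample-container").catch(console.error);
```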
storage | Storage Blob Container Create | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-create.md | The following example creates the root container synchronously: :::code language="csharp" source="~/azure-storage-snippets/blobs/howto/dotnet/dotnet-v12/Containers.cs" id="CreateRootContainer"::: -## See also +## Resources -- [Get started with Azure Blob Storage and .NET](storage-blob-dotnet-get-started.md)-- [Create Container operation](/rest/api/storageservices/create-container)-- [Delete Container operation](/rest/api/storageservices/delete-container)+To learn more about creating a container using the Azure Blob Storage client library for .NET, see the following resources. ++### REST API operations ++The Azure SDK for .NET contains libraries that build on top of the Azure REST API, allowing you to interact with REST API operations through familiar .NET paradigms. The client library methods for creating a container use the following REST API operation: ++- [Create Container](/rest/api/storageservices/create-container) (REST API) + |
storage | Storage Blob Container Delete Javascript | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-delete-javascript.md | -The [sample code snippets](https://github.com/Azure-Samples/AzureStorageSnippets/tree/master/blobs/howto/JavaScript/NodeJS-v12/dev-guide) are available in GitHub as runnable Node.js files. - ## Delete a container To delete a container in JavaScript, create a [BlobServiceClient](storage-blob-javascript-get-started.md#create-a-blobserviceclient-object) or [ContainerClient](storage-blob-javascript-get-started.md#create-a-containerclient-object) then use one of the following methods: async function undeleteContainer(blobServiceClient, containerName) { } ``` -## See also +## Resources ++To learn more about deleting a container using the Azure Blob Storage client library for JavaScript, see the following resources. ++### REST API operations ++The Azure SDK for JavaScript contains libraries that build on top of the Azure REST API, allowing you to interact with REST API operations through familiar JavaScript paradigms. The client library methods for deleting or restoring a container use the following REST API operations: ++- [Delete Container](/rest/api/storageservices/delete-container) (REST API) +- [Restore Container](/rest/api/storageservices/restore-container) (REST API) ++### Code samples ++- [View code samples from this article (GitHub)](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/JavaScript/NodeJS-v12/dev-guide/delete-containers.js) +++### See also -- [Get started with Azure Blob Storage and JavaScript](storage-blob-javascript-get-started.md) - [Soft delete for containers](soft-delete-container-overview.md) - [Enable and manage soft delete for containers](soft-delete-container-enable.md)-- [Restore Container](/rest/api/storageservices/restore-container) |
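For context on the Delete Container and Restore Container operations referenced in this entry, the following TypeScript sketch (an illustration, not taken from the article) deletes a container and then restores it, assuming container soft delete is enabled on the account; the container name and connection string are placeholders.

```typescript
import { BlobServiceClient } from "@azure/storage-blob";

// Minimal sketch: delete a container, then restore it while container soft
// delete is enabled on the account. Names are placeholders.
const client = BlobServiceClient.fromConnectionString(
  process.env.AZURE_STORAGE_CONNECTION_STRING!
);

async function deleteThenRestore(containerName: string): Promise<void> {
  await client.deleteContainer(containerName);

  // Find the soft-deleted container's version, then restore it.
  for await (const item of client.listContainers({ includeDeleted: true })) {
    if (item.deleted && item.name === containerName && item.version) {
      await client.undeleteContainer(item.name, item.version);
      console.log(`Restored ${item.name}`);
    }
  }
}

deleteThenRestore("sample-container").catch(console.error);
```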
storage | Storage Blob Container Delete | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-delete.md | public static async Task RestoreContainer(BlobServiceClient client, string conta } ``` -## See also +## Resources ++To learn more about deleting a container using the Azure Blob Storage client library for .NET, see the following resources. ++### REST API operations ++The Azure SDK for .NET contains libraries that build on top of the Azure REST API, allowing you to interact with REST API operations through familiar .NET paradigms. The client library methods for deleting or restoring a container use the following REST API operations: ++- [Delete Container](/rest/api/storageservices/delete-container) (REST API) +- [Restore Container](/rest/api/storageservices/restore-container) (REST API) +++### See also -- [Get started with Azure Blob Storage and .NET](storage-blob-dotnet-get-started.md) - [Soft delete for containers](soft-delete-container-overview.md) - [Enable and manage soft delete for containers](soft-delete-container-enable.md)-- [Restore Container](/en-us/rest/api/storageservices/restore-container) |
storage | Storage Blob Container Properties Metadata Javascript | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-properties-metadata-javascript.md | To retrieve metadata, [get the container properties](#retrieve-container-propert - [ContainerClient.getProperties](/javascript/api/@azure/storage-blob/containerclient#@azure-storage-blob-containerclient-getproperties) +## Resources -## See also +To learn more about setting and retrieving container properties and metadata using the Azure Blob Storage client library for JavaScript, see the following resources. -- [Get started with Azure Blob Storage and JavaScript](storage-blob-javascript-get-started.md)-- [Get Container Properties operation](/rest/api/storageservices/get-container-properties)-- [Set Container Metadata operation](/rest/api/storageservices/set-container-metadata)-- [Get Container Metadata operation](/rest/api/storageservices/get-container-metadata)+### REST API operations ++The Azure SDK for JavaScript contains libraries that build on top of the Azure REST API, allowing you to interact with REST API operations through familiar JavaScript paradigms. The client library methods for setting and retrieving properties and metadata use the following REST API operations: ++- [Get Container Properties](/rest/api/storageservices/get-container-properties) (REST API) +- [Set Container Metadata](/rest/api/storageservices/set-container-metadata) (REST API) +- [Get Container Metadata](/rest/api/storageservices/get-container-metadata) (REST API) ++The `getProperties` method retrieves container properties and metadata by calling both the [Get Blob Properties](/rest/api/storageservices/get-blob-properties) operation and the [Get Blob Metadata](/rest/api/storageservices/get-blob-metadata) operation. ++### Code samples ++- [View code samples from this article (GitHub)](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/JavaScript/NodeJS-v12/dev-guide/container-set-properties-and-metadata.js) + |
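As a small illustration of the Set/Get Container Metadata and Get Container Properties operations listed in this entry, here's a hedged TypeScript sketch using `@azure/storage-blob`; the container name and metadata values are placeholders.

```typescript
import { BlobServiceClient } from "@azure/storage-blob";

// Minimal sketch: set container metadata, then read it back along with
// system properties. Names and metadata values are placeholders.
const client = BlobServiceClient.fromConnectionString(
  process.env.AZURE_STORAGE_CONNECTION_STRING!
);

async function setAndReadMetadata(containerName: string): Promise<void> {
  const containerClient = client.getContainerClient(containerName);

  // Metadata is a set of simple name/value pairs associated with the container.
  await containerClient.setMetadata({ department: "finance", project: "q3report" });

  // getProperties returns system properties plus the metadata map.
  const properties = await containerClient.getProperties();
  console.log("Last modified:", properties.lastModified);
  console.log("Metadata:", properties.metadata);
}

setAndReadMetadata("sample-container").catch(console.error);
```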
storage | Storage Blob Container Properties Metadata | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-properties-metadata.md | Then, read the values, as shown in the example below. :::code language="csharp" source="~/azure-storage-snippets/blobs/howto/dotnet/dotnet-v12/Metadata.cs" id="Snippet_ReadContainerMetadata"::: -## See also +## Resources -- [Get started with Azure Blob Storage and .NET](storage-blob-dotnet-get-started.md)-- [Get Container Properties operation](/rest/api/storageservices/get-container-properties)-- [Set Container Metadata operation](/rest/api/storageservices/set-container-metadata)-- [Get Container Metadata operation](/rest/api/storageservices/get-container-metadata)+To learn more about setting and retrieving container properties and metadata using the Azure Blob Storage client library for .NET, see the following resources. ++### REST API operations ++The Azure SDK for .NET contains libraries that build on top of the Azure REST API, allowing you to interact with REST API operations through familiar .NET paradigms. The client library methods for setting and retrieving properties and metadata use the following REST API operations: ++- [Get Container Properties](/rest/api/storageservices/get-container-properties) (REST API) +- [Set Container Metadata](/rest/api/storageservices/set-container-metadata) (REST API) +- [Get Container Metadata](/rest/api/storageservices/get-container-metadata) (REST API) ++The `GetProperties` and `GetPropertiesAsync` methods retrieve container properties and metadata by calling both the [Get Blob Properties](/rest/api/storageservices/get-blob-properties) operation and the [Get Blob Metadata](/rest/api/storageservices/get-blob-metadata) operation. + |
storage | Storage Blob Containers List Javascript | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-containers-list-javascript.md | -The [sample code snippets](https://github.com/Azure-Samples/AzureStorageSnippets/tree/master/blobs/howto/JavaScript/NodeJS-v12/dev-guide) are available in GitHub as runnable Node.js files. - ## Understand container listing options To list containers in your storage account, create a [BlobServiceClient](storage-blob-javascript-get-started.md#create-a-blobserviceclient-object) object then call the following method: async function listContainers(blobServiceClient, containerNamePrefix) { } ``` +## Resources ++To learn more about listing containers using the Azure Blob Storage client library for JavaScript, see the following resources. ++### REST API operations ++The Azure SDK for JavaScript contains libraries that build on top of the Azure REST API, allowing you to interact with REST API operations through familiar JavaScript paradigms. The client library methods for listing containers use the following REST API operation: ++- [List Containers](/rest/api/storageservices/list-containers2) (REST API) ++### Code samples ++- [View code samples from this article (GitHub)](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/JavaScript/NodeJS-v12/dev-guide/list-containers.js) ++ ## See also -- [Get started with Azure Blob Storage and JavaScript](storage-blob-dotnet-get-started.md)-- [List Containers](/rest/api/storageservices/list-containers2) - [Enumerating Blob Resources](/rest/api/storageservices/enumerating-blob-resources) |
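To make the List Containers operation referenced here concrete, the following TypeScript sketch (illustrative only) lists containers that match a prefix and prints their metadata; the prefix and connection string are placeholders.

```typescript
import { BlobServiceClient } from "@azure/storage-blob";

// Minimal sketch: list containers whose names start with a prefix,
// including their metadata. The prefix is a placeholder.
const client = BlobServiceClient.fromConnectionString(
  process.env.AZURE_STORAGE_CONNECTION_STRING!
);

async function listContainers(prefix: string): Promise<void> {
  for await (const container of client.listContainers({ prefix, includeMetadata: true })) {
    console.log(container.name, container.metadata ?? {});
  }
}

listContainers("sample-").catch(console.error);
```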
storage | Storage Blob Containers List | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-containers-list.md | The following example asynchronously lists the containers in a storage account t :::code language="csharp" source="~/azure-storage-snippets/blobs/howto/dotnet/dotnet-v12/Containers.cs" id="Snippet_ListContainers"::: +## Resources ++To learn more about listing containers using the Azure Blob Storage client library for .NET, see the following resources. ++### REST API operations ++The Azure SDK for .NET contains libraries that build on top of the Azure REST API, allowing you to interact with REST API operations through familiar .NET paradigms. The client library methods for listing containers use the following REST API operation: ++- [List Containers](/rest/api/storageservices/list-containers2) (REST API) ++ ## See also -- [Get started with Azure Blob Storage and .NET](storage-blob-dotnet-get-started.md)-- [List Containers](/rest/api/storageservices/list-containers2) - [Enumerating Blob Resources](/rest/api/storageservices/enumerating-blob-resources) |
storage | Storage Blob Copy Javascript | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-copy-javascript.md | -The [sample code snippets](https://github.com/Azure-Samples/AzureStorageSnippets/tree/master/blobs/howto/JavaScript/NodeJS-v12/dev-guide) are available in GitHub as runnable Node.js files. - > [!NOTE] > The examples in this article assume that you've created a [BlobServiceClient](/javascript/api/@azure/storage-blob/blobserviceclient) object by using the guidance in the [Get started with Azure Blob Storage and JavaScript](storage-blob-javascript-get-started.md) article. Blobs in Azure Storage are organized into containers. Before you can upload a blob, you must first create a container. To learn how to create a container, see [Create a container in Azure Storage with JavaScript](storage-blob-container-create.md). async function copyThenAbortBlob( Aborting a copy operation, with [BlobClient.abortCopyFromURL](/javascript/api/@azure/storage-blob/blobclient#@azure-storage-blob-blobclient-abortcopyfromurl) results in a destination blob of zero length. However, the metadata for the destination blob will have the new values copied from the source blob or set explicitly during the copy operation. To keep the original metadata from before the copy, make a snapshot of the destination blob before calling one of the copy methods. The final blob will be committed when the copy completes. -## See also +## Resources ++To learn more about copying blobs using the Azure Blob Storage client library for JavaScript, see the following resources. ++### REST API operations ++The Azure SDK for JavaScript contains libraries that build on top of the Azure REST API, allowing you to interact with REST API operations through familiar JavaScript paradigms. The client library methods for copying blobs use the following REST API operations: ++- [Copy Blob](/rest/api/storageservices/copy-blob) (REST API) +- [Copy Blob From URL](/rest/api/storageservices/copy-blob-from-url) (REST API) +- [Abort Copy Blob](/rest/api/storageservices/abort-copy-blob) (REST API) ++### Code samples ++- [View code samples from this article (GitHub)](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/JavaScript/NodeJS-v12/dev-guide/copy-blob.js) -- [Copy Blob](/rest/api/storageservices/copy-blob)-- [Abort Copy Blob](/rest/api/storageservices/abort-copy-blob)-- [Get started with Azure Blob Storage and JavaScript](storage-blob-javascript-get-started.md) |
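For a concrete picture of the Copy Blob and Abort Copy Blob operations referenced in this entry, here's a minimal TypeScript sketch that starts an asynchronous copy within one storage account and waits for it to finish; container and blob names are placeholders, and the abort call is shown only as a comment because it applies to a copy that is still pending.

```typescript
import { BlobServiceClient } from "@azure/storage-blob";

// Minimal sketch: copy a blob within the same account and wait for the copy
// to finish. Container and blob names are placeholders.
const client = BlobServiceClient.fromConnectionString(
  process.env.AZURE_STORAGE_CONNECTION_STRING!
);

async function copyBlob(): Promise<void> {
  const sourceBlob = client.getContainerClient("source-container").getBlobClient("data.txt");
  const destBlob = client.getContainerClient("dest-container").getBlobClient("data-copy.txt");

  // beginCopyFromURL starts an asynchronous copy and returns a poller.
  const poller = await destBlob.beginCopyFromURL(sourceBlob.url);
  const result = await poller.pollUntilDone();
  console.log("Copy status:", result.copyStatus);

  // To cancel a still-pending copy instead, capture the copy ID and call:
  // await destBlob.abortCopyFromURL(result.copyId!);
}

copyBlob().catch(console.error);
```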
storage | Storage Blob Copy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-copy.md | The [AbortCopyFromUri](/dotnet/api/azure.storage.blobs.specialized.blobbaseclien :::code language="csharp" source="~/azure-storage-snippets/blobs/howto/dotnet/dotnet-v12/CopyBlob.cs" id="Snippet_StopBlobCopy"::: -## See also +## Resources -- [Copy Blob](/rest/api/storageservices/copy-blob)-- [Abort Copy Blob](/rest/api/storageservices/abort-copy-blob)-- [Get started with Azure Blob Storage and .NET](storage-blob-dotnet-get-started.md)+To learn more about copying blobs using the Azure Blob Storage client library for .NET, see the following resources. ++### REST API operations ++The Azure SDK for .NET contains libraries that build on top of the Azure REST API, allowing you to interact with REST API operations through familiar .NET paradigms. The client library methods for copying blobs use the following REST API operations: ++- [Copy Blob](/rest/api/storageservices/copy-blob) (REST API) +- [Copy Blob From URL](/rest/api/storageservices/copy-blob-from-url) (REST API) +- [Abort Copy Blob](/rest/api/storageservices/abort-copy-blob) (REST API) + |
storage | Storage Blob Delete Javascript | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-delete-javascript.md | -The [sample code snippets](https://github.com/Azure-Samples/AzureStorageSnippets/tree/master/blobs/howto/JavaScript/NodeJS-v12/dev-guide) are available in GitHub as runnable Node.js files. - > [!NOTE] > The examples in this article assume that you've created a [BlobServiceClient](/javascript/api/@azure/storage-blob/blobserviceclient) object by using the guidance in the [Get started with Azure Blob Storage and JavaScript](storage-blob-javascript-get-started.md) article. Blobs in Azure Storage are organized into containers. Before you can upload a blob, you must first create a container. To learn how to create a container, see [Create a container in Azure Storage with JavaScript](storage-blob-container-create.md). async function undeleteBlob(containerClient, blobName){ } ``` -## See also +## Resources ++To learn more about how to delete blobs and restore deleted blobs using the Azure Blob Storage client library for JavaScript, see the following resources. ++### REST API operations ++The Azure SDK for JavaScript contains libraries that build on top of the Azure REST API, allowing you to interact with REST API operations through familiar JavaScript paradigms. The client library methods for deleting blobs and restoring deleted blobs use the following REST API operations: -- [Get started with Azure Blob Storage and JavaScript](storage-blob-javascript-get-started.md) - [Delete Blob](/rest/api/storageservices/delete-blob) (REST API)-- [Soft delete for blobs](soft-delete-blob-overview.md) - [Undelete Blob](/rest/api/storageservices/undelete-blob) (REST API)++### Code samples ++- [View code samples from this article (GitHub)](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/JavaScript/NodeJS-v12/dev-guide/delete-blob.js) +++### See also ++- [Soft delete for blobs](soft-delete-blob-overview.md) +- [Blob versioning](versioning-overview.md) |
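As an illustration of the Delete Blob and Undelete Blob operations listed here, the following TypeScript sketch deletes a blob (including its snapshots) and then restores it, assuming blob soft delete is enabled on the account; names are placeholders.

```typescript
import { BlobServiceClient } from "@azure/storage-blob";

// Minimal sketch: delete a blob and its snapshots, then restore it while
// blob soft delete is enabled. Names are placeholders.
const client = BlobServiceClient.fromConnectionString(
  process.env.AZURE_STORAGE_CONNECTION_STRING!
);

async function deleteThenRestore(containerName: string, blobName: string): Promise<void> {
  const blobClient = client.getContainerClient(containerName).getBlobClient(blobName);

  // deleteSnapshots: "include" removes the base blob and its snapshots together.
  await blobClient.delete({ deleteSnapshots: "include" });

  // undelete restores the soft-deleted blob and any soft-deleted snapshots.
  await blobClient.undelete();
  console.log(`Restored ${blobName}`);
}

deleteThenRestore("sample-container", "sample-blob.txt").catch(console.error);
```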
storage | Storage Blob Delete | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-delete.md | public static void RestoreBlobsWithVersioning(BlobContainerClient container, Blo } ``` -## See also +## Resources ++To learn more about how to delete blobs and restore deleted blobs using the Azure Blob Storage client library for .NET, see the following resources. ++### REST API operations ++The Azure SDK for .NET contains libraries that build on top of the Azure REST API, allowing you to interact with REST API operations through familiar .NET paradigms. The client library methods for deleting blobs and restoring deleted blobs use the following REST API operations: -- [Get started with Azure Blob Storage and .NET](storage-blob-dotnet-get-started.md) - [Delete Blob](/rest/api/storageservices/delete-blob) (REST API)-- [Soft delete for blobs](soft-delete-blob-overview.md) - [Undelete Blob](/rest/api/storageservices/undelete-blob) (REST API)+++### See also ++- [Soft delete for blobs](soft-delete-blob-overview.md) +- [Blob versioning](versioning-overview.md) |
storage | Storage Blob Download Javascript | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-download-javascript.md | This article shows how to download a blob using the [Azure Storage client librar - [BlobClient.downloadToBuffer](/javascript/api/@azure/storage-blob/blobclient#@azure-storage-blob-blobclient-downloadtobuffer-1) (only available in Node.js runtime) - [BlobClient.downloadToFile](/javascript/api/@azure/storage-blob/blobclient#@azure-storage-blob-blobclient-downloadtofile) (only available in Node.js runtime) -The [sample code snippets](https://github.com/Azure-Samples/AzureStorageSnippets/tree/master/blobs/howto/JavaScript/NodeJS-v12/dev-guide) are available in GitHub as runnable Node.js files. - > [!NOTE] > The examples in this article assume that you've created a [BlobServiceClient](/javascript/api/@azure/storage-blob/blobserviceclient) object by using the guidance in the [Get started with Azure Blob Storage and JavaScript](storage-blob-javascript-get-started.md) article. Blobs in Azure Storage are organized into containers. Before you can upload a blob, you must first create a container. To learn how to create a container, see [Create a container in Azure Storage with JavaScript](storage-blob-container-create.md). async function streamToBuffer(readableStream) { If you're working with JavaScript in the browser, blob data returns in a promise [blobBody](/javascript/api/@azure/storage-blob/blobdownloadresponseparsed#@azure-storage-blob-blobdownloadresponseparsed-blobbody). To learn more, see the example usage for browsers at [BlobClient.download](/javascript/api/@azure/storage-blob/blobclient#@azure-storage-blob-blobclient-download). -## See also +## Resources ++To learn more about how to download blobs using the Azure Blob Storage client library for JavaScript, see the following resources. ++### REST API operations ++The Azure SDK for JavaScript contains libraries that build on top of the Azure REST API, allowing you to interact with REST API operations through familiar JavaScript paradigms. The client library methods for downloading blobs use the following REST API operation: -- [Get started with Azure Blob Storage and JavaScript](storage-blob-javascript-get-started.md)-- [DownloadStreaming]() - [Get Blob](/rest/api/storageservices/get-blob) (REST API)++### Code samples ++View code samples from this article (GitHub): +- [Download to file](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/JavaScript/NodeJS-v12/dev-guide/download-blob-to-file.js) +- [Download to stream](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/JavaScript/NodeJS-v12/dev-guide/download-blob-to-stream.js) +- [Download to string](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/JavaScript/NodeJS-v12/dev-guide/download-blob-to-string.js) + |
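To complement the Get Blob operation referenced in this entry, here's a hedged TypeScript sketch that downloads a blob to a string by reading its Node.js stream; `downloadToBuffer` or `downloadToFile` would work equally well for binary content. Container and blob names are placeholders.

```typescript
import { BlobServiceClient } from "@azure/storage-blob";

// Minimal sketch (Node.js runtime): download a blob to a string by reading
// its readable stream. Container and blob names are placeholders.
const client = BlobServiceClient.fromConnectionString(
  process.env.AZURE_STORAGE_CONNECTION_STRING!
);

async function downloadToString(containerName: string, blobName: string): Promise<string> {
  const blobClient = client.getContainerClient(containerName).getBlobClient(blobName);
  const response = await blobClient.download(0);     // offset 0 reads the whole blob

  const chunks: Buffer[] = [];
  for await (const chunk of response.readableStreamBody!) {
    chunks.push(Buffer.isBuffer(chunk) ? chunk : Buffer.from(chunk));
  }
  return Buffer.concat(chunks).toString("utf8");
}

downloadToString("sample-container", "sample-blob.txt")
  .then((text) => console.log(text))
  .catch(console.error);
```

Streaming the body keeps memory bounded for large blobs compared with buffering the whole blob up front.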
storage | Storage Blob Download | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-download.md | public static async Task DownloadfromStream(BlobClient blobClient, string localF ``` -## See also +## Resources ++To learn more about how to download blobs using the Azure Blob Storage client library for .NET, see the following resources. ++### REST API operations ++The Azure SDK for .NET contains libraries that build on top of the Azure REST API, allowing you to interact with REST API operations through familiar .NET paradigms. The client library methods for downloading blobs use the following REST API operation: -- [Get started with Azure Blob Storage and .NET](storage-blob-dotnet-get-started.md)-- [DownloadStreaming](/dotnet/api/azure.storage.blobs.specialized.blobbaseclient.downloadstreaming) / [DownloadStreamingAsync](/dotnet/api/azure.storage.blobs.specialized.blobbaseclient.downloadstreamingasync) - [Get Blob](/rest/api/storageservices/get-blob) (REST API)+ |
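As a quick illustration of the Get Blob mapping above, here's a minimal C# sketch that downloads a blob with the .NET client library, first to a local file and then as a stream. The local file path and the line-by-line read are illustrative assumptions, not part of the article's snippet.

```csharp
using System;
using System.IO;
using System.Threading.Tasks;
using Azure.Storage.Blobs;
using Azure.Storage.Blobs.Models;

public static class BlobDownloadExample
{
    // Downloads a blob to a local file, then reads the same blob as a stream.
    public static async Task DownloadBlobAsync(BlobClient blobClient, string localFilePath)
    {
        // Writes the blob contents to a local file (Get Blob REST operation).
        await blobClient.DownloadToAsync(localFilePath);

        // Alternatively, stream the content and process it as it arrives.
        BlobDownloadStreamingResult result = await blobClient.DownloadStreamingAsync();
        using (var reader = new StreamReader(result.Content))
        {
            string firstLine = await reader.ReadLineAsync();
            Console.WriteLine($"First line of blob: {firstLine}");
        }
    }
}
```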
storage | Storage Blob Properties Metadata Javascript | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-properties-metadata-javascript.md | -The [sample code snippets](https://github.com/Azure-Samples/AzureStorageSnippets/tree/master/blobs/howto/JavaScript/NodeJS-v12/dev-guide) are available in GitHub as runnable Node.js files. - ## About properties and metadata - **System properties**: System properties exist on each Blob storage resource. Some of them can be read or set, while others are read-only. Under the covers, some system properties correspond to certain standard HTTP headers. The Azure Storage client library for JavaScript maintains these properties for you. my-blob.txt properties: objectReplicationSourceProperties: ``` +## Resources ++To learn more about how to manage system properties and user-defined metadata using the Azure Blob Storage client library for JavaScript, see the following resources. ++### REST API operations ++The Azure SDK for JavaScript contains libraries that build on top of the Azure REST API, allowing you to interact with REST API operations through familiar JavaScript paradigms. The client library methods for managing system properties and user-defined metadata use the following REST API operations: ++- [Set Blob Properties](/rest/api/storageservices/set-blob-properties) (REST API) +- [Get Blob Properties](/rest/api/storageservices/get-blob-properties) (REST API) +- [Set Blob Metadata](/rest/api/storageservices/set-blob-metadata) (REST API) +- [Get Blob Metadata](/rest/api/storageservices/get-blob-metadata) (REST API) ++### Code samples -## See also +- [View code samples from this article (GitHub)](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/JavaScript/NodeJS-v12/dev-guide/blob-set-properties-and-metadata.js) -- [Set Blob Properties operation](/rest/api/storageservices/set-blob-properties)-- [Get Blob Properties operation](/rest/api/storageservices/get-blob-properties)-- [Set Blob Metadata operation](/rest/api/storageservices/set-blob-metadata)-- [Get Blob Metadata operation](/rest/api/storageservices/get-blob-metadata) |
storage | Storage Blob Properties Metadata | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-properties-metadata.md | To retrieve metadata, call the [GetProperties](/dotnet/api/azure.storage.blobs.s :::code language="csharp" source="~/azure-storage-snippets/blobs/howto/dotnet/dotnet-v12/Metadata.cs" id="Snippet_ReadBlobMetadata"::: -## See also +## Resources -- [Set Blob Properties operation](/rest/api/storageservices/set-blob-properties)-- [Get Blob Properties operation](/rest/api/storageservices/get-blob-properties)-- [Set Blob Metadata operation](/rest/api/storageservices/set-blob-metadata)-- [Get Blob Metadata operation](/rest/api/storageservices/get-blob-metadata)+To learn more about how to manage system properties and user-defined metadata using the Azure Blob Storage client library for .NET, see the following resources. ++### REST API operations ++The Azure SDK for .NET contains libraries that build on top of the Azure REST API, allowing you to interact with REST API operations through familiar .NET paradigms. The client library methods for managing system properties and user-defined metadata use the following REST API operations: ++- [Set Blob Properties](/rest/api/storageservices/set-blob-properties) (REST API) +- [Get Blob Properties](/rest/api/storageservices/get-blob-properties) (REST API) +- [Set Blob Metadata](/rest/api/storageservices/set-blob-metadata) (REST API) +- [Get Blob Metadata](/rest/api/storageservices/get-blob-metadata) (REST API) + |
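Here's a minimal C# sketch of the set/get round trip described above, using the .NET client library. The metadata keys ("docType", "category") are placeholder values, and the sketch assumes the BlobClient is already created and authorized as in the get-started article.

```csharp
using System;
using System.Collections.Generic;
using System.Threading.Tasks;
using Azure.Storage.Blobs;
using Azure.Storage.Blobs.Models;

public static class BlobMetadataExample
{
    // Sets user-defined metadata on a blob, then reads the metadata and
    // a few system properties back.
    public static async Task SetAndReadMetadataAsync(BlobClient blobClient)
    {
        // Set Blob Metadata REST operation: replaces any existing metadata.
        var metadata = new Dictionary<string, string>
        {
            { "docType", "textDocuments" },
            { "category", "guidance" }
        };
        await blobClient.SetMetadataAsync(metadata);

        // Get Blob Properties REST operation: returns system properties and metadata together.
        BlobProperties properties = await blobClient.GetPropertiesAsync();
        Console.WriteLine($"Content type: {properties.ContentType}");
        Console.WriteLine($"Last modified: {properties.LastModified}");
        foreach (var item in properties.Metadata)
        {
            Console.WriteLine($"{item.Key}: {item.Value}");
        }
    }
}
```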
storage | Storage Blob Tags Javascript | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-tags-javascript.md | Blob index tags categorize data in your storage account using key-value tag attr To learn more about this feature along with known issues and limitations, see [Manage and find Azure Blob data with blob index tags](storage-manage-find-blobs.md). -The [sample code snippets](https://github.com/Azure-Samples/AzureStorageSnippets/tree/master/blobs/howto/JavaScript/NodeJS-v12/dev-guide) are available in GitHub as runnable Node.js files. - > [!NOTE] > The examples in this article assume that you've created a [BlobServiceClient](/javascript/api/@azure/storage-blob/blobserviceclient) object by using the guidance in the [Get started with Azure Blob Storage and JavaScript](storage-blob-javascript-get-started.md) article. Blobs in Azure Storage are organized into containers. Before you can upload a blob, you must first create a container. To learn how to create a container, see [Create a container in Azure Storage with JavaScript](storage-blob-container-create.md). And example output for this function shows the matched blobs and their tags, bas |-| |Blob 1: set-tags-1650565920363-query-by-tag-blob-a-1.txt - {"createdOn":"2022-01","owner":"PhillyProject","project":"set-tags-1650565920363"}| -## See also +## Resources ++To learn more about how to use index tags to manage and find data using the Azure Blob Storage client library for JavaScript, see the following resources. ++### REST API operations ++The Azure SDK for JavaScript contains libraries that build on top of the Azure REST API, allowing you to interact with REST API operations through familiar JavaScript paradigms. The client library methods for managing and using blob index tags use the following REST API operations: -- [Manage and find Azure Blob data with blob index tags](storage-manage-find-blobs.md) - [Get Blob Tags](/rest/api/storageservices/get-blob-tags) (REST API)+- [Set Blob Tags](/rest/api/storageservices/set-blob-tags) (REST API) - [Find Blobs by Tags](/rest/api/storageservices/find-blobs-by-tags) (REST API)++### Code samples ++- [View code samples from this article (GitHub)](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/JavaScript/NodeJS-v12/dev-guide/set-and-retrieve-blob-tags.js) +++### See also ++- [Manage and find Azure Blob data with blob index tags](storage-manage-find-blobs.md) +- [Use blob index tags to manage and find data on Azure Blob Storage](storage-blob-index-how-to.md) |
storage | Storage Blob Tags | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-tags.md | public static async Task FindBlobsbyTags(BlobServiceClient serviceClient) ``` -## See also +## Resources ++To learn more about how to use index tags to manage and find data using the Azure Blob Storage client library for .NET, see the following resources. ++### REST API operations ++The Azure SDK for .NET contains libraries that build on top of the Azure REST API, allowing you to interact with REST API operations through familiar .NET paradigms. The client library methods for managing and using blob index tags use the following REST API operations: -- [Manage and find Azure Blob data with blob index tags](storage-manage-find-blobs.md) - [Get Blob Tags](/rest/api/storageservices/get-blob-tags) (REST API)+- [Set Blob Tags](/rest/api/storageservices/set-blob-tags) (REST API) - [Find Blobs by Tags](/rest/api/storageservices/find-blobs-by-tags) (REST API)+++### See also ++- [Manage and find Azure Blob data with blob index tags](storage-manage-find-blobs.md) +- [Use blob index tags to manage and find data on Azure Blob Storage](storage-blob-index-how-to.md) |
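To make the tag operations above concrete, here's a minimal C# sketch that sets index tags, reads them back, and runs a Find Blobs by Tags query with the .NET client library. The tag names and the filter expression are illustrative assumptions, not values from the article.

```csharp
using System;
using System.Collections.Generic;
using System.Threading.Tasks;
using Azure.Storage.Blobs;
using Azure.Storage.Blobs.Models;

public static class BlobTagsExample
{
    // Sets index tags on a blob, reads them back, and finds matching blobs
    // across the storage account.
    public static async Task SetAndFindTagsAsync(BlobServiceClient serviceClient, BlobClient blobClient)
    {
        // Set Blob Tags REST operation: replaces all existing tags on the blob.
        var tags = new Dictionary<string, string>
        {
            { "project", "contoso" },
            { "status", "processed" }
        };
        await blobClient.SetTagsAsync(tags);

        // Get Blob Tags REST operation.
        GetBlobTagResult tagResult = await blobClient.GetTagsAsync();
        foreach (var tag in tagResult.Tags)
        {
            Console.WriteLine($"{tag.Key}={tag.Value}");
        }

        // Find Blobs by Tags REST operation: the filter uses index tag query syntax.
        string filter = @"""status"" = 'processed'";
        await foreach (TaggedBlobItem item in serviceClient.FindBlobsByTagsAsync(filter))
        {
            Console.WriteLine($"{item.BlobContainerName}/{item.BlobName}");
        }
    }
}
```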
storage | Storage Blob Upload Javascript | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-upload-javascript.md | -The [sample code snippets](https://github.com/Azure-Samples/AzureStorageSnippets/tree/master/blobs/howto/JavaScript/NodeJS-v12/dev-guide) are available in GitHub as runnable Node.js files. - > [!NOTE] > The examples in this article assume that you've created a [BlobServiceClient](/javascript/api/@azure/storage-blob/blobserviceclient) object by using the guidance in the [Get started with Azure Blob Storage and JavaScript](storage-blob-javascript-get-started.md) article. Blobs in Azure Storage are organized into containers. Before you can upload a blob, you must first create a container. To learn how to create a container, see [Create a container in Azure Storage with JavaScript](storage-blob-container-create.md). The following example uploads a string to blob storage with the [BlockBlobClient :::code language="javascript" source="~/azure_storage-snippets/blobs/howto/JavaScript/NodeJS-v12/dev-guide/upload-blob-from-string.js" id="Snippet_UploadBlob" highlight="14"::: -## See also +## Resources ++To learn more about uploading blobs using the Azure Blob Storage client library for JavaScript, see the following resources. ++### REST API operations ++The Azure SDK for JavaScript contains libraries that build on top of the Azure REST API, allowing you to interact with REST API operations through familiar JavaScript paradigms. The client library methods for uploading blobs use the following REST API operations: -- [Manage and find Azure Blob data with blob index tags](storage-manage-find-blobs.md)-- [Use blob index tags to manage and find data on Azure Blob Storage](storage-blob-index-how-to.md) - [Put Blob](/rest/api/storageservices/put-blob) (REST API) - [Put Blob From URL](/rest/api/storageservices/put-blob-from-url) (REST API)++### Code samples ++View code samples from this article (GitHub): ++- [Upload from local file path](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/JavaScript/NodeJS-v12/dev-guide/upload-blob-from-local-file-path.js) +- [Upload from buffer](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/JavaScript/NodeJS-v12/dev-guide/upload-blob-from-buffer.js) +- [Upload from stream](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/JavaScript/NodeJS-v12/dev-guide/upload-blob-from-stream.js) +- [Upload from string](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/JavaScript/NodeJS-v12/dev-guide/upload-blob-from-string.js) +++### See also ++- [Manage and find Azure Blob data with blob index tags](storage-manage-find-blobs.md) +- [Use blob index tags to manage and find data on Azure Blob Storage](storage-blob-index-how-to.md) |
storage | Storage Blob Upload | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-upload.md | public static async Task UploadInBlocks } ``` -## See also +## Resources ++To learn more about uploading blobs using the Azure Blob Storage client library for .NET, see the following resources. ++### REST API operations ++The Azure SDK for .NET contains libraries that build on top of the Azure REST API, allowing you to interact with REST API operations through familiar .NET paradigms. The client library methods for uploading blobs use the following REST API operations: -- [Manage and find Azure Blob data with blob index tags](storage-manage-find-blobs.md)-- [Use blob index tags to manage and find data on Azure Blob Storage](storage-blob-index-how-to.md) - [Put Blob](/rest/api/storageservices/put-blob) (REST API) - [Put Blob From URL](/rest/api/storageservices/put-blob-from-url) (REST API)++### See also ++- [Manage and find Azure Blob data with blob index tags](storage-manage-find-blobs.md) +- [Use blob index tags to manage and find data on Azure Blob Storage](storage-blob-index-how-to.md) |
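Here's a minimal C# sketch of the upload paths described above (a local file and in-memory content) using the .NET client library. The blob names and sample text are placeholders; the sketch assumes the container already exists and the client is authorized as in the get-started article.

```csharp
using System.IO;
using System.Text;
using System.Threading.Tasks;
using Azure.Storage.Blobs;

public static class BlobUploadExample
{
    // Uploads a local file and an in-memory string as block blobs.
    public static async Task UploadBlobsAsync(BlobContainerClient containerClient, string localFilePath)
    {
        // Upload from a file path; for small content this maps to the Put Blob REST operation.
        BlobClient fileBlob = containerClient.GetBlobClient(Path.GetFileName(localFilePath));
        await fileBlob.UploadAsync(localFilePath, overwrite: true);

        // Upload from a stream built over an in-memory string.
        BlobClient stringBlob = containerClient.GetBlobClient("sample-text.txt");
        using var stream = new MemoryStream(Encoding.UTF8.GetBytes("Hello, Blob Storage"));
        await stringBlob.UploadAsync(stream, overwrite: true);
    }
}
```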
storage | Storage Blobs List Javascript | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blobs-list-javascript.md | This article shows how to list blobs using the [Azure Storage client library for When you list blobs from your code, you can specify a number of options to manage how results are returned from Azure Storage. You can specify the number of results to return in each set of results, and then retrieve the subsequent sets. You can specify a prefix to return blobs whose names begin with that character or string. And you can list blobs in a flat listing structure, or hierarchically. A hierarchical listing returns blobs as though they were organized into folders. -The [sample code snippets](https://github.com/Azure-Samples/AzureStorageSnippets/tree/master/blobs/howto/JavaScript/NodeJS-v12/dev-guide) are available in GitHub as runnable Node.js files. - ## Understand blob listing options To list the blobs in a storage account, create a [ContainerClient](storage-blob-javascript-get-started.md#create-a-containerclient-object) then call one of these methods: Folder /folder2/sub1/ > [!NOTE] > Blob snapshots cannot be listed in a hierarchical listing operation. -## Next steps +## Resources ++To learn more about how to list blobs using the Azure Blob Storage client library for JavaScript, see the following resources. ++### REST API operations ++The Azure SDK for JavaScript contains libraries that build on top of the Azure REST API, allowing you to interact with REST API operations through familiar JavaScript paradigms. The client library methods for listing blobs use the following REST API operation: ++- [List Blobs](/rest/api/storageservices/list-blobs) (REST API) ++### Code samples ++- [View code samples from this article (GitHub)](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/JavaScript/NodeJS-v12/dev-guide/list-blobs.js) +++### See also -- [List Blobs](/rest/api/storageservices/list-blobs) - [Enumerating Blob Resources](/rest/api/storageservices/enumerating-blob-resources) - [Blob versioning](versioning-overview.md) |
storage | Storage Blobs List | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blobs-list.md | private static void ListBlobVersions(BlobContainerClient blobContainerClient, } ``` -## Next steps +## Resources ++To learn more about how to list blobs using the Azure Blob Storage client library for .NET, see the following resources. ++### REST API operations ++The Azure SDK for .NET contains libraries that build on top of the Azure REST API, allowing you to interact with REST API operations through familiar .NET paradigms. The client library methods for listing blobs use the following REST API operation: ++- [List Blobs](/rest/api/storageservices/list-blobs) (REST API) +++### See also -- [List Blobs](/rest/api/storageservices/list-blobs) - [Enumerating Blob Resources](/rest/api/storageservices/enumerating-blob-resources) - [Blob versioning](versioning-overview.md) |
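As a brief illustration of the List Blobs mapping above, here's a minimal C# sketch that lists blobs by prefix in pages using the .NET client library. The prefix and page size are arbitrary example values, not taken from the article.

```csharp
using System;
using System.Threading.Tasks;
using Azure.Storage.Blobs;
using Azure.Storage.Blobs.Models;

public static class BlobListExample
{
    // Lists blobs in a container whose names start with a given prefix,
    // in pages of up to 10 results (List Blobs REST operation).
    public static async Task ListBlobsByPrefixAsync(BlobContainerClient containerClient, string prefix)
    {
        var pages = containerClient.GetBlobsAsync(prefix: prefix).AsPages(pageSizeHint: 10);

        await foreach (var page in pages)
        {
            foreach (BlobItem blobItem in page.Values)
            {
                Console.WriteLine($"Blob name: {blobItem.Name}");
            }
        }
    }
}
```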
storage | Storage Files How To Mount Nfs Shares | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-how-to-mount-nfs-shares.md | description: Learn how to mount a Network File System (NFS) Azure file share on Previously updated : 10/21/2022 Last updated : 02/06/2023 -## Limitations +## Support [!INCLUDE [files-nfs-limitations](../../../includes/files-nfs-limitations.md)] Azure file shares can be mounted in Linux distributions using either the Server - Open port 2049 on the client you want to mount your NFS share to. > [!IMPORTANT]- > NFS shares can only be accessed from trusted networks. Connections to your NFS share must originate from one of the following sources: -- Use one of the following networking solutions:- - Either [create a private endpoint](storage-files-networking-endpoints.md#create-a-private-endpoint) (recommended) or [restrict access to your public endpoint](storage-files-networking-endpoints.md#restrict-public-endpoint-access). + > NFS shares can only be accessed from trusted networks. ++- Either [create a private endpoint](storage-files-networking-endpoints.md#create-a-private-endpoint) (recommended) or [restrict access to your public endpoint](storage-files-networking-endpoints.md#restrict-public-endpoint-access). +- To enable hybrid access to an NFS Azure file share, use one of the following networking solutions: - [Configure a Point-to-Site (P2S) VPN on Linux for use with Azure Files](storage-files-configure-p2s-vpn-linux.md). - [Configure a Site-to-Site VPN for use with Azure Files](storage-files-configure-s2s-vpn.md). - Configure [ExpressRoute](../../expressroute/expressroute-introduction.md). |
synapse-analytics | Apache Spark Cdm Connector | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/data-sources/apache-spark-cdm-connector.md | -For information on defining CDM documents using CDM 1.0, see [What is CDM and how to use it](/common-data-model/). +For information on defining CDM documents using CDM 1.2, see [What is CDM and how to use it](/common-data-model/). ## High level functionality The following capabilities are supported: * Supports writing data using user modifiable partition patterns. * Supports use of managed identity Synapse and credentials. * Supports resolving CDM alias locations used in imports using CDM adapter definitions described in a config.json.+* Parallel writes aren't supported or recommended, because there's no locking mechanism at the storage layer. ## Limitations The following scenarios aren't supported: * Write support for model.json isn't supported. * Executing ```com.microsoft.cdm.BuildInfo.version``` will verify the version. -Spark 2.4 and Spark 3.1 are supported. +Spark 2.4, 3.1, and 3.2 are supported. ++## Samples +Check out the [sample code and CDM files](https://github.com/Azure/spark-cdm-connector/tree/spark3.2/samples) for a quick start. ## Reading data When reading CSV data, the connector uses the Spark FAILFAST option by default. .option("entity", "permissive") or .option("mode", "failfast") ``` -For example, [here's an example Python sample.](https://github.com/Azure/spark-cdm-connector/blob/master/samples/SparkCDMsamplePython.ipynb) - ## Writing data When writing to a CDM folder, if the entity doesn't already exist in the CDM folder, a new entity and definition is created and added to the CDM folder and referenced in the manifest. Two writing modes are supported: SaS Token Credential authentication to storage accounts is an extra option for a | **Option** |**Description** |**Pattern and example usage** | |-||::|-| sasToken |The sastoken to access the relative storageAccount with the correct permissions | \<token\>| +| sasToken |The sastoken to access the relative storageAccount with the correct permissions | \<token\>| ### Credential-based access control options df.write.format("com.microsoft.cdm") .option("manifestPath", "cdmdata/Teams/root.manifest.cdm.json") .option("entity", "TeamMembership") .option("useCdmStandardModelRoot", true)- .option("entityDefinitionPath", "core/applicationCommon/TeamMembership.cdm.json/Tea -mMembership") + .option("entityDefinitionPath", "core/applicationCommon/TeamMembership.cdm.json/TeamMembership") .option("useSubManifest", true) .mode(SaveMode.Overwrite) .save() val df= spark.createDataFrame(spark.sparkContext.parallelize(data, 2), schema) +-- ... ``` -## Samples --See https://github.com/Azure/spark-cdm-connector/tree/master/samples for sample code and CDM files. --### Examples --The following examples all use appId, appKey and tenantId variables initialized earlier in the code based on an Azure app registration that has been given Storage Blob Data Contributor permissions on the storage for write and Storage Blob Data Reader permissions for read. --#### Read --This code reads the Person entity from the CDM folder with manifest in `mystorage.dfs.core.windows.net/cdmdata/contacts/root.manifest.cdm.json`.
--```scala -val df = spark.read.format("com.microsoft.cdm") - .option("storage", "mystorage.dfs.core.windows.net") - .option("manifestPath", "cdmdata/contacts/root.manifest.cdm.json") - .option("entity", "Person") - .load() -``` --#### Implicit write – using dataframe schema only --This code writes the dataframe _df_ to a CDM folder with a manifest to `mystorage.dfs.core.windows.net/cdmdata/Contacts/default.manifest.cdm.json` with an Event entity. --Event data is written as Parquet files, compressed with gzip, that are appended to the folder (new files -are added without deleting existing files). --```scala --df.write.format("com.microsoft.cdm") - .option("storage", "mystorage.dfs.core.windows.net") - .option("manifestPath", "cdmdata/Contacts/default.manifest.cdm.json") - .option("entity", "Event") - .option("format", "parquet") - .option("compression", "gzip") - .mode(SaveMode.Append) - .save() -``` --#### Explicit write - using an entity definition stored in ADLS --This code writes the dataframe _df_ to a CDM folder with manifest at -`https://mystorage.dfs.core.windows.net/cdmdata/Contacts/root.manifest.cdm.json` with the entity Person. Person data is written as new CSV files (by default) which overwrite existing files in the folder. -The Person entity definition is retrieved from -`https://mystorage.dfs.core.windows.net/models/cdmmodels/core/Contacts/Person.cdm.json` --```scala -df.write.format("com.microsoft.cdm") - .option("storage", "mystorage.dfs.core.windows.net") - .option("manifestPath", "cdmdata/contacts/root.manifest.cdm.json") - .option("entity", "Person") - .option("entityDefinitionModelRoot", "cdmmodels/core") - .option("entityDefinitionPath", "/Contacts/Person.cdm.json/Person") - .mode(SaveMode.Overwrite) - .save() -``` --#### Explicit write - using an entity defined in the CDM GitHub --This code writes the dataframe _df_ to a CDM folder with the manifest at `https://_mystorage_.dfs.core.windows.net/cdmdata/Teams/root.manifest.cdm.json` and a submanifest containing the TeamMembership entity, created in a TeamMembership subdirectory. TeamMembership data is written to CSV files (the default) that overwrite any existing data files. The TeamMembership entity definition is retrieved from the CDM CDN, at: -[https://cdm-schema.microsoft.com/logical/core/applicationCommon/TeamMembership.cdm.json](https://cdm-schema.microsoft.com/logical/core/applicationCommon/TeamMembership.cdm.json) --```scala -df.write.format("com.microsoft.cdm") - .option("storage", "mystorage.dfs.core.windows.net") - .option("manifestPath", "cdmdata/Teams/root.manifest.cdm.json") - .option("entity", "TeamMembership") - .option("useCdmStandardModelRoot", true) - .option("entityDefinitionPath", "core/applicationCommon/TeamMembership.cdm.json/Tea -mMembership") - .option("useSubManifest", true) - .mode(SaveMode.Overwrite) - .save() -``` --### Other considerations --#### Spark to CDM datatype mapping --The following datatype mappings are applied when converting CDM to/from Spark. --|**Spark** |**CDM**| -||| -|ShortType|SmallInteger| -|IntegerType|Integer| -|LongType |BigInteger| -|DateType |Date| -|Timestamp|DateTime (optionally Time, see below)| -|StringType|String| -|DoubleType|Double| -|DecimalType(x,y)|Decimal (x,y) (default scale and precision are 18,4)| -|FloatType|Float| -|BooleanType|Boolean| -|ByteType|Byte| --The CDM Binary datatype isn't supported.
- ## Troubleshooting and known issues * Ensure the decimal precision and scale of decimal data type fields used in the dataframe match the data type used in the CDM entity definition; this requires that precision and scale traits are defined on the data type. If the precision and scale aren't defined explicitly in CDM, the default used is Decimal(18,4). For model.json files, Decimal is assumed to be Decimal(18,4). |
update-center | Manage Workbooks | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/manage-workbooks.md | Last updated 01/16/2023 -# Manage workbooks in update management center (preview) +# Create reports in update management center (preview) **Applies to:** :heavy_check_mark: Windows VMs :heavy_check_mark: Linux VMs :heavy_check_mark: On-premises environment :heavy_check_mark: Azure Arc-enabled servers. |
update-center | Sample Query Logs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/sample-query-logs.md | patchassessmentresources | extend prop = parse_json(properties) | extend lastTime = properties.lastModifiedDateTime | extend updateRollupCount = prop.availablePatchCountByClassification.updateRollup, featurePackCount = prop.availablePatchCountByClassification.featurePack, servicePackCount = prop.availablePatchCountByClassification.servicePack, definitionCount = prop.availablePatchCountByClassification.definition, securityCount = prop.availablePatchCountByClassification.security, criticalCount = prop.availablePatchCountByClassification.critical, updatesCount = prop.availablePatchCountByClassification.updates, toolsCount = prop.availablePatchCountByClassification.tools, otherCount = prop.availablePatchCountByClassification.other, OS = prop.osType-| project lastTime, Id, OS, updateRollupCount, featurePackCount, servicePackCount, definitionCount, securityCount, criticalCount, updatesCount, toolsCount, otherCount +| project lastTime, id, OS, updateRollupCount, featurePackCount, servicePackCount, definitionCount, securityCount, criticalCount, updatesCount, toolsCount, otherCount ``` ## Count of update installations |
virtual-desktop | Whats New | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/whats-new.md | Title: What's new in Azure Virtual Desktop? - Azure description: New features and product updates for Azure Virtual Desktop. Previously updated : 01/13/2023 Last updated : 02/06/2023 Azure Virtual Desktop updates regularly. This article is where you'll find out a Make sure to check back here often to keep up with new updates. +## January 2023 ++Here's what changed in January 2023: ++### Watermarking for Azure Virtual Desktop now in public preview ++Watermarking for Azure Virtual Desktop is now in public preview for the Windows Desktop client. This feature protects sensitive information from being captured on client endpoints by adding watermarks to remote desktops. For more information, see [our blog post](https://techcommunity.microsoft.com/t5/azure-virtual-desktop-blog/announcing-public-preview-for-watermarking-on-azure-virtual/ba-p/3730264) or [Watermarking in Azure Virtual Desktop](watermarking.md). + +### Give or Take Control for macOS Teams on Azure Virtual Desktop now generally available ++Version 1.31.2211.15001 of the WebRTC Redirector service includes support for Give or Take Control for macOS users. This version includes performance improvements for Give or Take Control on Windows. For more information, see [Updates for version 1.31.2211.15001](whats-new-webrtc.md#updates-for-version-131221115001). ++### Microsoft Teams application window sharing on Azure Virtual Desktop now generally available ++Previously, users could only share their full desktop windows or a Microsoft PowerPoint Live presentation during Teams calls. With application window sharing, users can now choose a specific window to share from their desktop screen and help reduce the risk of displaying sensitive content during meetings or calls. For more information, see [our blog post](https://techcommunity.microsoft.com/t5/azure-virtual-desktop-blog/microsoft-teams-application-window-sharing-is-now-generally/ba-p/3719595). ++### Windows 7 End of Support ++Starting January 10, 2023, Azure Virtual Desktop no longer supports Windows 7 as a client or host. We recommend upgrading to a supported Windows release. For more information, see [our blog post](https://techcommunity.microsoft.com/t5/azure-virtual-desktop/avd-support-for-windows-7-ended-on-january-10th-2023/m-p/3715785). + ## December 2022 Here's what changed in December 2022: |
virtual-machines | Disk Encryption | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/disk-encryption.md | Title: Server-side encryption of Azure managed disks description: Azure Storage protects your data by encrypting it at rest before persisting it to Storage clusters. You can use customer-managed keys to manage encryption with your own keys, or you can rely on Microsoft-managed keys for the encryption of your managed disks. Previously updated : 09/03/2021 Last updated : 02/06/2023 To enable end-to-end encryption using encryption at host, see our articles cover Highly security-sensitive customers who are concerned about the risk associated with any particular encryption algorithm, implementation, or key being compromised can now opt for an additional layer of encryption using a different encryption algorithm/mode at the infrastructure layer using platform managed encryption keys. This new layer can be applied to persisted OS and data disks, snapshots, and images, all of which will be encrypted at rest with double encryption. +### Restrictions ++Double encryption at rest isn't currently supported with either Ultra Disks or Premium SSD v2 disks. + ### Supported regions Double encryption is available in all regions where managed disks are available. |
virtual-machines | Disks Deploy Premium V2 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/disks-deploy-premium-v2.md | Title: Deploy a Premium SSD v2 managed disk description: Learn how to deploy a Premium SSD v2. Previously updated : 12/14/2022 Last updated : 02/06/2023 Azure Premium SSD v2 is designed for IO-intense enterprise workloads that requir ## Prerequisites -- [Sign up](https://aka.ms/PremiumSSDv2AccessRequest) for access to Premium SSD v2. - Install either the latest [Azure CLI](/cli/azure/install-azure-cli) or the latest [Azure PowerShell module](/powershell/azure/install-az-ps). ## Determine region availability programmatically |
virtual-machines | Disks Enable Double Encryption At Rest Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/disks-enable-double-encryption-at-rest-portal.md | Title: Enable double encryption at rest - Azure portal - managed disks description: Enable double encryption at rest for your managed disk data using the Azure portal. Previously updated : 01/19/2023 Last updated : 02/06/2023 +## Restrictions ++Double encryption at rest isn't currently supported with either Ultra Disks or Premium SSD v2 disks. + ## Getting started 1. Sign in to the [Azure portal](https://portal.azure.com). |
virtual-machines | Disks Types | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/disks-types.md | Title: Select a disk type for Azure IaaS VMs - managed disks description: Learn about the available Azure disk types for virtual machines, including ultra disks, Premium SSDs v2, Premium SSDs, standard SSDs, and Standard HDDs. Previously updated : 10/12/2022 Last updated : 02/06/2023 |
virtual-machines | Disks Enable Double Encryption At Rest Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/disks-enable-double-encryption-at-rest-cli.md | Title: Enable double encryption at rest - Azure CLI - managed disks description: Enable double encryption at rest for your managed disk data using the Azure CLI. Previously updated : 01/20/2023 Last updated : 02/06/2023 +## Restrictions ++Double encryption at rest isn't currently supported with either Ultra Disks or Premium SSD v2 disks. + ## Prerequisites Install the latest [Azure CLI](/cli/azure/install-az-cli2) and sign in to an Azure account with [az login](/cli/azure/reference-index). |
virtual-machines | Using Cloud Init | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/using-cloud-init.md | Once the VM has been provisioned, cloud-init will run through all the modules an > [!NOTE] > Not every module failure results in a fatal cloud-init overall configuration failure. For example, using the `runcmd` module, if the script fails, cloud-init will still report provisioning succeeded because the runcmd module executed. -For more details of cloud-init logging, see the [cloud-init documentation](https://cloudinit.readthedocs.io/en/latest/topics/logging.html) +For more details of cloud-init logging, see the [cloud-init documentation](https://cloudinit.readthedocs.io/en/latest/development/logging.html) ## Telemetry cloud-init collects usage data and sends it to Microsoft to help improve our products and services. Telemetry is only collected during the provisioning process (first boot of the VM). The data collected helps us investigate provisioning failures and monitor performance and reliability. Data collected doesn't include any identifiers (personal identifiers). Read our [privacy statement](https://go.microsoft.com/fwlink/?LinkId=521839) to learn more. Some examples of telemetry being collected are (this isn't an exhaustive list): OS-related information (cloud-init version, distro version, kernel version), performance metrics of essential VM provisioning actions (time to obtain DHCP lease, time to retrieve metadata necessary to configure the VM, etc.), cloud-init log, and dmesg log. |
virtual-machines | Disks Enable Double Encryption At Rest Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/disks-enable-double-encryption-at-rest-powershell.md | Title: Azure PowerShell - Enable double encryption at rest - managed disks description: Enable double encryption at rest for your managed disk data using Azure PowerShell. Previously updated : 01/20/2023 Last updated : 02/06/2023 +## Restrictions ++Double encryption at rest isn't currently supported with either Ultra Disks or Premium SSD v2 disks. + ## Prerequisites Install the latest [Azure PowerShell version](/powershell/azure/install-az-ps), and sign in to an Azure account using [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount). |
virtual-network | Service Tags Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/service-tags-overview.md | By default, service tags reflect the ranges for the entire cloud. Some service t | **Internet** | The IP address space that's outside the virtual network and reachable by the public internet.<br/><br/>The address range includes the [Azure-owned public IP address space](https://www.microsoft.com/download/details.aspx?id=56519). | Both | No | No | | **LogicApps** | Logic Apps. | Both | No | No | | **LogicAppsManagement** | Management traffic for Logic Apps. | Inbound | No | No |+| **Marketplace** | Represents the entire suite of Azure 'Commercial Marketplace Experiences' services. | Both | No | Yes | | **M365ManagementActivityApi** | The Office 365 Management Activity API provides information about various user, admin, system, and policy actions and events from Office 365 and Azure Active Directory activity logs. Customers and partners can use this information to create new or enhance existing operations, security, and compliance-monitoring solutions for the enterprise.<br/><br/>**Note**: This tag has a dependency on the **AzureActiveDirectory** tag. | Outbound | Yes | No | | **M365ManagementActivityApiWebhook** | Notifications are sent to the configured webhook for a subscription as new content becomes available. | Inbound | Yes | No | | **MicrosoftAzureFluidRelay** | This tag represents the IP addresses used for Azure Microsoft Fluid Relay Server. | Outbound | No | No | |