Service | Microsoft Docs article | Related commit history on GitHub | Change details |
---|---|---|---|
active-directory-domain-services | Migrate From Classic Vnet | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/migrate-from-classic-vnet.md | Before you begin the migration process, complete the following initial checks an Make sure that network settings don't block necessary ports required for Azure AD DS. Ports must be open on both the Classic virtual network and the Resource Manager virtual network. These settings include route tables (although it's not recommended to use route tables) and network security groups. - Azure AD DS needs a network security group to secure the ports needed for the managed domain and block all other incoming traffic. This network security group acts as an extra layer of protection to lock down access to the managed domain. To view the ports required, see [Network security groups and required ports][network-ports]. + Azure AD DS needs a network security group to secure the ports needed for the managed domain and block all other incoming traffic. This network security group acts as an extra layer of protection to lock down access to the managed domain. - If you use secure LDAP, add a rule to the network security group to allow incoming traffic for *TCP* port *636*. For more information, see [Lock down secure LDAP access over the internet](tutorial-configure-ldaps.md#lock-down-secure-ldap-access-over-the-internet) + The following network security group Inbound rules are required for the managed domain to provide authentication and management services. Don't edit or delete these network security group rules for the virtual network subnet your managed domain is deployed into. ++ | Inbound port number | Protocol | Source | Destination | Action | Required | Purpose | + |:--:|:--:|:--:|:--:|:--:|:--:|:--| + | 5986 | TCP | AzureActiveDirectoryDomainServices | Any | Allow | Yes | Management of your domain. | + | 3389 | TCP | CorpNetSaw | Any | Allow | Optional | Debugging for support. | + | 636 | TCP | AzureActiveDirectoryDomainServices | Inbound | Allow | Optional | Secure LDAP. | Make a note of this target resource group, target virtual network, and target virtual network subnet. These resource names are used during the migration process. |
active-directory-domain-services | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/policy-reference.md | Title: Built-in policy definitions for Azure Active Directory Domain Services description: Lists Azure Policy built-in policy definitions for Azure Active Directory Domain Services. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/08/2022 Last updated : 08/16/2022 |
active-directory | How To Mfa Additional Context | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/how-to-mfa-additional-context.md | Additional context isn't supported for Network Policy Server (NPS). ## Next steps [Authentication methods in Azure Active Directory - Microsoft Authenticator app](concept-authentication-authenticator-app.md)- |
active-directory | How To Mfa Number Match | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/how-to-mfa-number-match.md | Number matching isn't supported for Apple Watch notifications. Apple Watch need ## Next steps -[Authentication methods in Azure Active Directory](concept-authentication-authenticator-app.md) +[Authentication methods in Azure Active Directory](concept-authentication-authenticator-app.md) |
active-directory | Howto Mfa Mfasettings | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-mfa-mfasettings.md | The fraud alert feature lets users report fraudulent attempts to access their re The following fraud alert configuration options are available: * **Automatically block users who report fraud**. If a user reports fraud, the Azure AD Multi-Factor Authentication attempts for the user account are blocked for 90 days or until an administrator unblocks the account. An administrator can review sign-ins by using the sign-in report, and take appropriate action to prevent future fraud. An administrator can then [unblock](#unblock-a-user) the user's account.-* **Code to report fraud during initial greeting**. When users receive a phone call to perform multi-factor authentication, they normally press **#** to confirm their sign-in. To report fraud, the user enters a code before pressing **#**. This code is **0** by default, but you can customize it. +* **Code to report fraud during initial greeting**. When users receive a phone call to perform multi-factor authentication, they normally press **#** to confirm their sign-in. To report fraud, the user enters a code before pressing **#**. This code is **0** by default, but you can customize it. If automatic blocking is enabled, after the user presses **0#** to report fraud, they need to press **1** to confirm the account blocking. > [!NOTE] > The default voice greetings from Microsoft instruct users to press **0#** to submit a fraud alert. If you want to use a code other than **0**, record and upload your own custom voice greetings with appropriate instructions for your users. |
active-directory | Howto Conditional Access Policy Risk User | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/howto-conditional-access-policy-risk-user.md | After confirming your settings using [report-only mode](howto-conditional-access ## Next steps +[Remediate risks and unblock users](../identity-protection/howto-identity-protection-remediate-unblock.md) + [Conditional Access common policies](concept-conditional-access-policy-common.md) [Sign-in risk-based Conditional Access](howto-conditional-access-policy-risk.md) |
active-directory | Howto Conditional Access Policy Risk | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/howto-conditional-access-policy-risk.md | After confirming your settings using [report-only mode](howto-conditional-access ## Next steps +[Remediate risks and unblock users](../identity-protection/howto-identity-protection-remediate-unblock.md) + [Conditional Access common policies](concept-conditional-access-policy-common.md) [User risk-based Conditional Access](howto-conditional-access-policy-risk-user.md) |
active-directory | Troubleshoot Conditional Access | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/troubleshoot-conditional-access.md | To find out which Conditional Access policy or policies applied and why do the f 1. To investigate further, drill down into the configuration of the policies by clicking on the **Policy Name**. Clicking the **Policy Name** will show the policy configuration user interface for the selected policy for review and editing. 1. The **client user** and **device details** that were used for the Conditional Access policy assessment are also available in the **Basic Info**, **Location**, **Device Info**, **Authentication Details**, and **Additional Details** tabs of the sign-in event. -### Policy details +### Policy not working as intended Selecting the ellipsis on the right side of the policy in a sign-in event brings up policy details. This option gives administrators additional information about why a policy was successfully applied or not. Selecting the ellipsis on the right side of the policy in a sign-in event brings The left side provides details collected at sign-in and the right side provides details of whether those details satisfy the requirements of the applied Conditional Access policies. Conditional Access policies only apply when all conditions are satisfied or not configured. -If the information in the event isn't enough to understand the sign-in results, or adjust the policy to get desired results, the sign-in diagnostic tool can be used. The sign-in diagnostic can be found under **Basic info** > **Troubleshoot Event**. For more information about the sign-in diagnostic, see the article [What is the sign-in diagnostic in Azure AD](../reports-monitoring/overview-sign-in-diagnostics.md). +If the information in the event isn't enough to understand the sign-in results, or adjust the policy to get desired results, the sign-in diagnostic tool can be used. The sign-in diagnostic can be found under **Basic info** > **Troubleshoot Event**. For more information about the sign-in diagnostic, see the article [What is the sign-in diagnostic in Azure AD](../reports-monitoring/overview-sign-in-diagnostics.md). You can also [use the What If tool to troubleshoot Conditional Access policies](what-if-tool.md). If you need to submit a support incident, provide the request ID and time and date from the sign-in event in the incident submission details. This information will allow Microsoft support to find the specific event you're concerned about. -### Conditional Access error codes +### Common Conditional Access error codes | Sign-in Error Code | Error String | | | | If you need to submit a support incident, provide the request ID and time and da | 53003 | BlockedByConditionalAccess | | 53004 | ProofUpBlockedDueToRisk | +More information about error codes can be found in the article [Azure AD Authentication and authorization error codes](../develop/reference-aadsts-error-codes.md). Error codes in the list appear with a prefix of `AADSTS` followed by the code seen in the browser, for example `AADSTS53002`. + ## Service dependencies In some specific scenarios, users are blocked because there are cloud apps with dependencies on resources that are blocked by Conditional Access policy. -To determine the service dependency, check the sign-ins log for the Application and Resource called by the sign-in. 
In the following screenshot, the application called is **Azure Portal** but the resource called is **Windows Azure Service Management API**. To target this scenario appropriately all the applications and resources should be similarly combined in Conditional Access policy. +To determine the service dependency, check the sign-ins log for the application and resource called by the sign-in. In the following screenshot, the application called is **Azure Portal** but the resource called is **Windows Azure Service Management API**. To target this scenario appropriately all the applications and resources should be similarly combined in Conditional Access policy. :::image type="content" source="media/troubleshoot-conditional-access/service-dependency-example-sign-in.png" alt-text="Screenshot that shows an example sign-in log showing an Application calling a Resource. This scenario is also known as a service dependency." lightbox="media/troubleshoot-conditional-access/service-dependency-example-sign-in.png"::: If you're locked out of the Azure portal due to an incorrect setting in a Condit ## Next steps +- [Use the What If tool to troubleshoot Conditional Access policies](what-if-tool.md) - [Sign-in activity reports in the Azure Active Directory portal](../reports-monitoring/concept-sign-ins.md) - [Troubleshooting Conditional Access using the What If tool](troubleshoot-conditional-access-what-if.md) |
active-directory | What If Tool | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/what-if-tool.md | |
active-directory | Sample V2 Code | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/sample-v2-code.md | The following samples show public client desktop applications that access the Mi ## Mobile -The following samples show public client mobile applications that access the Microsoft Graph API, or your own web API in the name of the user. These client applications use the Microsoft Authentication Library (MSAL). +The following samples show public client mobile applications that access the Microsoft Graph API. These client applications use the Microsoft Authentication Library (MSAL). > [!div class="mx-tdCol2BreakAll"] > | Language/<br/>Platform | Code sample(s) <br/> on GitHub |Auth<br/> libraries |Auth flow | |
active-directory | Migrate From Federation To Cloud Authentication | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/migrate-from-federation-to-cloud-authentication.md | Existing Legacy clients (Exchange ActiveSync, Outlook 2010/2013) aren't affected Modern authentication clients (Office 2016 and Office 2013, iOS, and Android apps) use a valid refresh token to obtain new access tokens for continued access to resources instead of returning to AD FS. These clients are immune to any password prompts resulting from the domain conversion process. The clients will continue to function without extra configuration. +>[!NOTE] +>When you migrate from federated to cloud authentication, the process to convert the domain from federated to managed may take up to 60 minutes. During this process, users might not be prompted for credentials for any new logins to Azure portal or other browser based applications protected with Azure AD. We recommend that you include this delay in your maintenance window. + ### Plan for rollback > [!TIP] |
active-directory | Concept Identity Protection Risks | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/identity-protection/concept-identity-protection-risks.md | Risk can be detected at the **User** and **Sign-in** level and two types of dete A sign-in risk represents the probability that a given authentication request isn't authorized by the identity owner. Risky activity can be detected for a user that isn't linked to a specific malicious sign-in but to the user itself. -Real-time detections may not show up in reporting for five to 10 minutes. Offline detections may not show up in reporting for 48 hours. +Real-time detections may not show up in reporting for 5 to 10 minutes. Offline detections may not show up in reporting for 48 hours. > [!NOTE] -> Our system may detect that the risk event that contributed to the risk user risk score was a false positives or the user risk was remediated with policy enforcement such as completing multi-factor authentication or secure password change. Therefore our system will dismiss the risk state and a risk detail of “AI confirmed sign-in safe” will surface and it will no longer contribute to the user’s risk. +> Our system may detect that the risk event that contributed to the risk user risk score was either: +> +> - A false positive +> - The [user risk was remediated](howto-identity-protection-remediate-unblock.md) by policy by either: +> - Completing multifactor authentication +> - Secure password change. +> +> Our system will dismiss the risk state and a risk detail of “AI confirmed sign-in safe” will show and no longer contribute to the user’s overall risk. ### Premium detections -Premium detections are visible only to Azure AD Premium P2 customers. Customers without Azure AD Premium P2 licenses still receives the premium detections but they'll be titled "additional risk detected". -+Premium detections are visible only to Azure AD Premium P2 customers. Customers without Azure AD Premium P2 licenses still receive the premium detections but they'll be titled "additional risk detected". ### Sign-in risk Premium detections are visible only to Azure AD Premium P2 customers. Customers | Risk detection | Detection type | Description | | | | |-| Atypical travel | Offline | This risk detection type identifies two sign-ins originating from geographically distant locations, where at least one of the locations may also be atypical for the user, given past behavior. Among several other factors, this machine learning algorithm takes into account the time between the two sign-ins and the time it would have taken for the user to travel from the first location to the second, indicating that a different user is using the same credentials. <br><br> The algorithm ignores obvious "false positives" contributing to the impossible travel conditions, such as VPNs and locations regularly used by other users in the organization. The system has an initial learning period of the earliest of 14 days or 10 logins, during which it learns a new user's sign-in behavior. | -| Anomalous Token | Offline | This detection indicates that there are abnormal characteristics in the token such as an unusual token lifetime or a token that is played from an unfamiliar location. This detection covers Session Tokens and Refresh Tokens. <br><br> **NOTE:** Anomalous token is tuned to incur more noise than other detections at the same risk level.
This tradeoff is chosen to increase the likelihood of detecting replayed tokens that may otherwise go unnoticed. Because this is a high noise detection, there's a higher than normal chance that some of the sessions flagged by this detection are false positives. We recommend investigating the sessions flagged by this detection in the context of other sign-ins from the user. If the location, application, IP address, User Agent, or other characteristics are unexpected for the user, the tenant admin should consider this as an indicator of potential token replay. | +| Atypical travel | Offline | This risk detection type identifies two sign-ins originating from geographically distant locations, where at least one of the locations may also be atypical for the user, given past behavior. The algorithm takes into account multiple factors including the time between the two sign-ins and the time it would have taken for the user to travel from the first location to the second. This risk may indicate that a different user is using the same credentials. <br><br> The algorithm ignores obvious "false positives" contributing to the impossible travel conditions, such as VPNs and locations regularly used by other users in the organization. The system has an initial learning period of the earliest of 14 days or 10 logins, during which it learns a new user's sign-in behavior. | +| Anomalous Token | Offline | This detection indicates that there are abnormal characteristics in the token such as an unusual token lifetime or a token that is played from an unfamiliar location. This detection covers Session Tokens and Refresh Tokens. <br><br> **NOTE:** Anomalous token is tuned to incur more noise than other detections at the same risk level. This tradeoff is chosen to increase the likelihood of detecting replayed tokens that may otherwise go unnoticed. Because this is a high noise detection, there's a higher than normal chance that some of the sessions flagged by this detection are false positives. We recommend investigating the sessions flagged by this detection in the context of other sign-ins from the user. If the location, application, IP address, User Agent, or other characteristics are unexpected for the user, the tenant admin should consider this risk as an indicator of potential token replay. | | Token Issuer Anomaly | Offline |This risk detection indicates the SAML token issuer for the associated SAML token is potentially compromised. The claims included in the token are unusual or match known attacker patterns. |-| Malware linked IP address | Offline | This risk detection type indicates sign-ins from IP addresses infected with malware that is known to actively communicate with a bot server. This detection is determined by correlating IP addresses of the user's device against IP addresses that were in contact with a bot server while the bot server was active. <br><br> **[This detection has been deprecated](../fundamentals/whats-new-archive.md#planned-deprecationmalware-linked-ip-address-detection-in-identity-protection)**. Identity Protection will no longer generate new "Malware linked IP address" detections. Customers who currently have "Malware linked IP address" detections in their tenant will still be able to view, remediate, or dismiss them until the 90-day detection retention time is reached.| +| Malware linked IP address | Offline | This risk detection type indicates sign-ins from IP addresses infected with malware that is known to actively communicate with a bot server. 
This detection matches the IP addresses of the user's device against IP addresses that were in contact with a bot server while the bot server was active. <br><br> **[This detection has been deprecated](../fundamentals/whats-new-archive.md#planned-deprecationmalware-linked-ip-address-detection-in-identity-protection)**. Identity Protection will no longer generate new "Malware linked IP address" detections. Customers who currently have "Malware linked IP address" detections in their tenant will still be able to view, remediate, or dismiss them until the 90-day detection retention time is reached.| | Suspicious browser | Offline | Suspicious browser detection indicates anomalous behavior based on suspicious sign-in activity across multiple tenants from different countries in the same browser. | | Unfamiliar sign-in properties | Real-time |This risk detection type considers past sign-in history to look for anomalous sign-ins. The system stores information about previous sign-ins, and triggers a risk detection when a sign-in occurs with properties that are unfamiliar to the user. These properties can include IP, ASN, location, device, browser, and tenant IP subnet. Newly created users will be in "learning mode" period where the unfamiliar sign-in properties risk detection will be turned off while our algorithms learn the user's behavior. The learning mode duration is dynamic and depends on how much time it takes the algorithm to gather enough information about the user's sign-in patterns. The minimum duration is five days. A user can go back into learning mode after a long period of inactivity. <br><br> We also run this detection for basic authentication (or legacy protocols). Because these protocols don't have modern properties such as client ID, there's limited telemetry to reduce false positives. We recommend our customers to move to modern authentication. <br><br> Unfamiliar sign-in properties can be detected on both interactive and non-interactive sign-ins. When this detection is detected on non-interactive sign-ins, it deserves increased scrutiny due to the risk of token replay attacks. | | Malicious IP address | Offline | This detection indicates sign-in from a malicious IP address. An IP address is considered malicious based on high failure rates because of invalid credentials received from the IP address or other IP reputation sources. |-| Suspicious inbox manipulation rules | Offline | This detection is discovered by [Microsoft Defender for Cloud Apps](/cloud-app-security/anomaly-detection-policy#suspicious-inbox-manipulation-rules). This detection profiles your environment and triggers alerts when suspicious rules that delete or move messages or folders are set on a user's inbox. This detection may indicate that the user's account is compromised, that messages are being intentionally hidden, and that the mailbox is being used to distribute spam or malware in your organization. | +| Suspicious inbox manipulation rules | Offline | This detection is discovered by [Microsoft Defender for Cloud Apps](/cloud-app-security/anomaly-detection-policy#suspicious-inbox-manipulation-rules). This detection looks at your environment and triggers alerts when suspicious rules that delete or move messages or folders are set on a user's inbox. This detection may indicate: a user's account is compromised, messages are being intentionally hidden, and the mailbox is being used to distribute spam or malware in your organization. 
| | Password spray | Offline | A password spray attack is where multiple usernames are attacked using common passwords in a unified brute force manner to gain unauthorized access. This risk detection is triggered when a password spray attack has been successfully performed. For example, the attacker is successfully authenticated, in the detected instance. |-| Impossible travel | Offline | This detection is discovered by [Microsoft Defender for Cloud Apps](/cloud-app-security/anomaly-detection-policy#impossible-travel). This detection identifies two user activities (is a single or multiple sessions) originating from geographically distant locations within a time period shorter than the time it would have taken the user to travel from the first location to the second, indicating that a different user is using the same credentials. | +| Impossible travel | Offline | This detection is discovered by [Microsoft Defender for Cloud Apps](/cloud-app-security/anomaly-detection-policy#impossible-travel). This detection identifies user activities (is a single or multiple sessions) originating from geographically distant locations within a time period shorter than the time it takes to travel from the first location to the second. This risk may indicate that a different user is using the same credentials. | | New country | Offline | This detection is discovered by [Microsoft Defender for Cloud Apps](/cloud-app-security/anomaly-detection-policy#activity-from-infrequent-country). This detection considers past activity locations to determine new and infrequent locations. The anomaly detection engine stores information about previous locations used by users in the organization. | | Activity from anonymous IP address | Offline | This detection is discovered by [Microsoft Defender for Cloud Apps](/cloud-app-security/anomaly-detection-policy#activity-from-anonymous-ip-addresses). This detection identifies that users were active from an IP address that has been identified as an anonymous proxy IP address. | | Suspicious inbox forwarding | Offline | This detection is discovered by [Microsoft Defender for Cloud Apps](/cloud-app-security/anomaly-detection-policy#suspicious-inbox-forwarding). This detection looks for suspicious email forwarding rules, for example, if a user created an inbox rule that forwards a copy of all emails to an external address. |-| Mass Access to Sensitive Files | Offline | This detection is discovered by [Microsoft Defender for Cloud Apps](/defender-cloud-apps/investigate-anomaly-alerts#unusual-file-access-by-user). This detection profiles your environment and triggers alerts when users access multiple files from Microsoft SharePoint or Microsoft OneDrive. An alert is triggered only if the number of accessed files is uncommon for the user and the files might contain sensitive information| +| Mass Access to Sensitive Files | Offline | This detection is discovered by [Microsoft Defender for Cloud Apps](/defender-cloud-apps/investigate-anomaly-alerts#unusual-file-access-by-user). This detection looks at your environment and triggers alerts when users access multiple files from Microsoft SharePoint or Microsoft OneDrive. An alert is triggered only if the number of accessed files is uncommon for the user and the files might contain sensitive information| #### Nonpremium sign-in risk detections | Risk detection | Detection type | Description | | | | | | Additional risk detected | Real-time or Offline | This detection indicates that one of the premium detections was detected. 
Since the premium detections are visible only to Azure AD Premium P2 customers, they're titled "additional risk detected" for customers without Azure AD Premium P2 licenses. |-| Anonymous IP address | Real-time | This risk detection type indicates sign-ins from an anonymous IP address (for example, Tor browser or anonymous VPN). These IP addresses are typically used by actors who want to hide their login telemetry (IP address, location, device, and so on) for potentially malicious intent. | +| Anonymous IP address | Real-time | This risk detection type indicates sign-ins from an anonymous IP address (for example, Tor browser or anonymous VPN). These IP addresses are typically used by actors who want to hide their sign-in information (IP address, location, device, and so on) for potentially malicious intent. | | Admin confirmed user compromised | Offline | This detection indicates an admin has selected 'Confirm user compromised' in the Risky users UI or using riskyUsers API. To see which admin has confirmed this user compromised, check the user's risk history (via UI or API). |-| Azure AD threat intelligence | Offline | This risk detection type indicates user activity that is unusual for the given user or is consistent with known attack patterns based on Microsoft's internal and external threat intelligence sources. | +| Azure AD threat intelligence | Offline | This risk detection type indicates user activity that is unusual for the user or consistent with known attack patterns. This detection is based on Microsoft's internal and external threat intelligence sources. | ### User-linked detections Premium detections are visible only to Azure AD Premium P2 customers. Customers | Risk detection | Detection type | Description | | | | | | Possible attempt to access Primary Refresh Token (PRT) | Offline | This risk detection type is detected by Microsoft Defender for Endpoint (MDE). A Primary Refresh Token (PRT) is a key artifact of Azure AD authentication on Windows 10, Windows Server 2016, and later versions, iOS, and Android devices. A PRT is a JSON Web Token (JWT) that's specially issued to Microsoft first-party token brokers to enable single sign-on (SSO) across the applications used on those devices. Attackers can attempt to access this resource to move laterally into an organization or perform credential theft. This detection will move users to high risk and will only fire in organizations that have deployed MDE. This detection is low-volume and will be seen infrequently by most organizations. However, when it does occur it's high risk and users should be remediated. |-| Anomalous user activity | Offline | This risk detection indicates that suspicious patterns of activity have been identified for an authenticated user. The post-authentication behavior for users is assessed for anomalies based on an action or sequence of actions occurring for the account, along with any sign-in risk detected. | +| Anomalous user activity | Offline | This risk detection indicates that suspicious patterns of activity have been identified for an authenticated user. The post-authentication behavior of users is assessed for anomalies. This behavior is based on actions occurring for the account, along with any sign-in risk detected. | #### Nonpremium user risk detections Premium detections are visible only to Azure AD Premium P2 customers. Customers | | | | | Additional risk detected | Real-time or Offline | This detection indicates that one of the premium detections was detected. 
Since the premium detections are visible only to Azure AD Premium P2 customers, they're titled "additional risk detected" for customers without Azure AD Premium P2 licenses. | | Leaked credentials | Offline | This risk detection type indicates that the user's valid credentials have been leaked. When cybercriminals compromise valid passwords of legitimate users, they often share those credentials. This sharing is typically done by posting publicly on the dark web, paste sites, or by trading and selling the credentials on the black market. When the Microsoft leaked credentials service acquires user credentials from the dark web, paste sites, or other sources, they're checked against Azure AD users' current valid credentials to find valid matches. For more information about leaked credentials, see [Common questions](#common-questions). |-| Azure AD threat intelligence | Offline | This risk detection type indicates user activity that is unusual for the given user or is consistent with known attack patterns based on Microsoft's internal and external threat intelligence sources. | +| Azure AD threat intelligence | Offline | This risk detection type indicates user activity that is unusual for the user or consistent with known attack patterns. This detection is based on Microsoft's internal and external threat intelligence sources. | ## Common questions Premium detections are visible only to Azure AD Premium P2 customers. Customers Identity Protection categorizes risk into three tiers: low, medium, and high. When configuring [custom Identity protection policies](./concept-identity-protection-policies.md#custom-conditional-access-policy), you can also configure it to trigger upon **No risk** level. No Risk means there's no active indication that the user's identity has been compromised. -While Microsoft doesn't provide specific details about how risk is calculated, we'll say that each level brings higher confidence that the user or sign-in is compromised. For example, something like one instance of unfamiliar sign-in properties for a user might not be as threatening as leaked credentials for another user. +Microsoft doesn't provide specific details about how risk is calculated. Each level of risk brings higher confidence that the user or sign-in is compromised. For example, something like one instance of unfamiliar sign-in properties for a user might not be as threatening as leaked credentials for another user. ### Password hash synchronization Risk detections like leaked credentials require the presence of password hashes ### Why are there risk detections generated for disabled user accounts? -Disabled user accounts can be re-enabled. If the credentials of a disabled account are compromised, and the account gets re-enabled, bad actors might use those credentials to gain access. That is why, Identity Protection generates risk detections for suspicious activities against disabled user accounts to alert customers about potential account compromise. If an account is no longer in use and wont be re-enabled, customers should consider deleting it to prevent compromise. No risk detections are generated for deleted accounts. +Disabled user accounts can be re-enabled. If the credentials of a disabled account are compromised, and the account gets re-enabled, bad actors might use those credentials to gain access. Identity Protection generates risk detections for suspicious activities against disabled user accounts to alert customers about potential account compromise. 
If an account is no longer in use and won't be re-enabled, customers should consider deleting it to prevent compromise. No risk detections are generated for deleted accounts. ### Leaked credentials Microsoft finds leaked credentials in various places, including: Leaked credentials are processed anytime Microsoft finds a new, publicly available batch. Because of the sensitive nature, the leaked credentials are deleted shortly after processing. Only new leaked credentials found after you enable password hash synchronization (PHS) will be processed against your tenant. Verifying against previously found credential pairs isn't done. -#### I have not seen any leaked credential risk events for quite some time? +#### I haven't seen any leaked credential risk events for quite some time? If you haven't seen any leaked credential risk events, it is because of the following reasons: |
active-directory | Managed Identities Status | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/managed-identities-status.md | The following Azure services support managed identities for Azure resources: | Azure Import/Export | [Use customer-managed keys in Azure Key Vault for Import/Export service](../../import-export/storage-import-export-encryption-key-portal.md) | Azure IoT Hub | [IoT Hub support for virtual networks with Private Link and Managed Identity](../../iot-hub/virtual-network-support.md) | | Azure Kubernetes Service (AKS) | [Use managed identities in Azure Kubernetes Service](../../aks/use-managed-identity.md) |+| Azure Load Testing | [Use managed identities for Azure Load Testing](../../load-testing/how-to-use-a-managed-identity.md) | | Azure Logic Apps | [Authenticate access to Azure resources using managed identities in Azure Logic Apps](../../logic-apps/create-managed-service-identity.md) | | Azure Log Analytics cluster | [Azure Monitor customer-managed key](../../azure-monitor/logs/customer-managed-keys.md) | Azure Machine Learning Services | [Use Managed identities with Azure Machine Learning](../../machine-learning/how-to-use-managed-identities.md?tabs=python) | |
active-directory | Ideagen Cloud Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/ideagen-cloud-provisioning-tutorial.md | The scenario outlined in this tutorial assumes that you already have the followi 1. Determine what data to [map between Azure AD and Ideagen Cloud](../app-provisioning/customize-application-attributes.md). ## Step 2. Configure Ideagen Cloud to support provisioning with Azure AD-1. Login to [Ideagen Home](https://cktenant-homev2-scimtest1.ideagenhomedev.com). Click on the **Administration** icon to show the left hand side menu. +1. Login to [Ideagen Home](https://cktenant-homev2-scimtest1.ideagenhomedev.com). Click on the **Administration** icon to show the left hand side menu.  -2. Navigate to **Authentication** page under the **Manage tenant** sub menu. +1. Navigate to **Authentication** page under the **Manage tenant** sub menu.  -3. Scroll down in the Authentication page to **Client Token** section and click on **Regenerate**. +1. Click on Edit button and select **Enabled** checkbox under automatic provisioning. ++  ++1. Click on **Save** button to save the changes. ++1. Scroll down in the Authentication Page to **Client Token** section and click on **Regenerate** .  -4. **Copy** and save the Bearer Token. This value will be entered in the Secret Token * field in the Provisioning tab of your Ideagen Cloud application in the Azure portal. +1. **Copy** and save the Bearer Token. This value will be entered in the Secret Token * field in the Provisioning tab of your Ideagen Cloud application in the Azure portal.  +1. Locate the **SCIM URL** and keep the value for later use. This value will be used as Tenant URL when configuring automatic user provisioning in Azure portal. + ## Step 3. Add Ideagen Cloud from the Azure AD application gallery Add Ideagen Cloud from the Azure AD application gallery to start managing provisioning to Ideagen Cloud. If you have previously setup Ideagen Cloud for SSO, you can use the same application. However it is recommended that you create a separate app when testing out the integration initially. Learn more about adding an application from the gallery [here](../manage-apps/add-application-portal.md). |
aks | Custom Certificate Authority | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/custom-certificate-authority.md | Title: Custom certificate authority (CA) in Azure Kubernetes Service (AKS) (preview) description: Learn how to use a custom certificate authority (CA) in an Azure Kubernetes Service (AKS) cluster. --++ Last updated 4/12/2022 |
aks | Dapr | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/dapr.md | -By using the Dapr extension to provision Dapr on your AKS or Arc-enabled Kubernetes cluster, you eliminate the overhead of downloading Dapr tooling and manually installing and managing the runtime on your AKS cluster. Additionally, the extension offers support for all [native Dapr configuration capabilities][dapr-configuration-options] through simple command-line arguments. +[By using the Dapr extension to provision Dapr on your AKS or Arc-enabled Kubernetes cluster](../azure-arc/kubernetes/conceptual-extensions.md), you eliminate the overhead of downloading Dapr tooling and manually installing and managing the runtime on your AKS cluster. Additionally, the extension offers support for all [native Dapr configuration capabilities][dapr-configuration-options] through simple command-line arguments. > [!NOTE] > If you plan on installing Dapr in a Kubernetes production environment, see the [Dapr guidelines for production usage][kubernetes-production] documentation page. Azure + open source components are supported. Alpha and beta components are supp ### Clouds/regions -Global Azure cloud is supported with Arc support on the regions listed by [Azure Products by Region][supported-cloud-regions]. +Global Azure cloud is supported with Arc support on the following regions: ++| Region | AKS support | Arc for Kubernetes support | +| | -- | -- | +| `australiaeast` | :heavy_check_mark: | :heavy_check_mark: | +| `australiasoutheast` | :heavy_check_mark: | :x: | +| `canadacentral` | :heavy_check_mark: | :heavy_check_mark: | +| `canadaeast` | :heavy_check_mark: | :heavy_check_mark: | +| `centralindia` | :heavy_check_mark: | :heavy_check_mark: | +| `centralus` | :heavy_check_mark: | :heavy_check_mark: | +| `eastasia` | :heavy_check_mark: | :heavy_check_mark: | +| `eastus` | :heavy_check_mark: | :heavy_check_mark: | +| `eastus2` | :heavy_check_mark: | :heavy_check_mark: | +| `eastus2euap` | :x: | :heavy_check_mark: | +| `francecentral` | :heavy_check_mark: | :heavy_check_mark: | +| `germanywestcentral` | :heavy_check_mark: | :heavy_check_mark: | +| `japaneast` | :heavy_check_mark: | :heavy_check_mark: | +| `koreacentral` | :heavy_check_mark: | :heavy_check_mark: | +| `northcentralus` | :heavy_check_mark: | :heavy_check_mark: | +| `northeurope` | :heavy_check_mark: | :heavy_check_mark: | +| `norwayeast` | :heavy_check_mark: | :x: | +| `southafricanorth` | :heavy_check_mark: | :x: | +| `southcentralus` | :heavy_check_mark: | :heavy_check_mark: | +| `southeastasia` | :heavy_check_mark: | :heavy_check_mark: | +| `swedencentral` | :heavy_check_mark: | :heavy_check_mark: | +| `switzerlandnorth` | :heavy_check_mark: | :heavy_check_mark: | +| `uksouth` | :heavy_check_mark: | :heavy_check_mark: | +| `westcentralus` | :heavy_check_mark: | :heavy_check_mark: | +| `westeurope` | :heavy_check_mark: | :heavy_check_mark: | +| `westus` | :heavy_check_mark: | :heavy_check_mark: | +| `westus2` | :heavy_check_mark: | :heavy_check_mark: | +| `westus3` | :heavy_check_mark: | :heavy_check_mark: | + ## Prerequisites |
aks | Enable Fips Nodes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/enable-fips-nodes.md | Title: Enable Federal Information Process Standard (FIPS) for Azure Kubernetes Service (AKS) node pools description: Learn how to enable Federal Information Process Standard (FIPS) for Azure Kubernetes Service (AKS) node pools.--++ Last updated 07/19/2022 |
aks | Ingress Basic | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/ingress-basic.md | Title: Create an ingress controller in Azure Kubernetes Service (AKS) description: Learn how to create and configure an ingress controller in an Azure Kubernetes Service (AKS) cluster.--++ Last updated 05/17/2022 |
aks | Ingress Tls | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/ingress-tls.md | Title: Use TLS with an ingress controller on Azure Kubernetes Service (AKS) description: Learn how to install and configure an ingress controller that uses TLS in an Azure Kubernetes Service (AKS) cluster. --++ Last updated 05/18/2022 |
aks | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/policy-reference.md | Title: Built-in policy definitions for Azure Kubernetes Service description: Lists Azure Policy built-in policy definitions for Azure Kubernetes Service. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/08/2022 Last updated : 08/16/2022 |
aks | Use Labels | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-labels.md | Title: Use labels in an Azure Kubernetes Service (AKS) cluster description: Learn how to use labels in an Azure Kubernetes Service (AKS) cluster.--++ Last updated 03/03/2022 |
analysis-services | Analysis Services Datasource | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/analysis-services-datasource.md | Data sources and connectors shown in Get Data or Table Import Wizard in Visual S ## Other data sources +Connecting to on-premises data sources from an Azure Analysis Services server require an [On-premises gateway](analysis-services-gateway.md). When using a gateway, 64-bit providers are required. + |Data source | In-memory | DirectQuery |Notes | | | | | | |Access Database | Yes | No | | Data sources and connectors shown in Get Data or Table Import Wizard in Visual S |Analysis Services | Yes | No | | |Analytics Platform System | Yes | No | | |CSV file |Yes | No | |-|Dynamics 365 | Yes | No | <sup>[6](#tab1400b)</sup> | +|Dynamics 365 | Yes | No | <sup>[6](#tab1400b)</sup>, <sup>[12](#tds)</sup> | |Excel workbook | Yes | No | | |Exchange | Yes | No | <sup>[6](#tab1400b)</sup> | |Folder |Yes | No | <sup>[6](#tab1400b)</sup> | Data sources and connectors shown in Get Data or Table Import Wizard in Visual S <a name="instgw">8</a> - If specifying MSOLEDBSQL as the data provider, it may be necessary to download and install the [Microsoft OLE DB Driver for SQL Server](/sql/connect/oledb/oledb-driver-for-sql-server) on the same computer as the On-premises data gateway. <a name="oracle">9</a> - For tabular 1200 models, or as a *provider* data source in tabular 1400+ models, specify Oracle Data Provider for .NET. If specified as a structured data source, be sure to [enable Oracle managed provider](#enable-oracle-managed-provider). <a name="teradata">10</a> - For tabular 1200 models, or as a *provider* data source in tabular 1400+ models, specify Teradata Data Provider for .NET. -<a name="filesSP">11</a> - Files in on-premises SharePoint are not supported. --Connecting to on-premises data sources from an Azure Analysis Services server require an [On-premises gateway](analysis-services-gateway.md). When using a gateway, 64-bit providers are required. +<a name="filesSP">11</a> - Files in on-premises SharePoint are not supported. +<a name="tds">12</a> - Azure Analysis Services does not support direct connections to the Dynamics 365 [Dataverse TDS endpoint](/power-apps/developer/data-platform/dataverse-sql-query). When connecting to this data source from Azure Analysis Services, you must use an On-premises Data Gateway, and refresh the tokens manually. ## Understanding providers |
api-management | Api Management Howto Disaster Recovery Backup Restore | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-disaster-recovery-backup-restore.md | Title: Implement disaster recovery using backup and restore in API Management + Title: Backup and restore your Azure API Management instance for disaster recovery -description: Learn how to use backup and restore to perform disaster recovery in Azure API Management. +description: Learn how to use backup and restore operations in Azure API Management to carry out your disaster recovery strategy. Previously updated : 10/03/2021- Last updated : 07/27/2022+ -To recover from availability problems that affect the region that hosts your API Management service, be ready to reconstitute your service in another region at any time. Depending on your recovery time objective, you might want to keep a standby service in one or more regions. You might also try to maintain their configuration and content in sync with the active service according to your recovery point objective. The service backup and restore features provide the necessary building blocks for implementing disaster recovery strategy. +To recover from availability problems that affect your API Management service, be ready to reconstitute your service in another region at any time. Depending on your recovery time objective, you might want to keep a standby service in one or more regions. You might also try to maintain their configuration and content in sync with the active service according to your recovery point objective. The API management backup and restore capabilities provide the necessary building blocks for implementing disaster recovery strategy. Backup and restore operations can also be used for replicating API Management service configuration between operational environments, for example, development and staging. Beware that runtime data such as users and subscriptions will be copied as well, which might not always be desirable. -This guide shows how to automate backup and restore operations and how to ensure successful authenticating of backup and restore requests by Azure Resource Manager. +This article shows how to automate backup and restore operations of your API Management instance using an external storage account. The steps shown here use either the [Backup-AzApiManagement](/powershell/module/az.apimanagement/backup-azapimanagement) and [Restore-AzApiManagement](/powershell/module/az.apimanagement/restore-azapimanagement) Azure PowerShell cmdlets, or the [Api Management Service - Backup](/rest/api/apimanagement/current-ga/api-management-service/backup) and [Api Management Service - Restore](/rest/api/apimanagement/current-ga/api-management-service/restore) REST APIs. -> [!IMPORTANT] -> Restore operation doesn't change custom hostname configuration of the target service. We recommend to use the same custom hostname and TLS certificate for both active and standby services, so that, after restore operation completes, the traffic can be re-directed to the standby instance by a simple DNS CNAME change. -> -> Backup operation does not capture pre-aggregated log data used in reports shown on the **Analytics** blade in the Azure portal. > [!WARNING] > Each backup expires after 30 days. If you attempt to restore a backup after the 30-day expiration period has expired, the restore will fail with a `Cannot restore: backup expired` message. 
+> [!IMPORTANT] +> Restore operation doesn't change custom hostname configuration of the target service. We recommend to use the same custom hostname and TLS certificate for both active and standby services, so that, after restore operation completes, the traffic can be re-directed to the standby instance by a simple DNS CNAME change. ++ [!INCLUDE [updated-for-az](../../includes/updated-for-az.md)] [!INCLUDE [premium-dev-standard-basic.md](../../includes/api-management-availability-premium-dev-standard-basic.md)] -## Authenticating Azure Resource Manager requests +## Prerequisites -> [!IMPORTANT] -> The REST API for backup and restore uses Azure Resource Manager and has a different authentication mechanism than the REST APIs for managing your API Management entities. The steps in this section describe how to authenticate Azure Resource Manager requests. For more information, see [Authenticating Azure Resource Manager requests](/rest/api/azure). +* An API Management service instance. If you don't have one, see [Create an API Management service instance](get-started-create-service-instance.md). +* An Azure storage account. If you don't have one, see [Create a storage account](../storage/common/storage-account-create.md). + * [Create a container](/storage/blobs/storage-quickstart-blobs-portal.md#create-a-container) in the storage account to hold the backup data. + +* The latest version of Azure PowerShell, if you plan to use Azure PowerShell cmdlets. If you haven't already, [install Azure PowerShell](/powershell/azure/install-az-ps). -All of the tasks that you do on resources using the Azure Resource Manager must be authenticated with Azure Active Directory using the following steps: +## Configure storage account access +When running a backup or restore operation, you need to configure access to the storage account. API Management supports two storage access mechanisms: an Azure Storage access key, or an API Management managed identity. -- Add an application to the Azure Active Directory tenant.-- Set permissions for the application that you added.-- Get the token for authenticating requests to Azure Resource Manager.+### Configure storage account access key -### Create an Azure Active Directory application +Azure generates two 512-bit storage account access keys for each storage account. These keys can be used to authorize access to data in your storage account via Shared Key authorization. To view, retrieve, and manage the keys, see [Manage storage account access keys](../storage/common/storage-account-keys-manage.md?tabs=azure-portal). -1. Sign in to the [Azure portal](https://portal.azure.com). -1. Using the subscription that contains your API Management service instance, navigate to the [Azure portal - App registrations](https://portal.azure.com/#blade/Microsoft_AAD_RegisteredApps/ApplicationsListBlade) to register an app in Active Directory. - > [!NOTE] - > If the Azure Active Directory default directory isn't visible to your account, contact the administrator of the Azure subscription to grant the required permissions to your account. -1. Select **+ New registration**. -1. On the **Register an application** page, set the values as follows: - - * Set **Name** to a meaningful name. - * Set **Supported account types** to **Accounts in this organizational directory only**. - * In **Redirect URI** enter a placeholder URL such as `https://resources`. It's a required field, but the value isn't used later. - * Select **Register**. 
+### Configure API Management managed identity -### Add permissions +> [!NOTE] +> Using an API Management managed identity for storage operations during backup and restore is supported in API Management REST API version `2021-04-01-preview` or later. -1. Once the application is created, select **API permissions** > **+ Add a permission**. -1. Select **Microsoft APIs**. -1. Select **Azure Service Management**. +1. Enable a system-assigned or user-assigned [managed identity for API Management](api-management-howto-use-managed-service-identity.md) in your API Management instance. - :::image type="content" source="./media/api-management-howto-disaster-recovery-backup-restore/add-app-permission.png" alt-text="Screenshot that shows how to add app permissions."::: + * If you enable a user-assigned managed identity, take note of the identity's **Client ID**. + * If you will back up and restore to different API Management instances, enable a managed identity in both the source and target instances. +1. Assign the identity the **Storage Blob Data Contributor** role, scoped to the storage account used for backup and restore. To assign the role, use the [Azure portal](../active-directory/managed-identities-azure-resources/howto-assign-access-portal.md) or other Azure tools. -1. Click **Delegated Permissions** beside the newly added application, and check the box for **Access Azure Service Management as organization users (preview)**. - :::image type="content" source="./media/api-management-howto-disaster-recovery-backup-restore/delegated-app-permission.png" alt-text="Screenshot that shows adding delegated app permissions."::: +## Back up an API Management service -1. Select **Add permissions**. +### [PowerShell](#tab/powershell) -### Configure your app +[Sign in](/powershell/azure/authenticate-azureps) with Azure PowerShell. -Before calling the APIs that generate the backup and restore, you need to get a token. The following example uses the [Microsoft.IdentityModel.Clients.ActiveDirectory](https://www.nuget.org/packages/Microsoft.IdentityModel.Clients.ActiveDirectory) NuGet package to retrieve the token. +In the following examples: -> [!IMPORTANT] -> The [Microsoft.IdentityModel.Clients.ActiveDirectory](https://www.nuget.org/packages/Microsoft.IdentityModel.Clients.ActiveDirectory) NuGet package and Azure AD Authentication Library (ADAL) have been deprecated. No new features have been added since June 30, 2020. We strongly encourage you to upgrade, see the [migration guide](../active-directory/develop/msal-migration.md) for more details. +* An API Management instance named *myapim* is in resource group *apimresourcegroup*. +* A storage account named *backupstorageaccount* is in resource group *storageresourcegroup*. The storage account has a container named *backups*. +* A backup blob will be created with name *ContosoBackup.apimbackup*. 
-```csharp -using Microsoft.IdentityModel.Clients.ActiveDirectory; -using System; +Set variables in PowerShell: -namespace GetTokenResourceManagerRequests -{ - class Program - { - static void Main(string[] args) - { - var authenticationContext = new AuthenticationContext("https://login.microsoftonline.com/{tenant id}"); - var result = authenticationContext.AcquireTokenAsync("https://management.azure.com/", "{application id}", new Uri("{redirect uri}"), new PlatformParameters(PromptBehavior.Auto)).Result; -- if (result == null) { - throw new InvalidOperationException("Failed to obtain the JWT token"); - } -- Console.WriteLine(result.AccessToken); -- Console.ReadLine(); - } - } -} +```powershell +$apiManagementName="myapim"; +$apiManagementResourceGroup="apimresourcegroup"; +$storageAccountName="backupstorageaccount"; +$storageResourceGroup="storageresourcegroup"; +$containerName="backups"; +$blobName="ContosoBackup.apimbackup" ``` -Replace `{tenant id}`, `{application id}`, and `{redirect uri}` using the following instructions: --1. Replace `{tenant id}` with the tenant ID of the Azure Active Directory application you created. You can access the ID by clicking **App registrations** -> **Endpoints**. -- ![Endpoints][api-management-endpoint] +### Access using storage access key -2. Replace `{application id}` with the value you get by navigating to the **Settings** page. -3. Replace the `{redirect uri}` with the value from the **Redirect URIs** tab of your Azure Active Directory application. +```powershell +$storageKey = (Get-AzStorageAccountKey -ResourceGroupName $storageResourceGroup -StorageAccountName $storageAccountName)[0].Value - Once the values are specified, the code example should return a token similar to the following example: +$storageContext = New-AzStorageContext -StorageAccountName $storageAccountName -StorageAccountKey $storageKey - ![Token][api-management-arm-token] +Backup-AzApiManagement -ResourceGroupName $apiManagementResourceGroup -Name $apiManagementName ` + -StorageContext $storageContext -TargetContainerName $containerName -TargetBlobName $blobName +``` - > [!NOTE] - > The token may expire after a certain period. Execute the code sample again to generate a new token. +### Access using managed identity -## Accessing Azure Storage -API Management uses an Azure Storage account that you specify for backup and restore operations. When running a backup or restore operation, you need to configure access to the storage account. API Management supports two storage access mechanisms: an Azure Storage access key (the default), or an API Management managed identity. +To configure a managed identity in your API Management instance to access the storage account, see [Configure a managed identity](#configure-api-management-managed-identity), earlier in this article. -### Configure storage account access key +#### Access using system-assigned managed identity -For steps, see [Manage storage account access keys](../storage/common/storage-account-keys-manage.md?tabs=azure-portal). 
+```powershell +$storageContext = New-AzStorageContext -StorageAccountName $storageAccountName -### Configure API Management managed identity +Backup-AzApiManagement -ResourceGroupName $apiManagementResourceGroup -Name $apiManagementName ` + -StorageContext $storageContext -TargetContainerName $containerName ` + -TargetBlobName $blobName -AccessType "SystemAssignedManagedIdentity" +``` -> [!NOTE] -> Using an API Management managed identity for storage operations during backup and restore requires API Management REST API version `2021-04-01-preview` or later. +#### Access using user-assigned managed identity -1. Enable a system-assigned or user-assigned [managed identity for API Management](api-management-howto-use-managed-service-identity.md) in your API Management instance. +In this example, a user-assigned managed identity named *myidentity* is in resource group *identityresourcegroup*. - * If you enable a user-assigned managed identity, take note of the identity's **Client ID**. - * If you will back up and restore to different API Management instances, enable a managed identity in both the source and target instances. -1. Assign the identity the **Storage Blob Data Contributor** role, scoped to the storage account used for backup and restore. To assign the role, use the [Azure portal](../active-directory/managed-identities-azure-resources/howto-assign-access-portal.md) or other Azure tools. +```powershell +$identityName = "myidentity"; +$identityResourceGroup = "identityresourcegroup"; -## Calling the backup and restore operations +$identityId = (Get-AzUserAssignedIdentity -Name $identityName -ResourceGroupName $identityResourceGroup).ClientId -The REST APIs are [Api Management Service - Backup](/rest/api/apimanagement/current-ga/api-management-service/backup) and [Api Management Service - Restore](/rest/api/apimanagement/current-ga/api-management-service/restore). +$storageContext = New-AzStorageContext -StorageAccountName $storageAccountName -> [!NOTE] -> Backup and restore operations can also be performed with PowerShell [_Backup-AzApiManagement_](/powershell/module/az.apimanagement/backup-azapimanagement) and [_Restore-AzApiManagement_](/powershell/module/az.apimanagement/restore-azapimanagement) commands respectively. +Backup-AzApiManagement -ResourceGroupName $apiManagementResourceGroup -Name $apiManagementName ` + -StorageContext $storageContext -TargetContainerName $containerName ` + -TargetBlobName $blobName -AccessType "UserAssignedManagedIdentity" ` -identityClientId $identityid +``` -Before calling the "backup and restore" operations described in the following sections, set the authorization request header for your REST call. +Backup is a long-running operation that may take several minutes to complete. -```csharp -request.Headers.Add(HttpRequestHeader.Authorization, "Bearer " + token); -``` +### [REST](#tab/rest) -### <a name="step1"> </a>Back up an API Management service +See [Azure REST API reference](/rest/api/azure/) for information about authenticating and calling Azure REST APIs. 
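When calling the backup and restore REST APIs directly, you first need a bearer token for Azure Resource Manager. One way to obtain it is with Azure PowerShell, shown here as a sketch; the token output type can vary between Az.Accounts versions, so treat the property access as an assumption to verify.

```powershell
# Sketch: acquire an Azure Resource Manager token and build the request headers.
Connect-AzAccount
$token = (Get-AzAccessToken -ResourceUrl "https://management.azure.com/").Token

$headers = @{
    Authorization  = "Bearer $token"
    "Content-Type" = "application/json"
}
```

Use these headers on the backup and restore requests shown in the following sections.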
-To back up an API Management service issue the following HTTP request: +To back up an API Management service, issue the following HTTP request: ```http POST https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.ApiManagement/service/{serviceName}/backup?api-version={api-version} where: - `subscriptionId` - ID of the subscription that holds the API Management service you're trying to back up - `resourceGroupName` - name of the resource group of your Azure API Management service - `serviceName` - the name of the API Management service you're making a backup of specified at the time of its creation-- `api-version` - a valid REST API version such as `2020-12-01` or `2021-04-01-preview`.+- `api-version` - a valid REST API version such as `2021-08-01` or `2021-04-01-preview`. In the body of the request, specify the target storage account name, blob container name, backup name, and the storage access type. If the storage container doesn't exist, the backup operation creates it. -#### Access using storage access key +### Access using storage access key ```json { In the body of the request, specify the target storage account name, blob contai } ``` -#### Access using managed identity +### Access using managed identity > [!NOTE] > Using an API Management managed identity for storage operations during backup and restore requires API Management REST API version `2021-04-01-preview` or later. -**Access using system-assigned managed identity** +#### Access using system-assigned managed identity ```json { In the body of the request, specify the target storage account name, blob contai } ``` -**Access using user-assigned managed identity** +#### Access using user-assigned managed identity ```json { In the body of the request, specify the target storage account name, blob contai Set the value of the `Content-Type` request header to `application/json`. -Backup is a long-running operation that may take more than a minute to complete. If the request succeeded and the backup process began, you receive a `202 Accepted` response status code with a `Location` header. Make `GET` requests to the URL in the `Location` header to find out the status of the operation. While the backup is in progress, you continue to receive a `202 Accepted` status code. A Response code of `200 OK` indicates successful completion of the backup operation. +Backup is a long-running operation that may take several minutes to complete. If the request succeeded and the backup process began, you receive a `202 Accepted` response status code with a `Location` header. Make `GET` requests to the URL in the `Location` header to find out the status of the operation. While the backup is in progress, you continue to receive a `202 Accepted` status code. A Response code of `200 OK` indicates successful completion of the backup operation. -### <a name="step2"> </a>Restore an API Management service +++## Restore an API Management service ++> [!CAUTION] +> Avoid changes to the service configuration (for example, APIs, policies, developer portal appearance) while restore operation is in progress. Changes **could be overwritten**. ++### [PowerShell](#tab/powershell) ++In the following examples, ++* An API Management instance named *myapim* is restored from the backup blob named *ContosoBackup.apimbackup* in storage account *backupstorageaccount*. +* The backup blob is in a container named *backups*. 
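As described above, the REST backup request returns `202 Accepted` with a `Location` header that you poll until the operation finishes. The following sketch assumes `$headers` contains a valid bearer token and `$statusUrl` holds the returned `Location` value; both are placeholders.

```powershell
# Sketch: poll the Location URL until the backup (or restore) operation completes.
do {
    Start-Sleep -Seconds 30
    $response = Invoke-WebRequest -Uri $statusUrl -Headers $headers -Method Get
} while ($response.StatusCode -eq 202)

if ($response.StatusCode -eq 200) {
    Write-Output "Operation completed."
}
```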
++Set variables in PowerShell: ++```powershell +$apiManagementName="myapim"; +$apiManagementResourceGroup="apimresourcegroup"; +$storageAccountName="backupstorageaccount"; +$storageResourceGroup="storageresourcegroup"; +$containerName="backups"; +$blobName="ContosoBackup.apimbackup"; +``` ++### Access using storage access key ++```powershell +$storageKey = (Get-AzStorageAccountKey -ResourceGroupName $storageResourceGroup -StorageAccountName $storageAccountName)[0].Value ++$storageContext = New-AzStorageContext -StorageAccountName $storageAccountName -StorageAccountKey $storageKey ++Restore-AzApiManagement -ResourceGroupName $apiManagementResourceGroup -Name $apiManagementName ` + -StorageContext $storageContext -SourceContainerName $containerName -SourceBlobName $blobName +``` ++### Access using managed identity ++To configure a managed identity in your API Management instance to access the storage account, see [Configure a managed identity](#configure-api-management-managed-identity), earlier in this article. ++#### Access using system-assigned managed identity ++```powershell +$storageContext = New-AzStorageContext -StorageAccountName $storageAccountName ++Restore-AzApiManagement -ResourceGroupName $apiManagementResourceGroup -Name $apiManagementName ` + -StorageContext $storageContext -SourceContainerName $containerName ` + -SourceBlobName $blobName -AccessType "SystemAssignedManagedIdentity" +``` ++#### Access using user-assigned managed identity ++In this example, a user-assigned managed identity named *myidentity* is in resource group *identityresourcegroup*. ++```powershell +$identityName = "myidentity"; +$identityResourceGroup = "identityresourcegroup"; ++$identityId = (Get-AzUserAssignedIdentity -Name $identityName -ResourceGroupName $identityResourceGroup).ClientId ++$storageContext = New-AzStorageContext -StorageAccountName $storageAccountName ++Restore-AzApiManagement -ResourceGroupName $apiManagementResourceGroup -Name $apiManagementName ` + -StorageContext $storageContext -SourceContainerName $containerName ` + -SourceBlobName $blobName -AccessType "UserAssignedManagedIdentity" ` -identityClientId $identityid +``` ++Restore is a long-running operation that may take up to 45 minutes or more to complete. ++### [REST](#tab/rest) To restore an API Management service from a previously created backup, make the following HTTP request: where: - `subscriptionId` - ID of the subscription that holds the API Management service you're restoring a backup into - `resourceGroupName` - name of the resource group that holds the Azure API Management service you're restoring a backup into - `serviceName` - the name of the API Management service being restored into specified at its creation time-- `api-version` - a valid REST API version such as `2020-12-01` or `2021-04-01-preview`+- `api-version` - a valid REST API version such as `2021-08-01` or `2021-04-01-preview` In the body of the request, specify the existing storage account name, blob container name, backup name, and the storage access type. -#### Access using storage access key +### Access using storage access key ```json { In the body of the request, specify the existing storage account name, blob cont } ``` -#### Access using managed identity +### Access using managed identity > [!NOTE] > Using an API Management managed identity for storage operations during backup and restore requires API Management REST API version `2021-04-01-preview` or later. 
-**Access using system-assigned managed identity** +#### Access using system-assigned managed identity ```json { In the body of the request, specify the existing storage account name, blob cont } ``` -**Access using user-assigned managed identity** +#### Access using user-assigned managed identity ```json { In the body of the request, specify the existing storage account name, blob cont Set the value of the `Content-Type` request header to `application/json`. -Restore is a long-running operation that may take up to 30 or more minutes to complete. If the request succeeded and the restore process began, you receive a `202 Accepted` response status code with a `Location` header. Make 'GET' requests to the URL in the `Location` header to find out the status of the operation. While the restore is in progress, you continue to receive a `202 Accepted` status code. A response code of `200 OK` indicates successful completion of the restore operation. +Restore is a long-running operation that may take up to 30 or more minutes to complete. If the request succeeded and the restore process began, you receive a `202 Accepted` response status code with a `Location` header. Make `GET` requests to the URL in the `Location` header to find out the status of the operation. While the restore is in progress, you continue to receive a `202 Accepted` status code. A response code of `200 OK` indicates successful completion of the restore operation. -> [!IMPORTANT] -> **The SKU** of the service being restored into **must match** the SKU of the backed-up service being restored. -> -> **Changes** made to the service configuration (for example, APIs, policies, developer portal appearance) while restore operation is in progress **could be overwritten**. + -## Constraints when making backup or restore request +## Constraints -- While backup is in progress, **avoid management changes in the service** such as SKU upgrade or downgrade, change in domain name, and more. - Restore of a **backup is guaranteed only for 30 days** since the moment of its creation.+- While backup is in progress, **avoid management changes in the service** such as pricing tier upgrade or downgrade, change in domain name, and more. - **Changes** made to the service configuration (for example, APIs, policies, and developer portal appearance) while backup operation is in process **might be excluded from the backup and will be lost**.--- [Cross-Origin Resource Sharing (CORS)](/rest/api/storageservices/cross-origin-resource-sharing--cors--support-for-the-azure-storage-services) should **not** be enabled on the Blob Service in the Azure Storage Account.-- **The SKU** of the service being restored into **must match** the SKU of the backed-up service being restored.+- Backup doesn't capture pre-aggregated log data used in reports shown on the **Analytics** window in the Azure portal. +- [Cross-Origin Resource Sharing (CORS)](/rest/api/storageservices/cross-origin-resource-sharing--cors--support-for-the-azure-storage-services) should **not** be enabled on the Blob service in the storage account. +- **The pricing tier** of the service being restored into **must match** the pricing tier of the backed-up service being restored. 
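Because the pricing tier of the target instance must match the tier of the backed-up instance, it can help to verify this before you start a restore. The following sketch assumes placeholder resource names and that the `Sku` property is available on the object returned by `Get-AzApiManagement` in your Az version.

```powershell
# Sketch: compare the pricing tiers of the source and target API Management instances.
$source = Get-AzApiManagement -ResourceGroupName "apimresourcegroup" -Name "myapim"
$target = Get-AzApiManagement -ResourceGroupName "apimresourcegroup-dr" -Name "myapim-dr"

if ($source.Sku -ne $target.Sku) {
    Write-Warning "Pricing tiers differ ($($source.Sku) vs $($target.Sku)); restore will fail."
}
```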
## Storage networking constraints ### Access using storage access key -If the storage account is **[firewall][azure-storage-ip-firewall] enabled** and a storage key is used for access, then the customer must **Allow** the set of [Azure API Management control plane IP addresses][control-plane-ip-address] on their storage account for backup or restore to work. The storage account can be in any Azure region except the one where the API Management service is located. For example, if the API Management service is in West US, then the Azure Storage account can be in West US 2 and the customer needs to open the control plane IP 13.64.39.16 (API Management control plane IP of West US) in the firewall. This is because the requests to Azure Storage are not SNATed to a public IP from compute (Azure API Management control plane) in the same Azure region. Cross-region storage requests will be SNATed to the public IP address. +If the storage account is **[firewall][azure-storage-ip-firewall] enabled** and a storage key is used for access, then the customer must **Allow** the set of [Azure API Management control plane IP addresses][control-plane-ip-address] on their storage account for backup or restore to work. The storage account can be in any Azure region except the one where the API Management service is located. For example, if the API Management service is in West US, then the Azure Storage account can be in West US 2 and the customer needs to open the control plane IP 13.64.39.16 (API Management control plane IP of West US) in the firewall. This is because the requests to Azure Storage aren't SNATed to a public IP from compute (Azure API Management control plane) in the same Azure region. Cross-region storage requests will be SNATed to the public IP address. ### Access using managed identity If an API Management system-assigned managed identity is used to access a firewa - [Protocols and ciphers](api-management-howto-manage-protocols-ciphers.md) settings. - [Developer portal](developer-portal-faq.md#is-the-portals-content-saved-with-the-backuprestore-functionality-in-api-management) content. -The frequency with which you perform service backups affect your recovery point objective. To minimize it, we recommend implementing regular backups and performing on-demand backups after you make changes to your API Management service. +The frequency with which you perform service backups affects your recovery point objective. To minimize it, we recommend implementing regular backups and performing on-demand backups after you make changes to your API Management service. ## Next steps Check out the following related resources for the backup/restore process: - [Automating API Management Backup and Restore with Logic Apps](https://github.com/Azure/api-management-samples/tree/master/tutorials/automating-apim-backup-restore-with-logic-apps) - [How to move Azure API Management across regions](api-management-howto-migrate.md)--API Management **Premium** tier also supports [zone redundancy](../availability-zones/migrate-api-mgt.md), which provides resiliency and high availability to a service instance in a specific Azure region (location). +- API Management **Premium** tier also supports [zone redundancy](../availability-zones/migrate-api-mgt.md), which provides resiliency and high availability to a service instance in a specific Azure region (location). [backup an api management service]: #step1 [restore an api management service]: #step2 |
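For the storage access key scenario described above, the control plane IP address can be added to the storage account firewall with Azure PowerShell. This sketch uses the West US control plane IP quoted earlier and placeholder resource names; confirm the correct IP for your region before allowing it.

```powershell
# Sketch: allow the API Management control plane IP through the storage account firewall.
Add-AzStorageAccountNetworkRule -ResourceGroupName "storageresourcegroup" `
    -Name "backupstorageaccount" -IPAddressOrRange "13.64.39.16"
```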
api-management | Api Management Howto Migrate | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-migrate.md | -#cusomerintent: As an Azure service administrator, I want to move my service resources to another Azure region. +#customerintent: As an Azure service administrator, I want to move my service resources to another Azure region. # How to move Azure API Management across regions To move API Management instances from one Azure region to another, use the servi ### Option 1: Use a different API Management instance name 1. In the target region, create a new API Management instance with the same pricing tier as the source API Management instance. Use a different name for the new instance.-1. [Back up](api-management-howto-disaster-recovery-backup-restore.md#-back-up-an-api-management-service) the existing API Management instance to the storage account. -1. [Restore](api-management-howto-disaster-recovery-backup-restore.md#-restore-an-api-management-service) the source instance's backup to the new API Management instance. +1. [Back up](api-management-howto-disaster-recovery-backup-restore.md#back-up-an-api-management-service) the existing API Management instance to the storage account. +1. [Restore](api-management-howto-disaster-recovery-backup-restore.md#restore-an-api-management-service) the source instance's backup to the new API Management instance. 1. If you have a custom domain pointing to the source region API Management instance, update the custom domain CNAME to point to the new API Management instance. ### Option 2: Use the same API Management instance name To move API Management instances from one Azure region to another, use the servi > [!WARNING] > This option deletes the original API Management instance and results in downtime during the migration. Ensure that you have a valid backup before deleting the source instance. -1. [Back up](api-management-howto-disaster-recovery-backup-restore.md#-back-up-an-api-management-service) the existing API Management instance to the storage account. +1. [Back up](api-management-howto-disaster-recovery-backup-restore.md#back-up-an-api-management-service) the existing API Management instance to the storage account. 1. Delete the API Management instance in the source region. 1. Create a new API Management instance in the target region with the same name as the one in the source region.-1. [Restore](api-management-howto-disaster-recovery-backup-restore.md#-restore-an-api-management-service) the source instance's backup to the new API Management instance in the target region. +1. [Restore](api-management-howto-disaster-recovery-backup-restore.md#restore-an-api-management-service) the source instance's backup to the new API Management instance in the target region. ## Verify |
api-management | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/policy-reference.md | Title: Built-in policy definitions for Azure API Management description: Lists Azure Policy built-in policy definitions for Azure API Management. These built-in policy definitions provide approaches to managing your Azure resources. Previously updated : 08/08/2022 Last updated : 08/16/2022 |
app-service | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/policy-reference.md | Title: Built-in policy definitions for Azure App Service description: Lists Azure Policy built-in policy definitions for Azure App Service. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/08/2022 Last updated : 08/16/2022 |
app-service | Quickstart Python | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/quickstart-python.md | Last updated 03/22/2022 ms.devlang: python-+ # Quickstart: Deploy a Python (Django or Flask) web app to Azure App Service |
app-service | Tutorial Python Postgresql App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-python-postgresql-app.md | |
attestation | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/attestation/policy-reference.md | Title: Built-in policy definitions for Azure Attestation description: Lists Azure Policy built-in policy definitions for Azure Attestation. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/08/2022 Last updated : 08/16/2022 |
automation | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/policy-reference.md | Title: Built-in policy definitions for Azure Automation description: Lists Azure Policy built-in policy definitions for Azure Automation. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/08/2022 Last updated : 08/16/2022 |
automation | Create Azure Automation Account Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/quickstarts/create-azure-automation-account-portal.md | + + Title: Quickstart - Create an Azure Automation account using the portal +description: This quickstart helps you to create a new Automation account using Azure portal. + Last updated : 10/26/2021++++#Customer intent: As an administrator, I want to create an Automation account so that I can further use the Automation services. +++# Quickstart: Create an Automation account using the Azure portal ++You can create an Azure [Automation account](../automation-security-overview.md) using the Azure portal, a browser-based user interface allowing access to a number of resources. One Automation account can manage resources across all regions and subscriptions for a given tenant. This Quickstart guides you in creating an Automation account. ++## Prerequisites ++An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). ++## Create Automation account ++1. Sign in to the [Azure portal](https://portal.azure.com). ++1. From the top menu, select **+ Create a resource**. ++1. Under **Categories**, select **IT & Management Tools**, and then select **Automation**. ++ :::image type="content" source="./media/create-account-portal/automation-account-portal.png" alt-text="Locating Automation accounts in portal."::: ++Options for your new Automation account are organized into tabs in the **Create an Automation Account** page. The following sections describe each of the tabs and their options. ++### Basics ++On the **Basics** tab, provide the essential information for your Automation account. After you complete the **Basics** tab, you can choose to further customize your new Automation account by setting options on the other tabs, or you can select **Review + create** to accept the default options and proceed to validate and create the account. ++> [!NOTE] +> By default, a system-assigned managed identity is enabled for the Automation account. ++The following table describes the fields on the **Basics** tab. ++| **Field** | **Required**<br> **or**<br> **optional** |**Description** | +|||| +|Subscription|Required |From the drop-down list, select the Azure subscription for the account.| +|Resource group|Required |From the drop-down list, select your existing resource group, or select **Create new**.| +|Automation account name|Required |Enter a name unique for its location and resource group. Names for Automation accounts that have been deleted might not be immediately available. You can't change the account name once it has been entered in the user interface. | +|Region|Required |From the drop-down list, select a region for the account. For an updated list of locations that you can deploy an Automation account to, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=automation®ions=all).| ++The following image shows a standard configuration for a new Automation account. +++### Advanced ++On the **Advanced** tab, you can configure the managed identity option for your new Automation account. The user-assigned managed identity option can also be configured after the Automation account is created. 
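If you prefer to create the user-assigned identity ahead of time with Azure PowerShell instead of the portal, a minimal sketch follows. The resource names are placeholders, and the exact parameter set of `New-AzUserAssignedIdentity` depends on your Az.ManagedServiceIdentity module version.

```powershell
# Sketch: create a user-assigned managed identity to attach to the Automation account.
New-AzUserAssignedIdentity -ResourceGroupName "automation-rg" `
    -Name "automation-identity" -Location "EastUS"
```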
++For instructions on how to create a user-assigned managed identity, see [Create a user-assigned managed identity](../../active-directory/managed-identities-azure-resources/how-to-manage-ua-identity-portal.md#create-a-user-assigned-managed-identity). ++The following table describes the fields on the **Advanced** tab. ++| **Field** | **Required**<br> **or**<br> **optional** |**Description** | +|||| +|System-assigned |Optional |An Azure Active Directory identity that is tied to the lifecycle of the Automation account. | +|User-assigned |Optional |A managed identity represented as a standalone Azure resource that is managed separately from the resources that use it.| ++You can choose to enable managed identities later, and the Automation account is created without one. To enable a managed identity after the account is created, see [Enable managed identity](enable-managed-identity.md). If you select both options, for the user-assigned identity, select the **Add user assigned identities** option. On the **Select user assigned managed identity** page, select a subscription and add one or more user-assigned identities created in that subscription to assign to the Automation account. ++The following image shows a standard configuration for a new Automation account. +++### Networking ++On the **Networking** tab, you can connect to your Automation account either publicly (via public IP addresses) or privately, using a private endpoint. The following image shows the connectivity configuration that you can define for a new Automation account. ++- **Public Access**: This default option provides a public endpoint for the Automation account that can receive traffic over the internet and doesn't require any additional configuration. However, we don't recommend it for private applications or secure environments. Instead, you can use the second option, **Private access** (a private link, described below), to restrict access to Automation endpoints to authorized virtual networks only. Public access can coexist with a private endpoint enabled on the Automation account. If you select public access while creating the Automation account, you can add a private endpoint later from the **Networking** blade of the Automation account. ++- **Private Access**: This option provides a private endpoint for the Automation account that uses a private IP address from your virtual network. This network interface connects you privately and securely to the Automation account. You bring the service into your virtual network by enabling a private endpoint. This is the recommended configuration from a security point of view; however, it requires you to configure a Hybrid Runbook Worker connected to an Azure virtual network and currently doesn't support cloud jobs. +++### Tags ++On the **Tags** tab, you can specify Resource Manager tags to help organize your Azure resources. For more information, see [Tag resources, resource groups, and subscriptions for logical organization](../../azure-resource-manager/management/tag-resources.md). ++### Review + create tab ++When you navigate to the **Review + create** tab, Azure runs validation on the Automation account settings that you have chosen. If validation passes, you can proceed to create the Automation account. ++If validation fails, then the portal indicates which settings need to be modified. ++Review your new Automation account. 
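If you script your environment, the portal flow in this quickstart can be approximated with Azure PowerShell. The following sketch only creates the account with default settings; the resource names are placeholders, and enabling managed identities or private access afterward follows the guidance linked earlier in this article.

```powershell
# Sketch: create a resource group and an Automation account with Azure PowerShell.
New-AzResourceGroup -Name "automation-rg" -Location "EastUS" -Force
New-AzAutomationAccount -ResourceGroupName "automation-rg" `
    -Name "myAutomationAccount" -Location "EastUS"
```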
+++## Clean up resources ++If you're not going to continue to use the Automation account, select **Delete** from the **Overview** page, and then select **Yes** when prompted. ++## Next steps ++In this Quickstart, you created an Automation account. To use managed identities with your Automation account, continue to the next Quickstart: ++> [!div class="nextstepaction"] +> [Tutorial - Create Automation PowerShell runbook using managed identity](../learn/powershell-runbook-managed-identity.md) |
azure-app-configuration | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/policy-reference.md | Title: Built-in policy definitions for Azure App Configuration description: Lists Azure Policy built-in policy definitions for Azure App Configuration. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/08/2022 Last updated : 08/16/2022 |
azure-arc | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/policy-reference.md | Title: Built-in policy definitions for Azure Arc-enabled Kubernetes description: Lists Azure Policy built-in policy definitions for Azure Arc-enabled Kubernetes. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/08/2022 Last updated : 08/16/2022 # |
azure-arc | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/policy-reference.md | Title: Built-in policy definitions for Azure Arc-enabled servers description: Lists Azure Policy built-in policy definitions for Azure Arc-enabled servers (preview). These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/08/2022 Last updated : 08/16/2022 |
azure-cache-for-redis | Cache How To Active Geo Replication | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-how-to-active-geo-replication.md | Use the Azure CLI for creating a new cache and geo-replication group, or to add #### Create new Enterprise instance in a new geo-replication group using Azure CLI -This example creates a new Azure Cache for Redis Enterprise E10 cache instance called _Cache1_ in the East US region. Then, the cache is added to a new active geo-replication group called `replicationGroup`: +This example creates a new Azure Cache for Redis Enterprise E10 cache instance called _Cache1_ in the East US region. Then, the cache is added to a new active geo-replication group called _replicationGroup_: ```azurecli-interactive az redisenterprise create --location "East US" --cluster-name "Cache1" --sku "Enterprise_E10" --resource-group "myResourceGroup" --group-nickname "replicationGroup" --linked-databases id="/subscriptions/34b6ecbd-ab5c-4768-b0b8-bf587aba80f6/resourceGroups/myResourceGroup/providers/Microsoft.Cache/redisEnterprise/Cache1/databases/default" az redisenterprise create --location "East US" --cluster-name "Cache1" --sku "En To configure active geo-replication properly, the ID of the cache instance being created must be added with the `--linked-databases` parameter. The ID is in the format: -`/subscriptions/\<your-subscription-ID>/resourceGroups/\<your-resource-group-name>/providers/Microsoft.Cache/redisEnterprise/\<your-cache-name>/databases/default` +`/subscriptions/<your-subscription-ID>/resourceGroups/<your-resource-group-name>/providers/Microsoft.Cache/redisEnterprise/<your-cache-name>/databases/default` #### Create new Enterprise instance in an existing geo-replication group using Azure CLI This example creates a new Cache for Redis Enterprise E10 instance called _Cache2_ in the West US region. Then, the cache is added to the `replicationGroup` active geo-replication group created above. This way, it's linked in an active-active configuration with Cache1.-<!-- love the simple, declarative sentences. I am once again add the full product name --> ```azurecli-interactive az redisenterprise create --location "West US" --cluster-name "Cache2" --sku "Enterprise_E10" --resource-group "myResourceGroup" --group-nickname "replicationGroup" --linked-databases id="/subscriptions/34b6ecbd-ab5c-4768-b0b8-bf587aba80f6/resourceGroups/myResourceGroup/providers/Microsoft.Cache/redisEnterprise/Cache1/databases/default" --linked-databases id="/subscriptions/34b6ecbd-ab5c-4768-b0b8-bf587aba80f6/resourceGroups/myResourceGroup/providers/Microsoft.Cache/redisEnterprise/Cache2/databases/default" Use Azure PowerShell to create a new cache and geo-replication group, or to add #### Create new Enterprise instance in a new geo-replication group using PowerShell -This example creates a new Azure Cache for Redis Enterprise E10 cache instance called "Cache1" in the East US region. Then, the cache is added to a new active geo-replication group called `replicationGroup`: +This example creates a new Azure Cache for Redis Enterprise E10 cache instance called "Cache1" in the East US region. 
Then, the cache is added to a new active geo-replication group called _replicationGroup_: ```powershell-interactive New-AzRedisEnterpriseCache -Name "Cache1" -ResourceGroupName "myResourceGroup" -Location "East US" -Sku "Enterprise_E10" -GroupNickname "replicationGroup" -LinkedDatabase '{id:"/subscriptions/34b6ecbd-ab5c-4768-b0b8-bf587aba80f6/resourceGroups/myResourceGroup/providers/Microsoft.Cache/redisEnterprise/Cache1/databases/default"}' New-AzRedisEnterpriseCache -Name "Cache1" -ResourceGroupName "myResourceGroup" - To configure active geo-replication properly, the ID of the cache instance being created must be added with the `-LinkedDatabase` parameter. The ID is in the format: -`id:"/subscriptions/\<your-subscription-ID>/resourceGroups/\<your-resource-group-name>/providers/Microsoft.Cache/redisEnterprise/\<your-cache-name>/databases/default` +`/subscriptions/<your-subscription-ID>/resourceGroups/<your-resource-group-name>/providers/Microsoft.Cache/redisEnterprise/<your-cache-name>/databases/default` #### Create new Enterprise instance in an existing geo-replication group using PowerShell |
azure-cache-for-redis | Cache Ml | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-ml.md | Before deploying, you must define what is needed to run the model as a web servi > > If the request data is in a format that is not usable by your model, the script can transform it into an acceptable format. It may also transform the response before returning it to the client. >- > By default when packaging for functions, the input is treated as text. If you are interested in consuming the raw bytes of the input (for instance for Blob triggers), you should use [AMLRequest to accept raw data](../machine-learning/how-to-deploy-advanced-entry-script.md#binary-data). + > By default when packaging for functions, the input is treated as text. If you are interested in consuming the raw bytes of the input (for instance for Blob triggers), you should use [AMLRequest to accept raw data](../machine-learning/v1/how-to-deploy-advanced-entry-script.md#binary-data). For the run function, ensure it connects to a Redis endpoint. When `show_output=True`, the output of the Docker build process is shown. Once t Save the value for **username** and one of the **passwords**. -1. If you don't already have a resource group or app service plan to deploy the service, the these commands demonstrate how to create both: +1. If you don't already have a resource group or app service plan to deploy the service, these commands demonstrate how to create both: ```azurecli-interactive az group create --name myresourcegroup --location "West Europe" |
azure-cache-for-redis | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/policy-reference.md | Title: Built-in policy definitions for Azure Cache for Redis description: Lists Azure Policy built-in policy definitions for Azure Cache for Redis. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/08/2022 Last updated : 08/16/2022 |
azure-government | Azure Services In Fedramp Auditscope | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/compliance/azure-services-in-fedramp-auditscope.md | For current Azure Government regions and available services, see [Products avail This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and Power Platform cloud services in scope for FedRAMP High, DoD IL2, DoD IL4, DoD IL5, and DoD IL6 authorizations across Azure, Azure Government, and Azure Government Secret cloud environments. For other authorization details in Azure Government Secret and Azure Government Top Secret, contact your Microsoft account representative. ## Azure public services by audit scope-*Last updated: February 2022* +*Last updated: August 2022* ### Terminology used This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and | [Notification Hubs](../../notification-hubs/index.yml) | ✅ | ✅ | | [Open Datasets](../../open-datasets/index.yml) | ✅ | ✅ | | [Peering Service](../../peering-service/index.yml) | ✅ | ✅ |+| [Planned Maintenance for VMs](../../virtual-machines/maintenance-and-updates.md) | ✅ | ✅ | | [Power Apps](/powerapps/) | ✅ | ✅ | | [Power Apps Portal](https://powerapps.microsoft.com/portals/) | ✅ | ✅ | | [Power Automate](/power-automate/) (formerly Microsoft Flow) | ✅ | ✅ | This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and | [Public IP](../../virtual-network/ip-services/public-ip-addresses.md) | ✅ | ✅ | | [Resource Graph](../../governance/resource-graph/index.yml) | ✅ | ✅ | | **Service** | **FedRAMP High** | **DoD IL2** |+| [Resource Mover](../../resource-mover/index.yml) | ✅ | ✅ | +| [Route Server](../../route-server/index.yml) | ✅ | ✅ | | [Scheduler](../../scheduler/index.yml) (replaced by [Logic Apps](../../logic-apps/index.yml)) | ✅ | ✅ | | [Service Bus](../../service-bus-messaging/index.yml) | ✅ | ✅ | | [Service Fabric](../../service-fabric/index.yml) | ✅ | ✅ | This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and ****** FedRAMP High authorization for Azure Databricks is applicable to limited regions in Azure. To configure Azure Databricks for FedRAMP High use, contact your Microsoft or Databricks representative. ## Azure Government services by audit scope-*Last updated: June 2022* +*Last updated: August 2022* ### Terminology used This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and | [Azure Sign-up portal](https://signup.azure.com/) | ✅ | ✅ | | | | | [Azure Stack Bridge](/azure-stack/operator/azure-stack-usage-reporting) | ✅ | ✅ | ✅ | ✅ | ✅ | | [Azure Stack Edge](../../databox-online/index.yml) (formerly Data Box Edge) ***** | ✅ | ✅ | ✅ | ✅ | ✅ |+| [Azure Video Indexer](../../azure-video-indexer/index.yml) | ✅ | ✅ | | | | | [Azure Virtual Desktop](../../virtual-desktop/index.yml) (formerly Windows Virtual Desktop) | ✅ | ✅ | ✅ | ✅ | ✅ | | [Backup](../../backup/index.yml) | ✅ | ✅ | ✅ | ✅ | ✅ | | [Bastion](../../bastion/index.yml) | ✅ | ✅ | ✅ | ✅ | | |
azure-monitor | Alerts Common Schema | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-common-schema.md | Title: Common alert schema for Azure Monitor alerts -description: Understanding the common alert schema, why you should use it and how to enable it +description: Understand the common alert schema, why you should use it, and how to enable it. Last updated 03/14/2019 This article describes what the common alert schema is, the benefits of using it ## What is the common alert schema? -The common alert schema standardizes the consumption experience for alert notifications in Azure today. Historically, the three alert types in Azure today (metric, log, and activity log) have had their own email templates, webhook schemas, etc. With the common alert schema, you can now receive alert notifications with a consistent schema. +The common alert schema standardizes the consumption experience for alert notifications in Azure. Today, Azure has three alert types, metric, log, and activity log. Historically, they've had their own email templates and webhook schemas. With the common alert schema, you can now receive alert notifications with a consistent schema. -Any alert instance describes **the resource that was affected** and **the cause of the alert**, and these instances are described in the common schema in the following sections: +Any alert instance describes the resource that was affected and the cause of the alert. These instances are described in the common schema in the following sections: -- **Essentials**: A set of **standardized fields**, common across all alert types, which describe **what resource** the alert is on along with additional common alert metadata (for example, severity or description).-- **Alert context**: A set of fields which describe the **cause of the alert**, with fields that vary **based on the alert type**. For example, a metric alert would have fields like the metric name and metric value in the alert context, whereas an activity log alert would have information about the event that generated the alert.+- **Essentials**: Standardized fields, common across all alert types, describe what resource the alert is on along with other common alert metadata. Examples include severity or description. +- **Alert context**: These fields describe the cause of the alert, with fields that vary based on the alert type. For example, a metric alert would have fields like the metric name and metric value in the alert context. An activity log alert would have information about the event that generated the alert. -The typical integration scenarios we hear from customers involve the routing of the alert instance to the concerned team based on some pivot (for example, resource group), after which the responsible team starts working on it. With the common alert schema, you can have standardized routing logic across alert types by leveraging the essential fields, leaving the context fields as is for the concerned teams to investigate further. +You might want to route the alert instance to a specific team based on a pivot such as a resource group. The common schema uses the essential fields to provide standardized routing logic for all alert types. The team can use the context fields for their investigation. -This means that you can potentially have fewer integrations, making the process of managing and maintaining them a _much_ simpler task. Additionally, future alert payload enrichments (for example, customization, diagnostic enrichment, etc.) 
will only surface up in the common schema. +As a result, you can potentially have fewer integrations, which makes the process of managing and maintaining them a much simpler task. Future alert payload enrichments like customization and diagnostic enrichment will only surface in the common schema. ## What enhancements does the common alert schema bring? -The common alert schema will primarily manifest itself in your alert notifications. The enhancements that you will see are listed below: +You'll see the benefits of using a common alert schema in your alert notifications. A common alert schema provides these benefits: | Action | Enhancements| |:|:|-| Email | A consistent and detailed email template, allowing you to easily diagnose issues at a glance. Embedded deep-links to the alert instance on the portal and the affected resource ensure that you can quickly jump into the remediation process. | -| Webhook/Logic App/Azure Function/Automation Runbook | A consistent JSON structure for all alert types, which allows you to easily build integrations across the different alert types. | +| Email | A consistent and detailed email template. You can use it to easily diagnose issues at a glance. Embedded deep links to the alert instance on the portal and the affected resource ensure that you can quickly jump into the remediation process. | +| Webhook/Azure Logic Apps/Azure Functions/Azure Automation runbook | A consistent JSON structure for all alert types. You can use it to easily build integrations across the different alert types. | The new schema will also enable a richer alert consumption experience across both the Azure portal and the Azure mobile app in the immediate future. -[Learn more about the schema definitions for Webhooks/Logic Apps/Azure Functions/Automation Runbooks.](./alerts-common-schema-definitions.md) +Learn more about the [schema definitions for webhooks, Logic Apps, Azure Functions, and Automation runbooks](./alerts-common-schema-definitions.md). > [!NOTE]-> The following actions do not support the common alert schema: ITSM Connector. +> The following actions don't support the common alert schema: ITSM Connector. ## How do I enable the common alert schema? -You can opt in or opt out to the common alert schema through Action Groups, on both the portal and through the REST API. The toggle to switch to the new schema exists at an action level. For example, you have to separately opt in for an email action and a webhook action. +Use action groups in the Azure portal or use the REST API to enable the common alert schema. You can enable a new schema at the action level. For example, you must separately opt in for an email action and a webhook action. > [!NOTE]-> 1. The following alert types support the common schema by default (no opt-in required): -> - Smart detection alerts -> 1. The following alert types currently do not support the common schema: -> - Alerts generated by [VM insights](../vm/vminsights-overview.md) +> Smart detection alerts support the common schema by default. No opt-in is required. +> +> Alerts generated by [VM insights](../vm/vminsights-overview.md) currently don't support the common schema. +> ### Through the Azure portal - + -1. Open any existing or a new action in an action group. -1. Select ‘Yes’ for the toggle to enable the common alert schema as shown. +1. Open any existing action or a new action in an action group. +1. Select **Yes** to enable the common alert schema. 
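If you manage action groups with Azure PowerShell instead of the portal, the same opt-in is available on the receiver cmdlets. The following sketch assumes the Az.Monitor module and placeholder resource names; note that `Set-AzActionGroup` replaces the full receiver list of the action group with the receivers you pass in.

```powershell
# Sketch: define an email receiver with the common alert schema enabled,
# then update the action group with that receiver.
$emailReceiver = New-AzActionGroupReceiver -Name "john-doe-email" `
    -EmailReceiver -EmailAddress "john.doe@contoso.com" -UseCommonAlertSchema

Set-AzActionGroup -ResourceGroupName "monitoring-rg" -Name "my-action-group" `
    -ShortName "myag" -Receiver $emailReceiver
```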
### Through the Action Groups REST API -You can also use the [Action Groups API](/rest/api/monitor/actiongroups) to opt in to the common alert schema. While making the [create or update](/rest/api/monitor/actiongroups/createorupdate) REST API call, you can set the flag "useCommonAlertSchema" to 'true' (to opt in) or 'false' (to opt out) for any of the following actions - email/webhook/logic app/Azure Function/automation runbook. +You can also use the [Action Groups API](/rest/api/monitor/actiongroups) to opt in to the common alert schema. While you make the [create or update](/rest/api/monitor/actiongroups/createorupdate) REST API call, you can set the flag "useCommonAlertSchema" to `true` to opt in or `false` to opt out for email, webhook, Logic Apps, Azure Functions, or Automation runbook actions. -For example, the following request body made to the [create or update](/rest/api/monitor/actiongroups/createorupdate) REST API will do the following: +For example, the following request body made to the [create or update](/rest/api/monitor/actiongroups/createorupdate) REST API will: -- Enable the common alert schema for the email action "John Doe's email"-- Disable the common alert schema for the email action "Jane Smith's email"-- Enable the common alert schema for the webhook action "Sample webhook"+- Enable the common alert schema for the email action "John Doe's email." +- Disable the common alert schema for the email action "Jane Smith's email." +- Enable the common alert schema for the webhook action "Sample webhook." ```json { For example, the following request body made to the [create or update](/rest/api ## Next steps -- [Common alert schema definitions for Webhooks/Logic Apps/Azure Functions/Automation Runbooks.](./alerts-common-schema-definitions.md)-- [Learn how to create a logic app that leverages the common alert schema to handle all your alerts.](./alerts-common-schema-integrations.md)+- [Learn the common alert schema definitions for webhooks, Logic Apps, Azure Functions, and Automation runbooks](./alerts-common-schema-definitions.md) +- [Learn how to create a logic app that uses the common alert schema to handle all your alerts](./alerts-common-schema-integrations.md) |
azure-monitor | Alerts Processing Rules | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-processing-rules.md | Title: Alert processing rules for Azure Monitor alerts -description: Understanding what alert processing rules in Azure Monitor are and how to configure and manage them. +description: Understand what alert processing rules in Azure Monitor are and how to configure and manage them. Last updated 2/23/2022 -> The previous name for alert processing rules was **action rules**. The Azure resource type of these rules remains **Microsoft.AlertsManagement/actionRules** for backward compatibility. +> The previous name for alert processing rules was action rules. The Azure resource type of these rules remains **Microsoft.AlertsManagement/actionRules** for backward compatibility. -Alert processing rules allow you to apply processing on **fired alerts**. You may be familiar with Azure Monitor alert rules, which are rules that generate new alerts. Alert processing rules are different; they are rules that modify the fired alerts themselves as they are being fired. You can use alert processing rules to add [action groups](./action-groups.md) or remove (suppress) action groups from your fired alerts. Alert processing rules can be applied to different resource scopes, from a single resource to an entire subscription. They can also allow you to apply various filters or have the rule work on a pre-defined schedule. +Alert processing rules allow you to apply processing on fired alerts. You might be familiar with Azure Monitor alert rules, which are rules that generate new alerts. Alert processing rules are different. They're rules that modify the fired alerts themselves as they're being fired. -## What are alert processing rules useful for? +You can use alert processing rules to add [action groups](./action-groups.md) or remove (suppress) action groups from your fired alerts. You can apply alert processing rules to different resource scopes, from a single resource, or to an entire subscription. You can also use them to apply various filters or have the rule work on a predefined schedule. -Some common use cases for alert processing rules include: +Some common use cases for alert processing rules are described here. -### Notification suppression during planned maintenance +## Suppress notifications during planned maintenance -Many customers set up a planned maintenance time for their resources, either on a one-off basis or on a regular schedule. The planned maintenance may cover a single resource like a virtual machine, or multiple resources like all virtual machines in a resource group. So, you may want to stop receiving alert notifications for those resources during the maintenance window. In other cases, you may prefer to not receive alert notifications at all outside of your business hours. Alert processing rules allow you to achieve that. +Many customers set up a planned maintenance time for their resources, either on a one-time basis or on a regular schedule. The planned maintenance might cover a single resource, like a virtual machine, or multiple resources, like all virtual machines in a resource group. So, you might want to stop receiving alert notifications for those resources during the maintenance window. In other cases, you might prefer to not receive alert notifications outside of your business hours. Alert processing rules allow you to achieve that. 
++You could alternatively suppress alert notifications by disabling the alert rules themselves at the beginning of the maintenance window. Then you can reenable them after the maintenance is over. In that case, the alerts won't fire in the first place. That approach has several limitations: ++ * This approach is only practical if the scope of the alert rule is exactly the scope of the resources under maintenance. For example, a single alert rule might cover multiple resources, but only a few of those resources are going through maintenance. So, if you disable the alert rule, you won't be alerted when the remaining resources covered by that rule run into issues. + * You might have many alert rules that cover the resource. Updating all of them is time consuming and error prone. + * You might have some alerts that aren't created by an alert rule at all, like alerts from Azure Backup. -You could alternatively suppress alert notifications by disabling the alert rules themselves at the beginning of the maintenance window, and re-enabling them once the maintenance is over. In that case, the alerts won't fire in the first place. However, that approach has several limitations: - * This approach is only practical if the scope of the alert rule is exactly the scope of the resources under maintenance. For example, a single alert rule might cover multiple resources, but only a few of those resources are going through maintenance. So, if you disable the alert rule, you will not be alerted when the remaining resources covered by that rule run into issues. - * You may have many alert rules that cover the resource. Updating all of them is time consuming and error prone. - * You might have some alerts that are not created by an alert rule at all, like alerts from Azure Backup. - In all these cases, an alert processing rule provides an easy way to achieve the notification suppression goal. -### Management at scale +## Management at scale -Most customers tend to define a few action groups that are used repeatedly in their alert rules. For example, they may want to call a specific action group whenever any high severity alert is fired. As their number of alert rule grows, manually making sure that each alert rule has the right set of action groups is becoming harder. +Most customers tend to define a few action groups that are used repeatedly in their alert rules. For example, they might want to call a specific action group whenever any high-severity alert is fired. As their number of alert rules grows, manually making sure that each alert rule has the right set of action groups is becoming harder. -Alert processing rules allow you to specify that logic in a single rule, instead of having to set it consistently in all your alert rules. They also cover alert types that are not generated by an alert rule. +Alert processing rules allow you to specify that logic in a single rule, instead of having to set it consistently in all your alert rules. They also cover alert types that aren't generated by an alert rule. -### Add action groups to all alert types +## Add action groups to all alert types Azure Monitor alert rules let you select which action groups will be triggered when their alerts are fired. However, not all Azure alert sources let you specify action groups. 
Some examples of such alerts include [Azure Backup alerts](../../backup/backup-azure-monitoring-built-in-monitor.md), [VM Insights guest health alerts](../vm/vminsights-health-alerts.md), [Azure Stack Edge](../../databox-online/azure-stack-edge-gpu-manage-device-event-alert-notifications.md), and Azure Stack Hub. For those alert types, you can use alert processing rules to add action groups. > [!NOTE]-> Alert processing rules do not affect [Azure Service Health](../../service-health/service-health-overview.md) alerts. +> Alert processing rules don't affect [Azure Service Health](../../service-health/service-health-overview.md) alerts. -## Alert processing rule properties +## Scope and filters for alert processing rules <a name="filter-criteria"></a> -An alert processing rule definition covers several aspects: --### Which fired alerts are affected by this rule? --**SCOPE** -Each alert processing rule has a scope. A scope is a list of one or more specific Azure resources, or specific resource group, or an entire subscription. **The alert processing rule will apply to alerts that fired on resources within that scope**. --**FILTERS** -You can also define filters to narrow down which specific subset of alerts are affected within the scope. The available filters are: --* **Alert Context (payload)** - the rule will apply only to alerts that contain any of the filter's strings within the [alert context](./alerts-common-schema-definitions.md#alert-context) section of the alert. This section includes fields specific to each alert type. -* **Alert rule id** - the rule will apply only to alerts from a specific alert rule. The value should be the full resource ID, for example `/subscriptions/SUB1/resourceGroups/RG1/providers/microsoft.insights/metricalerts/MY-API-LATENCY`. -You can locate the alert rule ID by opening a specific alert rule in the portal, clicking "Properties", and copying the "Resource ID" value. You can also locate it by listing your alert rules from PowerShell or CLI. -* **Alert rule name** - the rule will apply only to alerts with this alert rule name. Can also be useful with a "Contains" operator. -* **Description** - the rule will apply only to alerts that contain the specified string within the alert rule description field. -* **Monitor condition** - the rule will apply only to alerts with the specified monitor condition, either "Fired" or "Resolved". -* **Monitor service** - the rule will apply only to alerts from any of the specified monitor services. -For example, use "Platform" to have the rule apply only to metric alerts. -* **Resource** - the rule will apply only to alerts from the specified Azure resource. -For example, you can use this filter with "Does not equal" to exclude one or more resources when the rule's scope is a subscription. -* **Resource group** - the rule will apply only to alerts from the specified resource groups. -For example, you can use this filter with "Does not equal" to exclude one or more resource groups when the rule's scope is a subscription. -* **Resource type** - the rule will apply only to alerts on resource from the specified resource types, such as virtual machines. You can use "Equals" to match one or more specific resources, or you can use contains to match a resource type and all its child resources. -For example, use `resource type contains "MICROSOFT.SQL/SERVERS"` to match both SQL servers and all their child resources, like databases. -* **Severity** - the rule will apply only to alerts with the selected severities. 
--**FILTERS BEHAVIOR** -* If you define multiple filters in a rule, all of them apply - there is a logical AND between all filters. - For example, if you set both `resource type = "Virtual Machines"` and `severity = "Sev0"`, then the rule will apply only for Sev0 alerts on virtual machines in the scope. -* Each filter may include up to five values, and there is a logical OR between the values. - For example, if you set `description contains ["this", "that"]`, then the rule will apply only to alerts whose description contains either "this" or "that". +An alert processing rule definition covers several aspects, as described here. ++### Which fired alerts are affected by this rule? ++This section describes the scope and filters for alert processing rules. ++Each alert processing rule has a scope. A scope is a list of one or more specific Azure resources, a specific resource group, or an entire subscription. *The alert processing rule applies to alerts that fired on resources within that scope*. ++You can also define filters to narrow down which specific subset of alerts are affected within the scope. The available filters are described in the following table. ++| Filter | Description| +|:|:| +Alert context (payload) | The rule applies only to alerts that contain any of the filter's strings within the [alert context](./alerts-common-schema-definitions.md#alert-context) section of the alert. This section includes fields specific to each alert type. | +Alert rule ID | The rule applies only to alerts from a specific alert rule. The value should be the full resource ID, for example, `/subscriptions/SUB1/resourceGroups/RG1/providers/microsoft.insights/metricalerts/MY-API-LATENCY`. To locate the alert rule ID, open a specific alert rule in the portal, select **Properties**, and copy the **Resource ID** value. You can also locate it by listing your alert rules from PowerShell or the Azure CLI. | +Alert rule name | The rule applies only to alerts with this alert rule name. It can also be useful with a **Contains** operator. | +Description | The rule applies only to alerts that contain the specified string within the alert rule description field. | +Monitor condition | The rule applies only to alerts with the specified monitor condition, either **Fired** or **Resolved**. | +Monitor service | The rule applies only to alerts from any of the specified monitor services. For example, use **Platform** to have the rule apply only to metric alerts. | +Resource | The rule applies only to alerts from the specified Azure resource. For example, you can use this filter with **Does not equal** to exclude one or more resources when the rule's scope is a subscription. | +Resource group | The rule applies only to alerts from the specified resource groups. For example, you can use this filter with **Does not equal** to exclude one or more resource groups when the rule's scope is a subscription. | +Resource type | The rule applies only to alerts on resources from the specified resource types, such as virtual machines. You can use **Equals** to match one or more specific resources. You can also use **Contains** to match a resource type and all its child resources. For example, use `resource type contains "MICROSOFT.SQL/SERVERS"` to match both SQL servers and all their child resources, like databases. +Severity | The rule applies only to alerts with the selected severities. | ++#### Alert processing rule filters ++* If you define multiple filters in a rule, all the rules apply. There's a logical AND between all filters. 
+ For example, if you set both `resource type = "Virtual Machines"` and `severity = "Sev0"`, then the rule applies only for `Sev0` alerts on virtual machines in the scope. +* Each filter can include up to five values. There's a logical OR between the values. + For example, if you set `description contains ["this", "that"]`, then the rule applies only to alerts whose description contains either `this` or `that`. ### What should this rule do? Choose one of the following actions: -* **Suppression** -This action removes all the action groups from the affected fired alerts. So, the fired alerts will not invoke any of their action groups (not even at the end of the maintenance window). Those fired alerts will still be visible when you list your alerts in the portal, Azure Resource Graph, API, PowerShell etc. -The suppression action has a higher priority over the "apply action groups" action - if a single fired alert is affected by different alert processing rules of both types, the action groups of that alert will be suppressed. --* **Apply action groups** -This action adds one or more action groups to the affected fired alerts. +* **Suppression**: This action removes all the action groups from the affected fired alerts. So, the fired alerts won't invoke any of their action groups, not even at the end of the maintenance window. Those fired alerts will still be visible when you list your alerts in the portal, Azure Resource Graph, API, or PowerShell. The suppression action has a higher priority over the **Apply action groups** action. If a single fired alert is affected by different alert processing rules of both types, the action groups of that alert will be suppressed. +* **Apply action groups**: This action adds one or more action groups to the affected fired alerts. ### When should this rule apply? -You may optionally control when will the rule apply. By default, the rule is always active. However, you can select a one-off window for this rule to apply, or have a recurring window such as a weekly recurrence. +You can control when the rule will apply. The rule is always active, by default. You can select a one-time window for this rule to apply, or you can have a recurring window, such as a weekly recurrence. -## Configuring an alert processing rule +## Configure an alert processing rule ### [Portal](#tab/portal) -You can access alert processing rules by navigating to the **Alerts** home page in Azure Monitor. -Once there, you can click **Alert processing rules** to see and manage your existing rules, or click **Create** --> **Alert processing rules** to open the new alert processing rule wizard. +You can access alert processing rules by going to the **Alerts** home page in Azure Monitor. Then you can select **Alert processing rules** to see and manage your existing rules. You can also select **Create** > **Alert processing rules** to open the new alert processing rule wizard. +++Let's review the new alert processing rule wizard. +1. On the **Scope** tab, you select which fired alerts are covered by this rule. Pick the **scope** of resources whose alerts will be covered. You can choose multiple resources and resource groups, or an entire subscription. You can also optionally add filters, as previously described. -Lets review the new alert processing rule wizard. -In the first tab (**Scope**), you select which fired alerts are covered by this rule. Pick the **scope** of resources whose alerts will be covered - you may choose multiple resources and resource groups, or an entire subscription. 
You may also optionally add **filters**, as documented above. + :::image type="content" source="media/alerts-processing-rules/alert-processing-rule-scope.png" alt-text="Screenshot that shows the Scope tab of the alert processing rules wizard."::: +1. On the **Rule settings** tab, you select which action to apply on the affected alerts. Choose between **Suppress notifications** or **Apply action group**. If you choose **Apply action group**, you can select existing action groups by selecting **Add action groups**. You can also create a new action group. -In the second tab (**Rule settings**), you select which action to apply on the affected alerts. Choose between **Suppression** or **Apply action group**. If you choose the apply action group, you can either select existing action groups by clicking **Add action groups**, or create a new action group. + :::image type="content" source="media/alerts-processing-rules/alert-processing-rule-settings.png" alt-text="Screenshot that shows the Rule settings tab of the alert processing rules wizard."::: +1. On the **Scheduling** tab, you select an optional schedule for the rule. By default, the rule works all the time, unless you disable it. You can set it to work **On a specific time**, or you can set up a **Recurring** schedule. + + Let's see an example of a schedule for a one-time, overnight, planned maintenance. It starts in the evening and continues until the next morning, in a specific time zone. -In the third tab (**Scheduling**), you select an optional schedule for the rule. By default the rule works all the time, unless you disable it. However, you can set it to work **on a specific time**, or **set up a recurring schedule**. -Let's see an example of a schedule for a one-off, overnight, planned maintenance. It starts in the evening until the next morning, in a specific timezone: + :::image type="content" source="media/alerts-processing-rules/alert-processing-rule-scheduling-one-time.png" alt-text="Screenshot that shows the Scheduling tab of the alert processing rules wizard with a one-time rule."::: + An example of a more complex schedule covers an "outside of business hours" case. It has a recurring schedule with two recurrences. One recurrence is daily from the afternoon until the morning. The other recurrence is weekly and covers full days for Saturday and Sunday. -Let's see an example of a more complex schedule, covering an "outside of business hours" case. It has a recurring schedule with two recurrences - a daily one from the afternoon until the morning, and a weekly one covering Saturday and Sunday (full days). + :::image type="content" source="media/alerts-processing-rules/alert-processing-rule-scheduling-recurring.png" alt-text="Screenshot that shows the Scheduling tab of the alert processing rules wizard with a recurring rule."::: +1. On the **Details** tab, you give this rule a name, pick where it will be stored, and optionally add a description for your reference. -In the fourth tab (**Details**), you give this rule a name, pick where it will be stored, and optionally add a description for your reference. In the fifth tab (**Tags**), you optionally add tags to the rule, and finally in the last tab you can review and create the alert processing rule. +1. On the **Tags** tab, you can optionally add tags to the rule. ++1. On the **Review + create** tab, you can review and create the alert processing rule. ### [Azure CLI](#tab/azure-cli) -You can use the Azure CLI to work with alert processing rules. 
See the `az monitor alert-processing-rules` [page in the Azure CLI docs](/cli/azure/monitor/alert-processing-rule) for detailed documentation and examples. +You can use the Azure CLI to work with alert processing rules. For detailed documentation and examples, see the `az monitor alert-processing-rules` [page in the Azure CLI docs](/cli/azure/monitor/alert-processing-rule). ### Prepare your environment -1. **Install the Auzre CLI** -- Follow the [Installation instructions for the Azure CLI](/cli/azure/install-azure-cli). +1. Install the Azure CLI. - Alternatively, you can use Azure Cloud Shell, which is an interactive shell environment that you use through your browser. To start a Cloud Shell: + Follow the [installation instructions for the Azure CLI](/cli/azure/install-azure-cli). - - Open Cloud Shell by going to [https://shell.azure.com](https://shell.azure.com) + Alternatively, you can use Azure Cloud Shell, which is an interactive shell environment that you use through your browser. To start: - - Select the **Cloud Shell** button on the menu bar at the upper right corner in the [Azure portal](https://portal.azure.com) + - Open [Azure Cloud Shell](https://shell.azure.com). + - Select the **Cloud Shell** button on the menu bar in the upper-right corner in the [Azure portal](https://portal.azure.com). -1. **Sign in** +1. Sign in. - If you're using a local installation of the CLI, sign in using the `az login` [command](/cli/azure/reference-index#az-login). Follow the steps displayed in your terminal to complete the authentication process. + If you're using a local installation of the CLI, sign in by using the `az login` [command](/cli/azure/reference-index#az-login). Follow the steps displayed in your terminal to complete the authentication process. ```azurecli az login ``` -1. **Install the `alertsmanagement` extension** +1. Install the `alertsmanagement` extension. - In order to use the `az monitor alert-processing-rule` commands, install the `alertsmanagement` preview extension. + To use the `az monitor alert-processing-rule` commands, install the `alertsmanagement` preview extension. ```azurecli az extension add --name alertsmanagement You can use the Azure CLI to work with alert processing rules. See the `az monit The installed extension 'alertsmanagement' is in preview. ``` - To learn more about Azure CLI extensions, check [Use extension with Azure CLI](/cli/azure/azure-cli-extensions-overview?). + To learn more about Azure CLI extensions, see [Use extensions with the Azure CLI](/cli/azure/azure-cli-extensions-overview?). ### Create an alert processing rule with the Azure CLI az monitor alert-processing-rule create \ --description "Add action group AG1 to all alerts in the subscription" ``` -The [CLI documentation](/cli/azure/monitor/alert-processing-rule#az-monitor-alert-processing-rule-create) include more examples and an explanation of each parameter. +The [CLI documentation](/cli/azure/monitor/alert-processing-rule#az-monitor-alert-processing-rule-create) includes more examples and an explanation of each parameter. ### [PowerShell](#tab/powershell) -You can use PowerShell to work with alert processing rules. See the `*-AzAlertProcessingRule` commands [in the PowerShell docs](/powershell/module/az.alertsmanagement) for detailed documentation and examples. -+You can use PowerShell to work with alert processing rules. For detailed documentation and examples, see the `*-AzAlertProcessingRule` commands [in the PowerShell docs](/powershell/module/az.alertsmanagement). 
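Before moving on to the PowerShell commands that follow, it may help to see how the suppression action and the filters described earlier combine into a single CLI call. The following is a minimal sketch only, not an example taken from the article: the `--filter-severity` and `--filter-resource-type` parameter names and their "operator followed by values" format are assumptions based on the `alertsmanagement` extension, so confirm them with `az monitor alert-processing-rule create --help` before relying on them.

```azurecli
# Sketch only: suppress notifications for Sev4 alerts fired on virtual machines
# anywhere in the subscription. The --filter-* parameter names and value format
# are assumptions; verify with: az monitor alert-processing-rule create --help
az monitor alert-processing-rule create \
    --name SuppressSev4OnVMs \
    --resource-group RG1 \
    --rule-type RemoveAllActionGroups \
    --scopes "/subscriptions/SUB1" \
    --filter-severity Equals Sev4 \
    --filter-resource-type Equals "MICROSOFT.COMPUTE/VIRTUALMACHINES" \
    --description "Suppress notifications for Sev4 alerts on virtual machines"
```

Because no schedule is specified, a rule like this stays in effect until you disable or delete it.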
### Create an alert processing rule using PowerShell -Use the `Set-AzAlertProcessingRule` command to create alert processing rules. -For example, to create a rule that adds an action group to all alerts in a subscription, run: +Use the `Set-AzAlertProcessingRule` command to create alert processing rules. For example, to create a rule that adds an action group to all alerts in a subscription, run: ```powershell Set-AzAlertProcessingRule ` Set-AzAlertProcessingRule ` -Description "Add action group AG1 to all alerts in the subscription" ``` -The [PowerShell documentation](/cli/azure/monitor/alert-processing-rule#az-monitor-alert-processing-rule-create) include more examples and an explanation of each parameter. +The [PowerShell documentation](/cli/azure/monitor/alert-processing-rule#az-monitor-alert-processing-rule-create) includes more examples and an explanation of each parameter. * * * -## Managing alert processing rules +## Manage alert processing rules ### [Portal](#tab/portal) You can view and manage your alert processing rules from the list view: -From here, you can enable, disable, or delete alert processing rules at scale by selecting the check box next to them. Clicking on an alert processing rule will open it for editing - you can enable or disable the rule in the fourth tab (**Details**). +From here, you can enable, disable, or delete alert processing rules at scale by selecting the checkboxes next to them. Selecting an alert processing rule opens it for editing. You can enable or disable the rule on the **Details** tab. ### [Azure CLI](#tab/azure-cli) -You can view and manage your alert processing rules using the [az monitor alert-processing-rules](/cli/azure/monitor/alert-processing-rule) commands from Azure CLI. +You can view and manage your alert processing rules by using the [az monitor alert-processing-rules](/cli/azure/monitor/alert-processing-rule) commands from Azure CLI. -Before you manage alert processing rules with the Azure CLI, prepare your environment using the instructions provided in [Configuring an alert processing rule](#configuring-an-alert-processing-rule). +Before you manage alert processing rules with the Azure CLI, prepare your environment by using the instructions provided in [Configure an alert processing rule](#configure-an-alert-processing-rule). ```azurecli # List all alert processing rules for a subscription az monitor alert-processing-rules delete --resource-group RG1 --name MyRule ### [PowerShell](#tab/powershell) -You can view and manage your alert processing rules using the [\*-AzAlertProcessingRule](/powershell/module/az.alertsmanagement) commands from Azure CLI. +You can view and manage your alert processing rules by using the [\*-AzAlertProcessingRule](/powershell/module/az.alertsmanagement) commands from the Azure CLI. -Before you manage alert processing rules with the Azure CLI, prepare your environment using the instructions provided in [Configuring an alert processing rule](#configuring-an-alert-processing-rule). +Before you manage alert processing rules with the Azure CLI, prepare your environment by following the instructions in [Configure an alert processing rule](#configure-an-alert-processing-rule). ```powershell # List all alert processing rules for a subscription Remove-AzAlertProcessingRule -ResourceGroupName RG1 -Name MyRule ## Next steps -- [Learn more about alerts in Azure](./alerts-overview.md)+[Learn more about alerts in Azure](./alerts-overview.md) |
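To round out the preceding entry, here is the same kind of suppression scenario sketched with the PowerShell cmdlet it documents. This is a sketch only: apart from `-Description`, the parameter names (`-Scope`, `-Enabled`, `-AlertProcessingRuleType`) are assumptions based on the `Az.AlertsManagement` module, so confirm them with `Get-Help Set-AzAlertProcessingRule` before use.

```powershell
# Sketch only: suppress notifications for all alerts fired on resources in
# resource group RG2. Parameter names other than -Description are assumptions;
# confirm with: Get-Help Set-AzAlertProcessingRule -Detailed
Set-AzAlertProcessingRule `
    -ResourceGroupName RG1 `
    -Name SuppressRG2Alerts `
    -Scope "/subscriptions/SUB1/resourceGroups/RG2" `
    -AlertProcessingRuleType RemoveAllActionGroups `
    -Enabled True `
    -Description "Suppress notifications for alerts on resources in RG2"
```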
azure-monitor | Asp Net Core | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/asp-net-core.md | The preceding steps are enough to help you start collecting server-side telemetr 1. In `_ViewImports.cshtml`, add injection: -```cshtml - @inject Microsoft.ApplicationInsights.AspNetCore.JavaScriptSnippet JavaScriptSnippet -``` + ```cshtml + @inject Microsoft.ApplicationInsights.AspNetCore.JavaScriptSnippet JavaScriptSnippet + ``` 2. In `_Layout.cshtml`, insert `HtmlHelper` at the end of the `<head>` section but before any other script. If you want to report any custom JavaScript telemetry from the page, inject it after this snippet: -```cshtml - @Html.Raw(JavaScriptSnippet.FullScript) - </head> -``` + ```cshtml + @Html.Raw(JavaScriptSnippet.FullScript) + </head> + ``` As an alternative to using the `FullScript`, the `ScriptBody` is available starting in Application Insights SDK for ASP.NET Core version 2.14. Use `ScriptBody` if you need to control the `<script>` tag to set a Content Security Policy: The `.cshtml` file names referenced earlier are from a default MVC application t If your project doesn't include `_Layout.cshtml`, you can still add [client-side monitoring](./website-monitoring.md) by adding the JavaScript snippet to an equivalent file that controls the `<head>` of all pages within your app. Alternatively, you can add the snippet to multiple pages, but we don't recommend it. > [!NOTE]-> JavaScript injection provides a default configuration experience. If you require [configuration](./javascript.md#configuration) beyond setting the connection string, you are required to remove auto-injection as described above and manually add the [JavaScript SDK](./javascript.md#adding-the-javascript-sdk). +> JavaScript injection provides a default configuration experience. If you require [configuration](./javascript.md#configuration) beyond setting the connection string, you are required to remove auto-injection as described above and manually add the [JavaScript SDK](./javascript.md#add-the-javascript-sdk). ## Configure the Application Insights SDK Application Insights automatically collects telemetry about specific workloads w By default, the following automatic-collection modules are enabled. These modules are responsible for automatically collecting telemetry. You can disable or configure them to alter their default behavior. 
-* `RequestTrackingTelemetryModule` - Collects RequestTelemetry from incoming web requests -* `DependencyTrackingTelemetryModule` - Collects [DependencyTelemetry](./asp-net-dependencies.md) from outgoing http calls and sql calls -* `PerformanceCollectorModule` - Collects Windows PerformanceCounters -* `QuickPulseTelemetryModule` - Collects telemetry for showing in Live Metrics portal -* `AppServicesHeartbeatTelemetryModule` - Collects heart beats (which are sent as custom metrics), about Azure App Service environment where application is hosted -* `AzureInstanceMetadataTelemetryModule` - Collects heart beats (which are sent as custom metrics), about Azure VM environment where application is hosted -* `EventCounterCollectionModule` - Collects [EventCounters](eventcounters.md); this module is a new feature and is available in SDK version 2.8.0 and later +* `RequestTrackingTelemetryModule`: Collects RequestTelemetry from incoming web requests +* `DependencyTrackingTelemetryModule`: Collects [DependencyTelemetry](./asp-net-dependencies.md) from outgoing http calls and sql calls +* `PerformanceCollectorModule`: Collects Windows PerformanceCounters +* `QuickPulseTelemetryModule`: Collects telemetry for showing in Live Metrics portal +* `AppServicesHeartbeatTelemetryModule`: Collects heart beats (which are sent as custom metrics), about Azure App Service environment where application is hosted +* `AzureInstanceMetadataTelemetryModule`: Collects heart beats (which are sent as custom metrics), about Azure VM environment where application is hosted +* `EventCounterCollectionModule`: Collects [EventCounters](eventcounters.md); this module is a new feature and is available in SDK version 2.8.0 and later To configure any default `TelemetryModule`, use the extension method `ConfigureTelemetryModule<T>` on `IServiceCollection`, as shown in the following example. If you want to disable telemetry conditionally and dynamically, you can resolve } ``` -The preceding code sample prevents the sending of telemetry to Application Insights. It doesn't prevent any automatic collection modules from collecting telemetry. If you want to remove a particular auto collection module, see [remove the telemetry module](#configuring-or-removing-default-telemetrymodules). +The preceding code sample prevents the sending of telemetry to Application Insights. It doesn't prevent any automatic collection modules from collecting telemetry. If you want to remove a particular auto collection module, see [Remove the telemetry module](#configuring-or-removing-default-telemetrymodules). ## Frequently asked questions For more information about custom data reporting in Application Insights, see [A ### How do I customize ILogger logs collection? -By default, only `Warning` logs and more severe logs are automatically captured. To change this behavior, explicitly override the logging configuration for the provider `ApplicationInsights` as shown below. -The following configuration allows ApplicationInsights to capture all `Information` logs and more severe logs. +By default, only `Warning` logs and more severe logs are automatically captured. To change this behavior, explicitly override the logging configuration for the provider `ApplicationInsights` as shown in the following code. +The following configuration allows Application Insights to capture all `Information` logs and more severe logs. 
```json { The following configuration allows ApplicationInsights to capture all `Informati } ``` -It's important to note that the following example doesn't cause the ApplicationInsights provider to capture `Information` logs. It doesn't capture it because the SDK adds a default logging filter that instructs `ApplicationInsights` to capture only `Warning` logs and more severe logs. ApplicationInsights requires an explicit override. +It's important to note that the following example doesn't cause the Application Insights provider to capture `Information` logs. It doesn't capture it because the SDK adds a default logging filter that instructs `ApplicationInsights` to capture only `Warning` logs and more severe logs. Application Insights requires an explicit override. ```json { If the SDK is installed at build time as shown in this article, you don't need t Yes. Feature support for the SDK is the same in all platforms, with the following exceptions: -* The SDK collects [Event Counters](./eventcounters.md) on Linux because [Performance Counters](./performance-counters.md) are only supported in Windows. Most metrics are the same. +* The SDK collects [event counters](./eventcounters.md) on Linux because [performance counters](./performance-counters.md) are only supported in Windows. Most metrics are the same. * Although `ServerTelemetryChannel` is enabled by default, if the application is running in Linux or macOS, the channel doesn't automatically create a local storage folder to keep telemetry temporarily if there are network issues. Because of this limitation, telemetry is lost when there are temporary network or server issues. To work around this issue, configure a local folder for the channel: -```csharp -using Microsoft.ApplicationInsights.Channel; -using Microsoft.ApplicationInsights.WindowsServer.TelemetryChannel; -- public void ConfigureServices(IServiceCollection services) - { - // The following will configure the channel to use the given folder to temporarily - // store telemetry items during network or Application Insights server issues. - // User should ensure that the given folder already exists - // and that the application has read/write permissions. - services.AddSingleton(typeof(ITelemetryChannel), - new ServerTelemetryChannel () {StorageFolder = "/tmp/myfolder"}); - services.AddApplicationInsightsTelemetry(); - } -``` + ```csharp + using Microsoft.ApplicationInsights.Channel; + using Microsoft.ApplicationInsights.WindowsServer.TelemetryChannel; ++ public void ConfigureServices(IServiceCollection services) + { + // The following will configure the channel to use the given folder to temporarily + // store telemetry items during network or Application Insights server issues. + // User should ensure that the given folder already exists + // and that the application has read/write permissions. + services.AddSingleton(typeof(ITelemetryChannel), + new ServerTelemetryChannel () {StorageFolder = "/tmp/myfolder"}); + services.AddApplicationInsightsTelemetry(); + } + ``` This limitation isn't applicable from version [2.15.0](https://www.nuget.org/packages/Microsoft.ApplicationInsights.AspNetCore/2.15.0) and later. ### Is this SDK supported for the new .NET Core 3.X Worker Service template applications? -This SDK requires `HttpContext`; therefore, it doesn't work in any non-HTTP applications, including the .NET Core 3.X Worker Service applications. 
To enable Application Insights in such applications using the newly released Microsoft.ApplicationInsights.WorkerService SDK, see [Application Insights for Worker Service applications (non-HTTP applications)](worker-service.md). +This SDK requires `HttpContext`. Therefore, it doesn't work in any non-HTTP applications, including the .NET Core 3.X Worker Service applications. To enable Application Insights in such applications using the newly released Microsoft.ApplicationInsights.WorkerService SDK, see [Application Insights for Worker Service applications (non-HTTP applications)](worker-service.md). ## Open-source SDK For the latest updates and bug fixes, see the [release notes](./release-notes.md * [Configure a snapshot collection](./snapshot-debugger.md) to see the state of source code and variables at the moment an exception is thrown. * [Use the API](./api-custom-events-metrics.md) to send your own events and metrics for a detailed view of your app's performance and usage. * Use [availability tests](./monitor-web-app-availability.md) to check your app constantly from around the world.-* [Dependency Injection in ASP.NET Core](/aspnet/core/fundamentals/dependency-injection) +* [Dependency Injection in ASP.NET Core](/aspnet/core/fundamentals/dependency-injection) |
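The entry above mentions the `ConfigureTelemetryModule<T>` extension method, but the example itself falls outside this excerpt. The following is a minimal sketch of the kind of configuration it describes; the `EnableSqlCommandTextInstrumentation` and `AuthenticationApiKey` properties are used here only for illustration and should be verified against the SDK version you have installed.

```csharp
using Microsoft.ApplicationInsights.DependencyCollector;
using Microsoft.ApplicationInsights.Extensibility.PerfCounterCollector.QuickPulse;
using Microsoft.Extensions.DependencyInjection;

public void ConfigureServices(IServiceCollection services)
{
    services.AddApplicationInsightsTelemetry();

    // Example: collect full SQL command text with dependency telemetry.
    services.ConfigureTelemetryModule<DependencyTrackingTelemetryModule>(
        (module, options) => module.EnableSqlCommandTextInstrumentation = true);

    // Example: secure the Live Metrics control channel with an API key.
    services.ConfigureTelemetryModule<QuickPulseTelemetryModule>(
        (module, options) => module.AuthenticationApiKey = "YOUR-LIVE-METRICS-API-KEY");
}
```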
azure-monitor | Create Workspace Resource | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/create-workspace-resource.md | az monitor app-insights component create --app demoApp --location eastus --kind For the full Azure CLI documentation for this command, consult the [Azure CLI documentation](/cli/azure/monitor/app-insights/component#az-monitor-app-insights-component-create). -### Azure PowerShell +### Azure PowerShell +Create a new workspace-based Application Insights resource ++```powershell +New-AzApplicationInsights -Name <String> -ResourceGroupName <String> -Location <String> -WorkspaceResourceId <String> + [-SubscriptionId <String>] + [-ApplicationType <ApplicationType>] + [-DisableIPMasking] + [-DisableLocalAuth] + [-Etag <String>] + [-FlowType <FlowType>] + [-ForceCustomerStorageForProfiler] + [-HockeyAppId <String>] + [-ImmediatePurgeDataOn30Day] + [-IngestionMode <IngestionMode>] + [-Kind <String>] + [-PublicNetworkAccessForIngestion <PublicNetworkAccessType>] + [-PublicNetworkAccessForQuery <PublicNetworkAccessType>] + [-RequestSource <RequestSource>] + [-RetentionInDays <Int32>] + [-SamplingPercentage <Double>] + [-Tag <Hashtable>] + [-DefaultProfile <PSObject>] + [-Confirm] + [-WhatIf] + [<CommonParameters>] +``` ++#### Example ++```powershell +New-AzApplicationInsights -Kind java -ResourceGroupName testgroup -Name test1027 -location eastus -WorkspaceResourceId "/subscriptions/00000000-0000-0000-0000-000000000000/resourcegroups/test1234/providers/microsoft.operationalinsights/workspaces/test1234555" +``` ++For the full PowerShell documentation for this cmdlet, and to learn how to retrieve the instrumentation key consult the [Azure PowerShell documentation](/powershell/module/az.applicationinsights/new-azapplicationinsights). -The `New-AzApplicationInsights` PowerShell command does not currently support creating a workspace-based Application Insights resource. To create a workspace-based resource with PowerShell, you can use the Azure Resource Manager templates below and deploy with PowerShell. ### Azure Resource Manager templates + To create a workspace-based resource, you can use the Azure Resource Manager templates below and deploy with PowerShell. + #### Template file ```json |
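The entry above says you can deploy the Resource Manager template with PowerShell, but the template body is truncated in this excerpt. As a hedged sketch of that deployment step, the pattern usually looks like the following; `template.json` and `parameters.json` are placeholder file names, not names taken from the article.

```powershell
# Sketch only: deploy a workspace-based Application Insights ARM template.
# template.json and parameters.json are placeholder file names.
New-AzResourceGroup -Name "RG1" -Location "eastus" -Force

New-AzResourceGroupDeployment `
    -ResourceGroupName "RG1" `
    -TemplateFile ".\template.json" `
    -TemplateParameterFile ".\parameters.json"
```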
azure-monitor | Distributed Tracing | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/distributed-tracing.md | Title: Distributed Tracing in Azure Application Insights | Microsoft Docs -description: Provides information about Microsoft's support for distributed tracing through our partnership in the OpenCensus project + Title: Distributed tracing in Azure Application Insights | Microsoft Docs +description: This article provides information about Microsoft's support for distributed tracing through our partnership in the OpenCensus project. Last updated 09/17/2018 -# What is Distributed Tracing? +# What is distributed tracing? -The advent of modern cloud and [microservices](https://azure.com/microservices) architectures has given rise to simple, independently deployable services that can help reduce costs while increasing availability and throughput. But while these movements have made individual services easier to understand as a whole, they've made overall systems more difficult to reason about and debug. +The advent of modern cloud and [microservices](https://azure.com/microservices) architectures has given rise to simple, independently deployable services that can help reduce costs while increasing availability and throughput. These movements have made individual services easier to understand. But they've also made overall systems more difficult to reason about and debug. -In monolithic architectures, we've gotten used to debugging with call stacks. Call stacks are brilliant tools for showing the flow of execution (Method A called Method B, which called Method C), along with details and parameters about each of those calls. This is great for monoliths or services running on a single process, but how do we debug when the call is across a process boundary, not simply a reference on the local stack? +In monolithic architectures, we've gotten used to debugging with call stacks. Call stacks are brilliant tools for showing the flow of execution (Method A called Method B, which called Method C), along with details and parameters about each of those calls. This technique is great for monoliths or services running on a single process. But how do we debug when the call is across a process boundary, not simply a reference on the local stack? -That's where distributed tracing comes in. +That's where distributed tracing comes in. -Distributed tracing is the equivalent of call stacks for modern cloud and microservices architectures, with the addition of a simplistic performance profiler thrown in. In Azure Monitor, we provide two experiences for consuming distributed trace data. The first is our [transaction diagnostics](./transaction-diagnostics.md) view, which is like a call stack with a time dimension added in. The transaction diagnostics view provides visibility into one single transaction/request, and is helpful for finding the root cause of reliability issues and performance bottlenecks on a per request basis. +Distributed tracing is the equivalent of call stacks for modern cloud and microservices architectures, with the addition of a simplistic performance profiler thrown in. In Azure Monitor, we provide two experiences for consuming distributed trace data. The first is our [transaction diagnostics](./transaction-diagnostics.md) view, which is like a call stack with a time dimension added in. The transaction diagnostics view provides visibility into one single transaction/request. 
It's helpful for finding the root cause of reliability issues and performance bottlenecks on a per-request basis. -Azure Monitor also offers an [application map](./app-map.md) view which aggregates many transactions to show a topological view of how the systems interact, and what the average performance and error rates are. +Azure Monitor also offers an [application map](./app-map.md) view, which aggregates many transactions to show a topological view of how the systems interact. The map view also shows what the average performance and error rates are. -## How to Enable Distributed Tracing +## Enable distributed tracing Enabling distributed tracing across the services in an application is as simple as adding the proper agent, SDK, or library to each service, based on the language the service was implemented in. -## Enabling via Application Insights through auto-instrumentation or SDKs +## Enable via Application Insights through auto-instrumentation or SDKs -The Application Insights agents and/or SDKs for .NET, .NET Core, Java, Node.js, and JavaScript all support distributed tracing natively. Instructions for installing and configuring each Application Insights SDK are available below: +The Application Insights agents and SDKs for .NET, .NET Core, Java, Node.js, and JavaScript all support distributed tracing natively. Instructions for installing and configuring each Application Insights SDK are available for: * [.NET](asp-net.md) * [.NET Core](asp-net-core.md) The Application Insights agents and/or SDKs for .NET, .NET Core, Java, Node.js, * [JavaScript](./javascript.md#enable-distributed-tracing) * [Python](opencensus-python.md) -With the proper Application Insights SDK installed and configured, tracing information is automatically collected for popular frameworks, libraries, and technologies by SDK dependency auto-collectors. The full list of supported technologies is available in [the Dependency auto-collection documentation](./auto-collect-dependencies.md). +With the proper Application Insights SDK installed and configured, tracing information is automatically collected for popular frameworks, libraries, and technologies by SDK dependency auto-collectors. The full list of supported technologies is available in the [Dependency auto-collection documentation](./auto-collect-dependencies.md). - Additionally, any technology can be tracked manually with a call to [TrackDependency](./api-custom-events-metrics.md) on the [TelemetryClient](./api-custom-events-metrics.md). + Any technology also can be tracked manually with a call to [TrackDependency](./api-custom-events-metrics.md) on the [TelemetryClient](./api-custom-events-metrics.md). ## Enable via OpenTelemetry -Application Insights now supports distributed tracing through [OpenTelemetry](https://opentelemetry.io/). OpenTelemetry provides a vendor-neutral instrumentation to send traces, metrics, and logs to Application Insights. Initially the OpenTelemetry community took on Distributed Tracing. Metrics and Logs are still in progress. A complete observability story includes all three pillars, but currently our [Azure Monitor OpenTelemetry-based exporter preview offerings for .NET, Python, and JavaScript](opentelemetry-enable.md) only include Distributed Tracing. However, our Java OpenTelemetry-based Azure Monitor offering is GA and fully supported. +Application Insights now supports distributed tracing through [OpenTelemetry](https://opentelemetry.io/). 
OpenTelemetry provides a vendor-neutral instrumentation to send traces, metrics, and logs to Application Insights. Initially, the OpenTelemetry community took on distributed tracing. Metrics and logs are still in progress. -The following pages consist of language-by-language guidance to enable and configure Microsoft's OpenTelemetry-based offerings. Importantly, we share the available functionality and limitations of each offering so you can determine whether OpenTelemetry is right for your project. +A complete observability story includes all three pillars, but currently our [Azure Monitor OpenTelemetry-based exporter preview offerings for .NET, Python, and JavaScript](opentelemetry-enable.md) only include distributed tracing. Our Java OpenTelemetry-based Azure Monitor offering is generally available and fully supported. ++The following pages consist of language-by-language guidance to enable and configure Microsoft's OpenTelemetry-based offerings. Importantly, we share the available functionality and limitations of each offering so you can determine whether OpenTelemetry is right for your project. * [.NET](opentelemetry-enable.md?tabs=net) * [Java](java-in-process-agent.md) The following pages consist of language-by-language guidance to enable and confi ## Enable via OpenCensus -In addition to the Application Insights SDKs, Application Insights also supports distributed tracing through [OpenCensus](https://opencensus.io/). OpenCensus is an open source, vendor-agnostic, single distribution of libraries to provide metrics collection and distributed tracing for services. It also enables the open source community to enable distributed tracing with popular technologies like Redis, Memcached, or MongoDB. [Microsoft collaborates on OpenCensus with several other monitoring and cloud partners](https://open.microsoft.com/2018/06/13/microsoft-joins-the-opencensus-project/). +In addition to the Application Insights SDKs, Application Insights also supports distributed tracing through [OpenCensus](https://opencensus.io/). OpenCensus is an open-source, vendor-agnostic, single distribution of libraries to provide metrics collection and distributed tracing for services. It also enables the open-source community to enable distributed tracing with popular technologies like Redis, Memcached, or MongoDB. [Microsoft collaborates on OpenCensus with several other monitoring and cloud partners](https://open.microsoft.com/2018/06/13/microsoft-joins-the-opencensus-project/). -[Python](opencensus-python.md) +For more information on OpenCensus for Python, see [Set up Azure Monitor for your Python application](opencensus-python.md). -The OpenCensus website maintains API reference documentation for [Python](https://opencensus.io/api/python/trace/usage.html) and [Go](https://godoc.org/go.opencensus.io), as well as various different guides for using OpenCensus. +The OpenCensus website maintains API reference documentation for [Python](https://opencensus.io/api/python/trace/usage.html), [Go](https://godoc.org/go.opencensus.io), and various guides for using OpenCensus. ## Next steps * [OpenCensus Python usage guide](https://opencensus.io/api/python/trace/usage.html) * [Application map](./app-map.md) * [End-to-end performance monitoring](../app/tutorial-performance.md)- |
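As a rough illustration of the OpenCensus route described in the entry above, here is a minimal Python sketch that sends a trace to Application Insights through the `opencensus-ext-azure` exporter. It's a sketch under the assumption that you install `opencensus-ext-azure` and substitute your own connection string; see the linked Python setup article for the supported configuration.

```python
# Sketch only: minimal OpenCensus distributed-tracing setup for Application Insights.
# Requires the opencensus-ext-azure package; the connection string is a placeholder.
from opencensus.ext.azure.trace_exporter import AzureExporter
from opencensus.trace.samplers import ProbabilitySampler
from opencensus.trace.tracer import Tracer

tracer = Tracer(
    exporter=AzureExporter(
        connection_string="InstrumentationKey=00000000-0000-0000-0000-000000000000"
    ),
    sampler=ProbabilitySampler(1.0),
)

with tracer.span(name="example-operation"):
    # Work performed inside the span is recorded as part of the distributed trace.
    pass
```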
azure-monitor | Javascript | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/javascript.md | Title: Azure Application Insights for JavaScript web apps -description: Get page view and session counts, web client data, Single Page Applications (SPA), and track usage patterns. Detect exceptions and performance issues in JavaScript web pages. +description: Get page view and session counts, web client data, and single-page applications and track usage patterns. Detect exceptions and performance issues in JavaScript webpages. Last updated 08/06/2020 ms.devlang: javascript-# Application Insights for web pages +# Application Insights for webpages > [!NOTE]-> We continue to assess the viability of OpenTelemetry for browser scenarios. The Application Insights JavaScript SDK is recommended for the forseeable future, which is fully compatible with OpenTelemetry distributed tracing. +> We continue to assess the viability of OpenTelemetry for browser scenarios. We recommend the Application Insights JavaScript SDK for the forseeable future. It's fully compatible with OpenTelemetry distributed tracing. -Find out about the performance and usage of your web page or app. If you add [Application Insights](app-insights-overview.md) to your page script, you get timings of page loads and AJAX calls, counts, and details of browser exceptions and AJAX failures, as well as users and session counts. All of this telemetry can be segmented by page, client OS and browser version, geo location, and other dimensions. You can set alerts on failure counts or slow page loading. And by inserting trace calls in your JavaScript code, you can track how the different features of your web page application are used. +Find out about the performance and usage of your webpage or app. If you add [Application Insights](app-insights-overview.md) to your page script, you get timings of page loads and AJAX calls, counts, and details of browser exceptions and AJAX failures. You also get user and session counts. All this telemetry can be segmented by page, client OS and browser version, geo location, and other dimensions. You can set alerts on failure counts or slow page loading. By inserting trace calls in your JavaScript code, you can track how the different features of your webpage application are used. -Application Insights can be used with any web pages - you just add a short piece of JavaScript, Node.js has a [standalone SDK](nodejs.md). If your web service is [Java](java-in-process-agent.md) or [ASP.NET](asp-net.md), you can use the server-side SDKs with the client-side JavaScript SDK to get an end-to-end understanding of your app's performance. +Application Insights can be used with any webpages by adding a short piece of JavaScript. Node.js has a [standalone SDK](nodejs.md). If your web service is [Java](java-in-process-agent.md) or [ASP.NET](asp-net.md), you can use the server-side SDKs with the client-side JavaScript SDK to get an end-to-end understanding of your app's performance. -## Adding the JavaScript SDK +## Add the JavaScript SDK -1. First you need an Application Insights resource. If you don't already have a resource and connection string, follow the [create a new resource instructions](create-new-resource.md). -2. Copy the [connection string](#connection-string-setup) for the resource where you want your JavaScript telemetry to be sent (from step 1.) You'll add it to the `connectionString` setting of the Application Insights JavaScript SDK. -3. 
Add the Application Insights JavaScript SDK to your web page or app via one of the following two options: - * [npm Setup](#npm-based-setup) - * [JavaScript Snippet](#snippet-based-setup) +1. First you need an Application Insights resource. If you don't already have a resource and connection string, follow the instructions to [create a new resource](create-new-resource.md). +1. Copy the [connection string](#connection-string-setup) for the resource where you want your JavaScript telemetry to be sent (from step 1). You'll add it to the `connectionString` setting of the Application Insights JavaScript SDK. +1. Add the Application Insights JavaScript SDK to your webpage or app via one of the following two options: + * [Node Package Manager (npm) setup](#npm-based-setup) + * [JavaScript snippet](#snippet-based-setup) > [!WARNING]-> `@microsoft/applicationinsights-web-basic - AISKULight` does not support the use of connection strings. +> `@microsoft/applicationinsights-web-basic - AISKULight` doesn't support the use of connection strings. -> [!IMPORTANT] -> Only use one method to add the JavaScript SDK to your application. If you use the NPM Setup, don't use the Snippet and vice versa. +Only use one method to add the JavaScript SDK to your application. If you use the npm setup, don't use the snippet and vice versa. > [!NOTE]-> NPM Setup installs the JavaScript SDK as a dependency to your project, enabling IntelliSense, whereas the Snippet fetches the SDK at runtime. Both support the same features. However, developers who desire more custom events and configuration generally opt for NPM Setup whereas users looking for quick enablement of out-of-the-box web analytics opt for the Snippet. +> The npm setup installs the JavaScript SDK as a dependency to your project and enables IntelliSense. The snippet fetches the SDK at runtime. Both support the same features. Developers who want more custom events and configuration generally opt for the npm setup. Users who are looking for quick enablement of out-of-the-box web analytics opt for the snippet. -### npm based setup +### npm-based setup -Install via Node Package Manager (npm). +Install via npm. ```sh npm i --save @microsoft/applicationinsights-web ``` > [!Note]-> **Typings are included with this package**, so you do **not** need to install a separate typings package. +> *Typings are included with this package*, so you do *not* need to install a separate typings package. ```js import { ApplicationInsights } from '@microsoft/applicationinsights-web' appInsights.loadAppInsights(); appInsights.trackPageView(); // Manually call trackPageView to establish the current user/session/pageview ``` -### Snippet based setup +### Snippet-based setup -If your app doesn't use npm, you can directly instrument your webpages with Application Insights by pasting this snippet at the top of each your pages. Preferably, it should be the first script in your `<head>` section so that it can monitor any potential issues with all of your dependencies and optionally any JavaScript errors. If you're using Blazor Server App, add the snippet at the top of the file `_Host.cshtml` in the `<head>` section. +If your app doesn't use npm, you can directly instrument your webpages with Application Insights by pasting this snippet at the top of each of your pages. Preferably, it should be the first script in your `<head>` section. That way it can monitor any potential issues with all your dependencies and optionally any JavaScript errors. 
If you're using Blazor Server App, add the snippet at the top of the file `_Host.cshtml` in the `<head>` section. -Starting from version 2.5.5, the page view event will include a new tag "ai.internal.snippet" that contains the identified snippet version. This feature assists with tracking which version of the snippet your application is using. +Starting from version 2.5.5, the page view event will include the new tag "ai.internal.snippet" that contains the identified snippet version. This feature assists with tracking which version of the snippet your application is using. -The current Snippet (listed below) is version "5", the version is encoded in the snippet as sv:"#" and the [current version is also available on GitHub](https://go.microsoft.com/fwlink/?linkid=2156318). +The current snippet that follows is version "5." The version is encoded in the snippet as `sv:"#"`. The [current version is also available on GitHub](https://go.microsoft.com/fwlink/?linkid=2156318). ```html <script type="text/javascript"> cfg: { // Application Insights Configuration ``` > [!NOTE]-> For readability and to reduce possible JavaScript errors, all of the possible configuration options are listed on a new line in snippet code above, if you don't want to change the value of a commented line it can be removed. +> For readability and to reduce possible JavaScript errors, all the possible configuration options are listed on a new line in the preceding snippet code. If you don't want to change the value of a commented line, it can be removed. +#### Report script load failures -#### Reporting Script load failures +This version of the snippet detects and reports failures when the SDK is loaded from the CDN as an exception to the Azure Monitor portal (under the failures > exceptions > browser). The exception provides visibility into failures of this type so that you're aware your application isn't reporting telemetry (or other exceptions) as expected. This signal is an important measurement in understanding that you've lost telemetry because the SDK didn't load or initialize, which can lead to: -This version of the snippet detects and reports failures when loading the SDK from the CDN as an exception to the Azure Monitor portal (under the failures > exceptions > browser). The exception provides visibility into failures of this type so that you're aware your application isn't reporting telemetry (or other exceptions) as expected. This signal is an important measurement in understanding that you have lost telemetry because the SDK didn't load or initialize which can lead to: -- Under-reporting of how users are using (or trying to use) your site;-- Missing telemetry on how your end users are using your site;-- Missing JavaScript errors that could potentially be blocking your end users from successfully using your site.+- Underreporting of how users are using or trying to use your site. +- Missing telemetry on how your users are using your site. +- Missing JavaScript errors that could potentially be blocking your users from successfully using your site. -For details on this exception see the [SDK load failure](javascript-sdk-load-failure.md) troubleshooting page. +For information on this exception, see the [SDK load failure](javascript-sdk-load-failure.md) troubleshooting page. 
-Reporting of this failure as an exception to the portal doesn't use the configuration option ```disableExceptionTracking``` from the application insights configuration and therefore if this failure occurs it will always be reported by the snippet, even when the window.onerror support is disabled. +Reporting of this failure as an exception to the portal doesn't use the configuration option ```disableExceptionTracking``` from the Application Insights configuration. For this reason, if this failure occurs, it will always be reported by the snippet, even when `window.onerror` support is disabled. -Reporting of SDK load failures isn't supported on Internet Explorer 8 or earlier. This behavior reduces the minified size of the snippet by assuming that most environments aren't exclusively IE 8 or less. If you have this requirement and you wish to receive these exceptions, you'll need to either include a fetch poly fill or create your own snippet version that uses ```XDomainRequest``` instead of ```XMLHttpRequest```, it's recommended that you use the [provided snippet source code](https://github.com/microsoft/ApplicationInsights-JS/blob/master/AISKU/snippet/snippet.js) as a starting point. +Reporting of SDK load failures isn't supported on Internet Explorer 8 or earlier. This behavior reduces the minified size of the snippet by assuming that most environments aren't exclusively Internet Explorer 8 or less. If you have this requirement and you want to receive these exceptions, you'll need to either include a fetch poly fill or create your own snippet version that uses ```XDomainRequest``` instead of ```XMLHttpRequest```. Use the [provided snippet source code](https://github.com/microsoft/ApplicationInsights-JS/blob/master/AISKU/snippet/snippet.js) as a starting point. > [!NOTE]-> If you are using a previous version of the snippet, it is highly recommended that you update to the latest version so that you will receive these previously unreported issues. +> If you're using a previous version of the snippet, update to the latest version so that you'll receive these previously unreported issues. #### Snippet configuration options -All configuration options have been moved towards the end of the script. This placement avoids accidentally introducing JavaScript errors that wouldn't just cause the SDK to fail to load, but also it would disable the reporting of the failure. +All configuration options have been moved toward the end of the script. This placement avoids accidentally introducing JavaScript errors that wouldn't just cause the SDK to fail to load, but also it would disable the reporting of the failure. -Each configuration option is shown above on a new line, if you don't wish to override the default value of an item listed as [optional] you can remove that line to minimize the resulting size of your returned page. +Each configuration option is shown above on a new line. If you don't want to override the default value of an item listed as [optional], you can remove that line to minimize the resulting size of your returned page. ++The available configuration options are listed in this table. -The available configuration options are - | Name | Type | Description |||--| src | string **[required]** | The full URL for where to load the SDK from. This value is used for the "src" attribute of a dynamically added <script /> tag. You can use the public CDN location or your own privately hosted one. -| name | string *[optional]* | The global name for the initialized SDK, defaults to `appInsights`. 
So ```window.appInsights``` will be a reference to the initialized instance. Note: if you provide a name value or a previous instance appears to be assigned (via the global name appInsightsSDK) then this name value will also be defined in the global namespace as ```window.appInsightsSDK=<name value>```. The SDK initialization code uses this reference to ensure it's initializing and updating the correct snippet skeleton and proxy methods. -| ld | number in ms *[optional]* | Defines the load delay to wait before attempting to load the SDK. Default value is 0ms and any negative value will immediately add a script tag to the <head> region of the page, which will then block the page load event until to script is loaded (or fails). -| useXhr | boolean *[optional]* | This setting is used only for reporting SDK load failures. Reporting will first attempt to use fetch() if available and then fallback to XHR, setting this value to true just bypasses the fetch check. Use of this value is only be required if your application is being used in an environment where fetch would fail to send the failure events. -| crossOrigin | string *[optional]* | By including this setting, the script tag added to download the SDK will include the crossOrigin attribute with this string value. When not defined (the default) no crossOrigin attribute is added. Recommended values aren't defined (the default); ""; or "anonymous" (For all valid values see [HTML attribute: `crossorigin`](https://developer.mozilla.org/en-US/docs/Web/HTML/Attributes/crossorigin) documentation) -| cfg | object **[required]** | The configuration passed to the Application Insights SDK during initialization. +| src | string *[required]* | The full URL for where to load the SDK from. This value is used for the "src" attribute of a dynamically added <script /> tag. You can use the public CDN location or your own privately hosted one. +| name | string *[optional]* | The global name for the initialized SDK, defaults to `appInsights`. So ```window.appInsights``` will be a reference to the initialized instance. If you provide a name value or a previous instance appears to be assigned (via the global name appInsightsSDK), this name value will also be defined in the global namespace as ```window.appInsightsSDK=<name value>```. The SDK initialization code uses this reference to ensure it's initializing and updating the correct snippet skeleton and proxy methods. +| ld | number in ms *[optional]* | Defines the load delay to wait before attempting to load the SDK. Default value is 0ms. Any negative value will immediately add a script tag to the <head> region of the page. The page load event is then blocked until the script is loaded or fails. +| useXhr | boolean *[optional]* | This setting is used only for reporting SDK load failures. Reporting will first attempt to use fetch() if available and then fall back to XHR. Setting this value to true just bypasses the fetch check. Use of this value is only required if your application is being used in an environment where fetch would fail to send the failure events. +| crossOrigin | string *[optional]* | By including this setting, the script tag added to download the SDK will include the crossOrigin attribute with this string value. When not defined (the default), no crossOrigin attribute is added. Recommended values aren't defined (the default); ""; or "anonymous." For all valid values, see [HTML attribute: `crossorigin`](https://developer.mozilla.org/en-US/docs/Web/HTML/Attributes/crossorigin) documentation. 
+| cfg | object *[required]* | The configuration passed to the Application Insights SDK during initialization. -### Connection String Setup +### Connection string setup [!INCLUDE [azure-monitor-log-analytics-rebrand](../../../includes/azure-monitor-instrumentation-key-deprecation.md)] appInsights.loadAppInsights(); appInsights.trackPageView(); ``` -### Sending telemetry to the Azure portal +### Send telemetry to the Azure portal -By default, the Application Insights JavaScript SDK auto-collects many telemetry items that are helpful in determining the health of your application and the underlying user experience. +By default, the Application Insights JavaScript SDK autocollects many telemetry items that are helpful in determining the health of your application and the underlying user experience. This telemetry includes: -- **Uncaught exceptions** in your app, including information on- - Stack trace - - Exception details and message accompanying the error - - Line & column number of error - - URL where error was raised -- **Network Dependency Requests** made by your app **XHR** and **Fetch** (fetch collection is disabled by default) requests, include information on- - Url of dependency source - - Command & Method used to request the dependency - - Duration of the request - - Result code and success status of the request - - ID (if any) of user making the request - - Correlation context (if any) where request is made -- **User information** (for example, Location, network, IP)-- **Device information** (for example, Browser, OS, version, language, model)+- **Uncaught exceptions** in your app, including information on the: + - Stack trace. + - Exception details and message accompanying the error. + - Line and column number of the error. + - URL where the error was raised. +- **Network Dependency Requests** made by your app **XHR** and **Fetch** (fetch collection is disabled by default) requests include information on the: + - URL of dependency source. + - Command and method used to request the dependency. + - Duration of the request. + - Result code and success status of the request. + - ID (if any) of the user making the request. + - Correlation context (if any) where the request is made. +- **User information** (for example, location, network, IP) +- **Device information** (for example, browser, OS, version, language, model) - **Session information** ### Telemetry initializers-Telemetry initializers are used to modify the contents of collected telemetry before being sent from the user's browser. They can also be used to prevent certain telemetry from being sent, by returning `false`. Multiple telemetry initializers can be added to your Application Insights instance, and they're executed in order of adding them. -The input argument to `addTelemetryInitializer` is a callback that takes a [`ITelemetryItem`](https://github.com/microsoft/ApplicationInsights-JS/blob/master/API-reference.md#addTelemetryInitializer) as an argument and returns a `boolean` or `void`. If returning `false`, the telemetry item isn't sent, else it proceeds to the next telemetry initializer, if any, or is sent to the telemetry collection endpoint. +Telemetry initializers are used to modify the contents of collected telemetry before being sent from the user's browser. They can also be used to prevent certain telemetry from being sent by returning `false`. Multiple telemetry initializers can be added to your Application Insights instance. They're executed in the order of adding them. 
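For example, here's a minimal sketch of registering two initializers on an already-initialized `appInsights` instance. The envelope fields used (`tags`, `baseType`, `baseData`) follow the `ITelemetryItem` shape, and the health-check filter shown is purely illustrative:

```js
// Runs first: stamp every telemetry item with an application version tag.
appInsights.addTelemetryInitializer((envelope) => {
  envelope.tags = envelope.tags || {};
  envelope.tags["ai.application.ver"] = "1.2.3"; // illustrative version string
});

// Runs second: drop dependency telemetry for a hypothetical health-check endpoint.
appInsights.addTelemetryInitializer((envelope) => {
  if (envelope.baseType === "RemoteDependencyData" &&
      envelope.baseData &&
      typeof envelope.baseData.target === "string" &&
      envelope.baseData.target.indexOf("/health") !== -1) {
    return false; // returning false prevents the item from being sent
  }
});
```

Because initializers run in the order they were added, the version tag is applied before the filter decides whether the item is sent.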
++The input argument to `addTelemetryInitializer` is a callback that takes a [`ITelemetryItem`](https://github.com/microsoft/ApplicationInsights-JS/blob/master/API-reference.md#addTelemetryInitializer) as an argument and returns `boolean` or `void`. If `false` is returned, the telemetry item isn't sent, or else it proceeds to the next telemetry initializer, if any, or is sent to the telemetry collection endpoint. An example of using telemetry initializers:+ ```ts var telemetryInitializer = (envelope) => { envelope.data.someField = 'This item passed through my telemetry initializer'; appInsights.trackTrace({message: 'this message will not be sent'}); // Not sent ``` ## Configuration-Most configuration fields are named such that they can be defaulted to false. All fields are optional except for `connectionString`. ++Most configuration fields are named so that they can default to false. All fields are optional except for `connectionString`. | Name | Description | Default | ||-||-| connectionString | **Required**<br>Connection string that you obtained from the Azure portal. | string<br/>null | -| accountId | An optional account ID, if your app groups users into accounts. No spaces, commas, semicolons, equals, or vertical bars | string<br/>null | +| connectionString | *Required*<br>Connection string that you obtained from the Azure portal. | string<br/>null | +| accountId | An optional account ID if your app groups users into accounts. No spaces, commas, semicolons, equal signs, or vertical bars. | string<br/>null | | sessionRenewalMs | A session is logged if the user is inactive for this amount of time in milliseconds. | numeric<br/>1800000<br/>(30 mins) | | sessionExpirationMs | A session is logged if it has continued for this amount of time in milliseconds. | numeric<br/>86400000<br/>(24 hours) |-| maxBatchSizeInBytes | Max size of telemetry batch. If a batch exceeds this limit, it's immediately sent and a new batch is started | numeric<br/>10000 | -| maxBatchInterval | How long to batch telemetry for before sending (milliseconds) | numeric<br/>15000 | +| maxBatchSizeInBytes | Maximum size of telemetry batch. If a batch exceeds this limit, it's immediately sent and a new batch is started. | numeric<br/>10000 | +| maxBatchInterval | How long to batch telemetry before sending (milliseconds). | numeric<br/>15000 | | disable​ExceptionTracking | If true, exceptions aren't autocollected. | boolean<br/> false | | disableTelemetry | If true, telemetry isn't collected or sent. | boolean<br/>false |-| enableDebug | If true, **internal** debugging data is thrown as an exception **instead** of being logged, regardless of SDK logging settings. Default is false. <br>***Note:*** Enabling this setting will result in dropped telemetry whenever an internal error occurs. This setting can be useful for quickly identifying issues with your configuration or usage of the SDK. If you don't want to lose telemetry while debugging, consider using `loggingLevelConsole` or `loggingLevelTelemetry` instead of `enableDebug`. | boolean<br/>false | -| loggingLevelConsole | Logs **internal** Application Insights errors to console. <br>0: off, <br>1: Critical errors only, <br>2: Everything (errors & warnings) | numeric<br/> 0 | -| loggingLevelTelemetry | Sends **internal** Application Insights errors as telemetry. 
<br>0: off, <br>1: Critical errors only, <br>2: Everything (errors & warnings) | numeric<br/> 1 | -| diagnosticLogInterval | (internal) Polling interval (in ms) for internal logging queue | numeric<br/> 10000 | -| samplingPercentage | Percentage of events that will be sent. Default is 100, meaning all events are sent. Set this option if you wish to preserve your data cap for large-scale applications. | numeric<br/>100 | -| autoTrackPageVisitTime | If true, on a pageview, the _previous_ instrumented page's view time is tracked and sent as telemetry and a new timer is started for the current pageview. It's sent as a custom metric named `PageVisitTime` in `milliseconds` and is calculated via the Date [now()](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Date/now) function (if available) and falls back to (new Date()).[getTime()](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Date/getTime) if now() is unavailable (IE8 or less). Default is false. | boolean<br/>false | +| enableDebug | If true, *internal* debugging data is thrown as an exception *instead* of being logged, regardless of SDK logging settings. Default is false. <br>*Note:* Enabling this setting will result in dropped telemetry whenever an internal error occurs. This setting can be useful for quickly identifying issues with your configuration or usage of the SDK. If you don't want to lose telemetry while debugging, consider using `loggingLevelConsole` or `loggingLevelTelemetry` instead of `enableDebug`. | boolean<br/>false | +| loggingLevelConsole | Logs *internal* Application Insights errors to console. <br>0: off, <br>1: Critical errors only, <br>2: Everything (errors & warnings) | numeric<br/> 0 | +| loggingLevelTelemetry | Sends *internal* Application Insights errors as telemetry. <br>0: off, <br>1: Critical errors only, <br>2: Everything (errors & warnings) | numeric<br/> 1 | +| diagnosticLogInterval | (internal) Polling interval (in ms) for internal logging queue. | numeric<br/> 10000 | +| samplingPercentage | Percentage of events that will be sent. Default is 100, meaning all events are sent. Set this option if you want to preserve your data cap for large-scale applications. | numeric<br/>100 | +| autoTrackPageVisitTime | If true, on a pageview, the _previous_ instrumented page's view time is tracked and sent as telemetry and a new timer is started for the current pageview. It's sent as a custom metric named `PageVisitTime` in `milliseconds` and is calculated via the Date [now()](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Date/now) function (if available) and falls back to (new Date()).[getTime()](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Date/getTime) if now() is unavailable (Internet Explorer 8 or less). Default is false. | boolean<br/>false | | disableAjaxTracking | If true, Ajax calls aren't autocollected. | boolean<br/> false | | disableFetchTracking | If true, Fetch requests aren't autocollected.|boolean<br/>true |-| overridePageViewDuration | If true, default behavior of trackPageView is changed to record end of page view duration interval when trackPageView is called. If false and no custom duration is provided to trackPageView, the page view performance is calculated using the navigation timing API. |boolean<br/> -| maxAjaxCallsPerView | Default 500 - controls how many Ajax calls will be monitored per page view. Set to -1 to monitor all (unlimited) Ajax calls on the page. 
| numeric<br/> 500 | +| overridePageViewDuration | If true, default behavior of trackPageView is changed to record end of page view duration interval when trackPageView is called. If false and no custom duration is provided to trackPageView, the page view performance is calculated by using the navigation timing API. |boolean<br/> +| maxAjaxCallsPerView | Default 500 controls how many Ajax calls will be monitored per page view. Set to -1 to monitor all (unlimited) Ajax calls on the page. | numeric<br/> 500 | | disableDataLossAnalysis | If false, internal telemetry sender buffers will be checked at startup for items not yet sent. | boolean<br/> true |-| disable​CorrelationHeaders | If false, the SDK will add two headers ('Request-Id' and 'Request-Context') to all dependency requests to correlate them with corresponding requests on the server side. | boolean<br/> false | -| correlationHeader​ExcludedDomains | Disable correlation headers for specific domains | string[]<br/>undefined | -| correlationHeader​ExcludePatterns | Disable correlation headers using regular expressions | regex[]<br/>undefined | -| correlationHeader​Domains | Enable correlation headers for specific domains | string[]<br/>undefined | -| disableFlush​OnBeforeUnload | If true, flush method won't be called when onBeforeUnload event triggers | boolean<br/> false | -| enableSessionStorageBuffer | If true, the buffer with all unsent telemetry is stored in session storage. The buffer is restored on page load | boolean<br />true | -| cookieCfg | Defaults to cookie usage enabled see [ICookieCfgConfig](#icookiemgrconfig) settings for full defaults. | [ICookieCfgConfig](#icookiemgrconfig)<br>(Since 2.6.0)<br/>undefined | -| ~~isCookieUseDisabled~~<br>disableCookiesUsage | If true, the SDK won't store or read any data from cookies. Disables the User and Session cookies and renders the usage blades and experiences useless. isCookieUseDisable is deprecated in favor of disableCookiesUsage, when both are provided disableCookiesUsage takes precedence.<br>(Since v2.6.0) And if `cookieCfg.enabled` is also defined it will take precedence over these values, Cookie usage can be re-enabled after initialization via the core.getCookieMgr().setEnabled(true). | alias for [`cookieCfg.enabled`](#icookiemgrconfig)<br>false | -| cookieDomain | Custom cookie domain. This option is helpful if you want to share Application Insights cookies across subdomains.<br>(Since v2.6.0) If `cookieCfg.domain` is defined it will take precedence over this value. | alias for [`cookieCfg.domain`](#icookiemgrconfig)<br>null | -| cookiePath | Custom cookie path. This option is helpful if you want to share Application Insights cookies behind an application gateway.<br>If `cookieCfg.path` is defined it will take precedence over this value. | alias for [`cookieCfg.path`](#icookiemgrconfig)<br>(Since 2.6.0)<br/>null | -| isRetryDisabled | If false, retry on 206 (partial success), 408 (timeout), 429 (too many requests), 500 (internal server error), 503 (service unavailable), and 0 (offline, only if detected) | boolean<br/>false | +| disable​CorrelationHeaders | If false, the SDK will add two headers (`Request-Id` and `Request-Context`) to all dependency requests to correlate them with corresponding requests on the server side. | boolean<br/> false | +| correlationHeader​ExcludedDomains | Disable correlation headers for specific domains. | string[]<br/>undefined | +| correlationHeader​ExcludePatterns | Disable correlation headers by using regular expressions. 
| regex[]<br/>undefined | +| correlationHeader​Domains | Enable correlation headers for specific domains. | string[]<br/>undefined | +| disableFlush​OnBeforeUnload | If true, flush method won't be called when `onBeforeUnload` event triggers. | boolean<br/> false | +| enableSessionStorageBuffer | If true, the buffer with all unsent telemetry is stored in session storage. The buffer is restored on page load. | boolean<br />true | +| cookieCfg | Defaults to cookie usage enabled. For full defaults, see [ICookieCfgConfig](#icookiemgrconfig) settings. | [ICookieCfgConfig](#icookiemgrconfig)<br>(Since 2.6.0)<br/>undefined | +| ~~isCookieUseDisabled~~<br>disableCookiesUsage | If true, the SDK won't store or read any data from cookies. Disables the User and Session cookies and renders the usage blades and experiences useless. `isCookieUseDisable` is deprecated in favor of `disableCookiesUsage`. When both are provided, `disableCookiesUsage` takes precedence.<br>(Since v2.6.0) And if `cookieCfg.enabled` is also defined, it will take precedence over these values. Cookie usage can be re-enabled after initialization via `core.getCookieMgr().setEnabled(true)`. | Alias for [`cookieCfg.enabled`](#icookiemgrconfig)<br>false | +| cookieDomain | Custom cookie domain. This option is helpful if you want to share Application Insights cookies across subdomains.<br>(Since v2.6.0) If `cookieCfg.domain` is defined, it will take precedence over this value. | Alias for [`cookieCfg.domain`](#icookiemgrconfig)<br>null | +| cookiePath | Custom cookie path. This option is helpful if you want to share Application Insights cookies behind an application gateway.<br>If `cookieCfg.path` is defined, it will take precedence over this value. | Alias for [`cookieCfg.path`](#icookiemgrconfig)<br>(Since 2.6.0)<br/>null | +| isRetryDisabled | If false, retry on 206 (partial success), 408 (timeout), 429 (too many requests), 500 (internal server error), 503 (service unavailable), and 0 (offline, only if detected). | boolean<br/>false | | isStorageUseDisabled | If true, the SDK won't store or read any data from local and session storage. | boolean<br/> false |-| isBeaconApiDisabled | If false, the SDK will send all telemetry using the [Beacon API](https://www.w3.org/TR/beacon) | boolean<br/>true | -| onunloadDisableBeacon | When tab is closed, the SDK will send all remaining telemetry using the [Beacon API](https://www.w3.org/TR/beacon) | boolean<br/> false | -| sdkExtension | Sets the sdk extension name. Only alphabetic characters are allowed. The extension name is added as a prefix to the 'ai.internal.sdkVersion' tag (for example, 'ext_javascript:2.0.0'). | string<br/> null | -| isBrowserLink​TrackingEnabled | If true, the SDK will track all [Browser Link](/aspnet/core/client-side/using-browserlink) requests. | boolean<br/>false | -| appId | AppId is used for the correlation between AJAX dependencies happening on the client-side with the server-side requests. When Beacon API is enabled, it canΓÇÖt be used automatically, but can be set manually in the configuration. |string<br/> null | -| enable​CorsCorrelation | If true, the SDK will add two headers ('Request-Id' and 'Request-Context') to all CORS requests to correlate outgoing AJAX dependencies with corresponding requests on the server side. | boolean<br/>false | +| isBeaconApiDisabled | If false, the SDK will send all telemetry by using the [Beacon API](https://www.w3.org/TR/beacon). 
| boolean<br/>true | +| onunloadDisableBeacon | When tab is closed, the SDK will send all remaining telemetry by using the [Beacon API](https://www.w3.org/TR/beacon). | boolean<br/> false | +| sdkExtension | Sets the SDK extension name. Only alphabetic characters are allowed. The extension name is added as a prefix to the `ai.internal.sdkVersion` tag (for example, `ext_javascript:2.0.0`). | string<br/> null | +| isBrowserLink​TrackingEnabled | If true, the SDK will track all [browser link](/aspnet/core/client-side/using-browserlink) requests. | boolean<br/>false | +| appId | AppId is used for the correlation between AJAX dependencies happening on the client side with the server-side requests. When the Beacon API is enabled, it canΓÇÖt be used automatically but can be set manually in the configuration. |string<br/> null | +| enable​CorsCorrelation | If true, the SDK will add two headers (`Request-Id` and `Request-Context`) to all CORS requests to correlate outgoing AJAX dependencies with corresponding requests on the server side. | boolean<br/>false | | namePrefix | An optional value that will be used as name postfix for localStorage and cookie name. | string<br/>undefined |-| enable​AutoRoute​Tracking | Automatically track route changes in Single Page Applications (SPA). If true, each route change will send a new Pageview to Application Insights. Hash route changes (`example.com/foo#bar`) are also recorded as new page views.| boolean<br/>false | -| enableRequest​HeaderTracking | If true, AJAX & Fetch request headers is tracked. | boolean<br/> false | -| enableResponse​HeaderTracking | If true, AJAX & Fetch request's response headers is tracked. | boolean<br/> false | -| distributedTracingMode | Sets the distributed tracing mode. If AI_AND_W3C mode or W3C mode is set, W3C trace context headers (traceparent/tracestate) will be generated and included in all outgoing requests. AI_AND_W3C is provided for back-compatibility with any legacy Application Insights instrumented services. See example [here](./correlation.md#enable-w3c-distributed-tracing-support-for-web-apps).| `DistributedTracingModes`or<br/>numeric<br/>(Since v2.6.0) `DistributedTracingModes.AI_AND_W3C`<br />(v2.5.11 or earlier) `DistributedTracingModes.AI` | +| enable​AutoRoute​Tracking | Automatically track route changes in single-page applications. If true, each route change will send a new page view to Application Insights. Hash route changes (`example.com/foo#bar`) are also recorded as new page views.| boolean<br/>false | +| enableRequest​HeaderTracking | If true, AJAX and Fetch request headers are tracked. | boolean<br/> false | +| enableResponse​HeaderTracking | If true, AJAX and Fetch request response headers are tracked. | boolean<br/> false | +| distributedTracingMode | Sets the distributed tracing mode. If AI_AND_W3C mode or W3C mode is set, W3C trace context headers (traceparent/tracestate) will be generated and included in all outgoing requests. AI_AND_W3C is provided for backward compatibility with any legacy Application Insights instrumented services. See examples at [this website](./correlation.md#enable-w3c-distributed-tracing-support-for-web-apps).| `DistributedTracingModes`or<br/>numeric<br/>(Since v2.6.0) `DistributedTracingModes.AI_AND_W3C`<br />(v2.5.11 or earlier) `DistributedTracingModes.AI` | | enable​AjaxErrorStatusText | If true, include response error data text in dependency event on failed AJAX requests. 
| boolean<br/> false | | enable​AjaxPerfTracking |Flag to enable looking up and including more browser window.performance timings in the reported `ajax` (XHR and fetch) reported metrics. | boolean<br/> false |-| maxAjaxPerf​LookupAttempts | The maximum number of times to look for the window.performance timings (if available). This option is sometimes required as not all browsers populate the window.performance before reporting the end of the XHR request and for fetch requests this is added after its complete.| numeric<br/> 3 | -| ajaxPerfLookupDelay | The amount of time to wait before reattempting to find the window.performance timings for an `ajax` request, time is in milliseconds and is passed directly to setTimeout(). | numeric<br/> 25 ms | -| enableUnhandled​PromiseRejection​Tracking | If true, unhandled promise rejections will be autocollected and reported as a JavaScript error. When disableExceptionTracking is true (don't track exceptions), the config value will be ignored and unhandled promise rejections won't be reported. | boolean<br/> false | -| enablePerfMgr | When enabled (true) this will create local perfEvents for code that has been instrumented to emit perfEvents (via the doPerf() helper). This option can be used to identify performance issues within the SDK based on your usage or optionally within your own instrumented code. [More details are available by the basic documentation](https://github.com/microsoft/ApplicationInsights-JS/blob/master/docs/PerformanceMonitoring.md). Since v2.5.7 | boolean<br/>false | -| perfEvtsSendAll | When _enablePerfMgr_ is enabled and the [IPerfManager](https://github.com/microsoft/ApplicationInsights-JS/blob/master/shared/AppInsightsCore/src/JavaScriptSDK.Interfaces/IPerfManager.ts) fires a [INotificationManager](https://github.com/microsoft/ApplicationInsights-JS/blob/master/shared/AppInsightsCore/src/JavaScriptSDK.Interfaces/INotificationManager.ts).perfEvent() this flag determines whether an event is fired (and sent to all listeners) for all events (true) or only for 'parent' events (false <default>).<br />A parent [IPerfEvent](https://github.com/microsoft/ApplicationInsights-JS/blob/master/shared/AppInsightsCore/src/JavaScriptSDK.Interfaces/IPerfEvent.ts) is an event where no other IPerfEvent is still running at the point of this event being created and its _parent_ property isn't null or undefined. Since v2.5.7 | boolean<br />false | -| idLength | The default length used to generate new random session and user ID values. Defaults to 22, previous default value was 5 (v2.5.8 or less), if you need to keep the previous maximum length you should set this value to 5. | numeric<br />22 | +| maxAjaxPerf​LookupAttempts | The maximum number of times to look for the window.performance timings, if available. This option is sometimes required because not all browsers populate the window.performance before reporting the end of the XHR request. For fetch requests, this is added after it's complete.| numeric<br/> 3 | +| ajaxPerfLookupDelay | The amount of time to wait before reattempting to find the window.performance timings for an `ajax` request. Time is in milliseconds and is passed directly to setTimeout(). | numeric<br/> 25 ms | +| enableUnhandled​PromiseRejection​Tracking | If true, unhandled promise rejections will be autocollected and reported as a JavaScript error. When `disableExceptionTracking` is true (don't track exceptions), the config value will be ignored, and unhandled promise rejections won't be reported. 
| boolean<br/> false | +| enablePerfMgr | When enabled (true), this will create local perfEvents for code that has been instrumented to emit perfEvents (via the doPerf() helper). This option can be used to identify performance issues within the SDK based on your usage or optionally within your own instrumented code. [More information is available in the basic documentation](https://github.com/microsoft/ApplicationInsights-JS/blob/master/docs/PerformanceMonitoring.md). Since v2.5.7 | boolean<br/>false | +| perfEvtsSendAll | When _enablePerfMgr_ is enabled and the [IPerfManager](https://github.com/microsoft/ApplicationInsights-JS/blob/master/shared/AppInsightsCore/src/JavaScriptSDK.Interfaces/IPerfManager.ts) fires a [INotificationManager](https://github.com/microsoft/ApplicationInsights-JS/blob/master/shared/AppInsightsCore/src/JavaScriptSDK.Interfaces/INotificationManager.ts).perfEvent(), this flag determines whether an event is fired (and sent to all listeners) for all events (true) or only for parent events (false <default>).<br />A parent [IPerfEvent](https://github.com/microsoft/ApplicationInsights-JS/blob/master/shared/AppInsightsCore/src/JavaScriptSDK.Interfaces/IPerfEvent.ts) is an event where no other IPerfEvent is still running at the point of this event being created, and its _parent_ property isn't null or undefined. Since v2.5.7 | boolean<br />false | +| idLength | The default length used to generate new random session and user ID values. Defaults to 22. The previous default value was 5 (v2.5.8 or less). If you need to keep the previous maximum length, you should set this value to 5. | numeric<br />22 | -## Cookie Handling +## Cookie handling From version 2.6.0, cookie management is now available directly from the instance and can be disabled and re-enabled after initialization. If disabled during initialization via the `disableCookiesUsage` or `cookieCfg.enabled` configurations, you can now re-enable via the [ICookieMgr](https://github.com/microsoft/ApplicationInsights-JS/blob/master/shared/AppInsightsCore/src/JavaScriptSDK.Interfaces/ICookieMgr.ts) `setEnabled` function. -The instance based cookie management also replaces the previous CoreUtils global functions of `disableCookies()`, `setCookie(...)`, `getCookie(...)` and `deleteCookie(...)`. And to benefit from the tree-shaking enhancements also introduced as part of version 2.6.0 you should no longer uses the global functions. +The instance-based cookie management also replaces the previous CoreUtils global functions of `disableCookies()`, `setCookie(...)`, `getCookie(...)` and `deleteCookie(...)`. To benefit from the tree-shaking enhancements also introduced as part of version 2.6.0, you should no longer use the global functions. ### ICookieMgrConfig -Cookie Configuration for instance-based cookie management added in version 2.6.0. +Cookie configuration for instance-based cookie management added in version 2.6.0. -| Name | Description | Type and Default | +| Name | Description | Type and default | ||-||-| enabled | A boolean that indicates whether the use of cookies by the SDK is enabled by the current instance. If false, the instance of the SDK initialized by this configuration won't store or read any data from cookies | boolean<br/> true | -| domain | Custom cookie domain, which is helpful if you want to share Application Insights cookies across subdomains. If not provided uses the value from root `cookieDomain` value. 
| string<br/>null | -| path | Specifies the path to use for the cookie, if not provided it will use any value from the root `cookiePath` value. | string <br/> / | -| getCookie | Function to fetch the named cookie value, if not provided it will use the internal cookie parsing / caching. | `(name: string) => string` <br/> null | -| setCookie | Function to set the named cookie with the specified value, only called when adding or updating a cookie. | `(name: string, value: string) => void` <br/> null | -| delCookie | Function to delete the named cookie with the specified value, separated from setCookie to avoid the need to parse the value to determine whether the cookie is being added or removed. If not provided it will use the internal cookie parsing / caching. | `(name: string, value: string) => void` <br/> null | +| enabled | A boolean that indicates whether the use of cookies by the SDK is enabled by the current instance. If false, the instance of the SDK initialized by this configuration won't store or read any data from cookies. | boolean<br/> true | +| domain | Custom cookie domain, which is helpful if you want to share Application Insights cookies across subdomains. If not provided, uses the value from root `cookieDomain` value. | string<br/>null | +| path | Specifies the path to use for the cookie. If not provided, it will use any value from the root `cookiePath` value. | string <br/> / | +| getCookie | Function to fetch the named cookie value. If not provided, it will use the internal cookie parsing/caching. | `(name: string) => string` <br/> null | +| setCookie | Function to set the named cookie with the specified value. Only called when adding or updating a cookie. | `(name: string, value: string) => void` <br/> null | +| delCookie | Function to delete the named cookie with the specified value, separated from setCookie to avoid the need to parse the value to determine whether the cookie is being added or removed. If not provided, it will use the internal cookie parsing/caching. | `(name: string, value: string) => void` <br/> null | -### Simplified Usage of new instance Cookie Manager +### Simplified usage of new instance Cookie Manager - appInsights.[getCookieMgr()](https://github.com/microsoft/ApplicationInsights-JS/blob/master/shared/AppInsightsCore/src/JavaScriptSDK.Interfaces/ICookieMgr.ts).setEnabled(true/false); - appInsights.[getCookieMgr()](https://github.com/microsoft/ApplicationInsights-JS/blob/master/shared/AppInsightsCore/src/JavaScriptSDK.Interfaces/ICookieMgr.ts).set("MyCookie", "the%20encoded%20value"); Cookie Configuration for instance-based cookie management added in version 2.6.0 ## Enable time-on-page tracking -By setting `autoTrackPageVisitTime: true`, the time in milliseconds a user spends on each page is tracked. On each new PageView, the duration the user spent on the *previous* page is sent as a [custom metric](../essentials/metrics-custom-overview.md) named `PageVisitTime`. This custom metric is viewable in the [Metrics Explorer](../essentials/metrics-getting-started.md) as a "log-based metric". +By setting `autoTrackPageVisitTime: true`, the time in milliseconds a user spends on each page is tracked. On each new page view, the duration the user spent on the *previous* page is sent as a [custom metric](../essentials/metrics-custom-overview.md) named `PageVisitTime`. This custom metric is viewable in the [Metrics Explorer](../essentials/metrics-getting-started.md) as a log-based metric. 
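As a minimal sketch of enabling this option with the npm-based setup (the connection string shown is only a placeholder), pass `autoTrackPageVisitTime` in the configuration at initialization:

```js
import { ApplicationInsights } from '@microsoft/applicationinsights-web';

const appInsights = new ApplicationInsights({
  config: {
    connectionString: 'InstrumentationKey=00000000-0000-0000-0000-000000000000', // placeholder
    autoTrackPageVisitTime: true // emit the PageVisitTime custom metric for the previous page
  }
});
appInsights.loadAppInsights();
appInsights.trackPageView();
```

The duration for a given page is sent when the next page view is recorded, which also starts the timer for the new page.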
-## Enable Distributed Tracing +## Enable distributed tracing Correlation generates and sends data that enables distributed tracing and powers the [application map](../app/app-map.md), [end-to-end transaction view](../app/app-map.md#go-to-details), and other diagnostic tools. -In JavaScript correlation is turned off by default in order to minimize the telemetry we send by default. The following examples show standard configuration options for enabling correlation. +In JavaScript, correlation is turned off by default to minimize the telemetry we send by default. The following examples show standard configuration options for enabling correlation. -The following sample code shows the configurations required to enable correlation: +The following sample code shows the configurations required to enable correlation. # [Snippet](#tab/snippet) const appInsights = new ApplicationInsights({ config: { // Application Insights > [!NOTE]-> There are two distributed tracing modes/protocols - AI (Classic) and [W3C TraceContext](https://www.w3.org/TR/trace-context/) (New). In version 2.6.0 and later, they are _both_ enabled by default. For older versions, users need to [explicitly opt-in to WC3 mode](../app/correlation.md#enable-w3c-distributed-tracing-support-for-web-apps). +> There are two distributed tracing modes/protocols: AI (Classic) and [W3C TraceContext](https://www.w3.org/TR/trace-context/) (New). In version 2.6.0 and later, they are _both_ enabled by default. For older versions, users need to [explicitly opt in to WC3 mode](../app/correlation.md#enable-w3c-distributed-tracing-support-for-web-apps). ### Route tracking -By default, this SDK will **not** handle state-based route changing that occurs in single page applications. To enable automatic route change tracking for your single page application, you can add `enableAutoRouteTracking: true` to your setup configuration. +By default, this SDK will *not* handle state-based route changing that occurs in single page applications. To enable automatic route change tracking for your single page application, you can add `enableAutoRouteTracking: true` to your setup configuration. -### Single Page Applications +### Single-page applications -For Single Page Applications, reference plugin documentation for plugin specific guidance. +For single-page applications, reference plug-in documentation for guidance specific to plug-ins. -| Plugins | +| Plug-ins | || | [React](javascript-react-plugin.md#enable-correlation)| | [React Native](javascript-react-native-plugin.md#enable-correlation)| | [Angular](javascript-angular-plugin.md#enable-correlation)| | [Click Analytics Auto-collection](javascript-click-analytics-plugin.md#enable-correlation)| -### Advanced Correlation +### Advanced correlation -When a page is first loading and the SDK hasn't fully initialized, we're unable to generate the Operation ID for the first request. As a result, distributed tracing is incomplete until the SDK fully initializes. -To remedy this problem, you can include dynamic JavaScript on the returned HTML page. The SDK will use a callback function during initialization to retroactively pull the Operation ID from the `serverside` and populate the `clientside` with it. +When a page is first loading and the SDK hasn't fully initialized, we're unable to generate the operation ID for the first request. As a result, distributed tracing is incomplete until the SDK fully initializes. +To remedy this problem, you can include dynamic JavaScript on the returned HTML page. 
The SDK will use a callback function during initialization to retroactively pull the operation ID from the `serverside` and populate the `clientside` with it. # [Snippet](#tab/snippet) -Here's a sample of how to create a dynamic JS using Razor: +Here's a sample of how to create a dynamic JavaScript using Razor. ```C# <script> Here's a sample of how to create a dynamic JS using Razor: }}); </script> ```+ # [npm](#tab/npm) ```js appInsights.context.telemetryContext.parentID = serverId; appInsights.loadAppInsights(); ``` -When using an npm based configuration, a location must be determined to store the Operation ID to enable access for the SDK initialization bundle to `appInsights.context.telemetryContext.parentID` so it can populate it before the first page view event is sent. +When you use an npm-based configuration, a location must be determined to store the operation ID to enable access for the SDK initialization bundle to `appInsights.context.telemetryContext.parentID` so it can populate it before the first page view event is sent. > [!CAUTION]->The application UX is not yet optimized to show these "first hop" advanced distributed tracing scenarios. However, the data will be available in the requests table for query and diagnostics. +>The application UX is not yet optimized to show these "first hop" advanced distributed tracing scenarios. The data will be available in the requests table for query and diagnostics. ## Extensions When using an npm based configuration, a location must be determined to store th ## Explore browser/client-side data -Browser/client-side data can be viewed by going to **Metrics** and adding individual metrics you're interested in: +Browser/client-side data can be viewed by going to **Metrics** and adding individual metrics you're interested in. - + -You can also view your data from the JavaScript SDK via the Browser experience in the portal. +You can also view your data from the JavaScript SDK via the browser experience in the portal. -Select **Browser** and then choose **Failures** or **Performance**. +Select **Browser**, and then select **Failures** or **Performance**. - + ### Performance - + ### Dependencies - + ### Analytics -To query your telemetry collected by the JavaScript SDK, select the **View in Logs (Analytics)** button. By adding a `where` statement of `client_Type == "Browser"`, you'll only see data from the JavaScript SDK and any server-side telemetry collected by other SDKs will be excluded. - +To query your telemetry collected by the JavaScript SDK, select the **View in Logs (Analytics)** button. By adding a `where` statement of `client_Type == "Browser"`, you'll only see data from the JavaScript SDK. Any server-side telemetry collected by other SDKs will be excluded. + ```kusto // average pageView duration by name let timeGrain=5m; dataset | render timechart ``` -### Source Map Support +### Source map support The minified callstack of your exception telemetry can be unminified in the Azure portal. All existing integrations on the Exception Details panel will work with the newly unminified callstack. -#### Link to Blob storage account +#### Link to Blob Storage account -You can link your Application Insights resource to your own Azure Blob Storage container to automatically unminify call stacks. To get started, see [automatic source map support](./source-map-support.md). +You can link your Application Insights resource to your own Azure Blob Storage container to automatically unminify call stacks. 
To get started, see [Automatic source map support](./source-map-support.md). ### Drag and drop -1. Select an Exception Telemetry item in the Azure portal to view its "End-to-end transaction details" -2. Identify which source maps correspond to this call stack. The source map must match a stack frame's source file, but suffixed with `.map` -3. Drag and drop the source maps onto the call stack in the Azure portal - +1. Select an Exception Telemetry item in the Azure portal to view its "end-to-end transaction details." +1. Identify which source maps correspond to this call stack. The source map must match a stack frame's source file but be suffixed with `.map`. +1. Drag the source maps onto the call stack in the Azure portal. ++  -### Application Insights Web Basic +### Application Insights web basic ++For a lightweight experience, you can instead install the basic version of Application Insights: -For a lightweight experience, you can instead install the basic version of Application Insights ``` npm i --save @microsoft/applicationinsights-web-basic ```-This version comes with the bare minimum number of features and functionalities and relies on you to build it up as you see fit. For example, it performs no autocollection (uncaught exceptions, AJAX, etc.). The APIs to send certain telemetry types, like `trackTrace`, `trackException`, etc., aren't included in this version, so you'll need to provide your own wrapper. The only API that is available is `track`. A [sample](https://github.com/Azure-Samples/applicationinsights-web-sample1/blob/master/testlightsku.html) is located here. ++This version comes with the bare minimum number of features and functionalities and relies on you to build it up as you see fit. For example, it performs no autocollection like uncaught exceptions and AJAX. The APIs to send certain telemetry types, like `trackTrace` and `trackException`, aren't included in this version. For this reason, you'll need to provide your own wrapper. The only API that's available is `track`. A [sample](https://github.com/Azure-Samples/applicationinsights-web-sample1/blob/master/testlightsku.html) is located here. ## Examples -For runnable examples, see [Application Insights JavaScript SDK Samples](https://github.com/Azure-Samples?q=applicationinsights-js-demo). +For runnable examples, see [Application Insights JavaScript SDK samples](https://github.com/Azure-Samples?q=applicationinsights-js-demo). -## Upgrading from the old version of Application Insights +## Upgrade from the old version of Application Insights Breaking changes in the SDK V2 version:+ - To allow for better API signatures, some of the API calls, such as trackPageView and trackException, have been updated. Running in Internet Explorer 8 and earlier versions of the browser isn't supported. - The telemetry envelope has field name and structure changes due to data schema updates. - Moved `context.operation` to `context.telemetryTrace`. Some fields were also changed (`operation.id` --> `telemetryTrace.traceID`).- - To manually refresh the current pageview ID (for example, in SPA apps), use `appInsights.properties.context.telemetryTrace.traceID = Microsoft.ApplicationInsights.Telemetry.Util.generateW3CId()`. - > [!NOTE] - > To keep the trace ID unique, where you previously used `Util.newId()`, now use `Util.generateW3CId()`. Both ultimately end up being the operation ID. 
+ + To manually refresh the current pageview ID, for example, in single-page applications, use `appInsights.properties.context.telemetryTrace.traceID = Microsoft.ApplicationInsights.Telemetry.Util.generateW3CId()`. ++ > [!NOTE] + > To keep the trace ID unique, where you previously used `Util.newId()`, now use `Util.generateW3CId()`. Both ultimately end up being the operation ID. If you're using the current application insights PRODUCTION SDK (1.0.20) and want to see if the new SDK works in runtime, update the URL depending on your current SDK loading scenario. If you're using the current application insights PRODUCTION SDK (1.0.20) and wan }); ``` -Test in internal environment to verify monitoring telemetry is working as expected. If all works, update your API signatures appropriately to SDK V2 version and deploy in your production environments. +Test in an internal environment to verify the monitoring telemetry is working as expected. If all works, update your API signatures appropriately to SDK v2 and deploy in your production environments. ## SDK performance/overhead -At just 36 KB gzipped, and taking only ~15 ms to initialize, Application Insights adds a negligible amount of loadtime to your website. Minimal components of the library are quickly loaded when using this snippet. In the meantime, the full script is downloaded in the background. +At just 36 KB gzipped, and taking only ~15 ms to initialize, Application Insights adds a negligible amount of load time to your website. Minimal components of the library are quickly loaded when you use this snippet. In the meantime, the full script is downloaded in the background. -While the script is downloading from the CDN, all tracking of your page is queued. Once the downloaded script finishes asynchronously initializing, all events that were queued are tracked. As a result, you won't lose any telemetry during the entire life cycle of your page. This setup process provides your page with a seamless analytics system, invisible to your users. +While the script is downloading from the CDN, all tracking of your page is queued. After the downloaded script finishes asynchronously initializing, all events that were queued are tracked. As a result, you won't lose any telemetry during the entire life cycle of your page. This setup process provides your page with a seamless analytics system that's invisible to your users. > Summary: > -  While the script is downloading from the CDN, all tracking of your page is queue | | | | | Chrome Latest Γ£ö | Firefox Latest Γ£ö | IE 9+ & Microsoft Edge Γ£ö<br>IE 8- Compatible | Opera Latest Γ£ö | Safari Latest Γ£ö | -## ES3/IE8 Compatibility +## ES3/Internet Explorer 8 compatibility -As such we need to ensure that this SDK continues to "work" and doesn't break the JS execution when loaded by an older browser. It would be ideal to not support older browsers, but numerous large customers canΓÇÖt control which browser their end users choose to use. +We need to ensure that this SDK continues to "work" and doesn't break the JavaScript execution when it's loaded by an older browser. It would be ideal to not support older browsers, but numerous large customers can't control which browser their users choose to use. -This statement does NOT mean that we'll only support the lowest common set of features. We need to maintain ES3 code compatibility and when adding new features, they'll need to be added in a manner that wouldn't break ES3 JavaScript parsing and added as an optional feature. 
+This statement does *not* mean that we'll only support the lowest common set of features. We need to maintain ES3 code compatibility. New features will need to be added in a manner that wouldn't break ES3 JavaScript parsing and added as an optional feature. -[See GitHub for full details on IE8 support](https://github.com/Microsoft/ApplicationInsights-JS#es3ie8-compatibility) +See GitHub for full details on [Internet Explorer 8 support](https://github.com/Microsoft/ApplicationInsights-JS#es3ie8-compatibility). ## Open-source SDK -The Application Insights JavaScript SDK is open-source to view the source code or to contribute to the project visit the [official GitHub repository](https://github.com/Microsoft/ApplicationInsights-JS). +The Application Insights JavaScript SDK is open source. To view the source code or to contribute to the project, see the [official GitHub repository](https://github.com/Microsoft/ApplicationInsights-JS). For the latest updates and bug fixes, [consult the release notes](./release-notes.md). ## Troubleshooting +This section helps you troubleshoot common issues. + ### I'm getting an error message of Failed to get Request-Context correlation header as it may be not included in the response or not accessible -The `correlationHeaderExcludedDomains` configuration property is an exclude list that disables correlation headers for specific domains. This option is useful when including those headers would cause the request to fail or not be sent due to third-party server configuration. This property supports wildcards. -An example would be `*.queue.core.windows.net`, as seen in the code sample above. -Adding the application domain to this property should be avoided as it stops the SDK from including the required distributed tracing `Request-Id`, `Request-Context` and `traceparent` headers as part of the request. +The `correlationHeaderExcludedDomains` configuration property is an exclude list that disables correlation headers for specific domains. This option is useful when including those headers would cause the request to fail or not be sent because of third-party server configuration. This property supports wildcards. +An example would be `*.queue.core.windows.net`, as seen in the preceding code sample. +Adding the application domain to this property should be avoided because it stops the SDK from including the required distributed tracing `Request-Id`, `Request-Context`, and `traceparent` headers as part of the request. ### I'm not sure how to update my third-party server configuration -The server-side needs to be able to accept connections with those headers present. Depending on the `Access-Control-Allow-Headers` configuration on the server-side it's often necessary to extend the server-side list by manually adding `Request-Id`, `Request-Context` and `traceparent` (W3C distributed header). +The server side needs to be able to accept connections with those headers present. Depending on the `Access-Control-Allow-Headers` configuration on the server side, it's often necessary to extend the server-side list by manually adding `Request-Id`, `Request-Context`, and `traceparent` (W3C distributed header). Access-Control-Allow-Headers: `Request-Id`, `traceparent`, `Request-Context`, `<your header>` ### I'm receiving duplicate telemetry data from the Application Insights JavaScript SDK -If the SDK reports correlation recursively, enable the configuration setting of `excludeRequestFromAutoTrackingPatterns` to exclude the duplicate data. 
This scenario can occur when using connection strings. The syntax for the configuration setting is `excludeRequestFromAutoTrackingPatterns: [<endpointUrl>]`. +If the SDK reports correlation recursively, enable the configuration setting of `excludeRequestFromAutoTrackingPatterns` to exclude the duplicate data. This scenario can occur when you use connection strings. The syntax for the configuration setting is `excludeRequestFromAutoTrackingPatterns: [<endpointUrl>]`. ## <a name="next"></a> Next steps+ * [Source map for JavaScript](source-map-support.md) * [Track usage](usage-overview.md) * [Custom events and metrics](api-custom-events-metrics.md) |
azure-monitor | Monitor Web App Availability | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/monitor-web-app-availability.md | -The name *URL ping test* is a bit of a misnomer. These tests don't use Internet Control Message Protocol (ICMP) to check your site's availability. Instead, they use more advanced HTTP request functionality to validate whether an endpoint is responding. They measure the performance associated with that response. They also add the ability to set custom success criteria, coupled with more advanced features like parsing dependent requests and allowing for retries. +The name *URL ping test* is a bit of a misnomer. These tests don't use the Internet Control Message Protocol (ICMP) to check your site's availability. Instead, they use more advanced HTTP request functionality to validate whether an endpoint is responding. They measure the performance associated with that response. They also add the ability to set custom success criteria, coupled with more advanced features like parsing dependent requests and allowing for retries. -To create an availability test, you need use an existing Application Insights resource or [create an Application Insights resource](create-new-resource.md). +To create an availability test, you need to use an existing Application Insights resource or [create an Application Insights resource](create-new-resource.md). > [!NOTE] > URL ping tests are categorized as classic tests. You can find them under **Add Classic Test** on the **Availability** pane. For more advanced features, see [Standard tests](availability-standard-tests.md).- + ## Create a test To create your first availability request:-1. In your Application Insights resource, open the **Availability** pane and selectΓÇ» **Add Classic Test**. ++1. In your Application Insights resource, open the **Availability** pane and selectΓÇ»**Add Classic Test**. :::image type="content" source="./media/monitor-web-app-availability/create-test.png" alt-text="Screenshot that shows the Availability pane and the button for adding a classic test." lightbox ="./media/monitor-web-app-availability/create-test.png":::+ 1. Name your test and select **URL ping** for **SKU**. 1. Enter the URL that you want to test.-1. Adjust the settings (described in the following table) to your needs and select **Create**. +1. Adjust the settings to your needs by using the following table. Select **Create**. - |Setting| Explanation | + |Setting| Description | |-|-|- |**URL** | The URL can be any webpage that you want to test, but it must be visible from the public internet. The URL can include a query string. For example, you can exercise your database a little. If the URL resolves to a redirect, you can follow it up to 10 redirects.| - |**Parse dependent requests**| The test requests images, scripts, style files, and other files that are part of the webpage under test. The recorded response time includes the time taken to get these files. The test fails if any of these resources can't be successfully downloaded within the timeout for the whole test. If the option is not enabled, the test only requests the file at the URL that you specified. Enabling this option results in a stricter check. The test might fail for cases that aren't noticeable from manually browsing through the site. - |**Enable retries**|When the test fails, it's retried after a short interval. A failure is reported only if three successive attempts fail. Subsequent tests are then performed at the usual test frequency. 
Retry is temporarily suspended until the next success. This rule is applied independently at each test location. *We recommend this option*. On average, about 80 percent of failures disappear on retry.| - |**Test frequency**| This setting determines how often the test is run from each test location. With a default frequency of five minutes and five test locations, your site is tested every minute on average.| - |**Test locations**| The values for this setting are the places from which servers send web requests to your URL. *We recommend a minimum of five test locations*, to ensure that you can distinguish problems in your website from network issues. You can select up to 16 locations. + |URL |The URL can be any webpage that you want to test, but it must be visible from the public internet. The URL can include a query string. For example, you can exercise your database a little. If the URL resolves to a redirect, you can follow it up to 10 redirects.| + |Parse dependent requests| The test requests images, scripts, style files, and other files that are part of the webpage under test. The recorded response time includes the time taken to get these files. The test fails if any of these resources can't be successfully downloaded within the timeout for the whole test. If the option isn't enabled, the test only requests the file at the URL that you specified. Enabling this option results in a stricter check. The test might fail for cases that aren't noticeable from manually browsing through the site. + |Enable retries|When the test fails, it's retried after a short interval. A failure is reported only if three successive attempts fail. Subsequent tests are then performed at the usual test frequency. Retry is temporarily suspended until the next success. This rule is applied independently at each test location. *We recommend this option*. On average, about 80 percent of failures disappear on retry.| + |Test frequency| This setting determines how often the test is run from each test location. With a default frequency of five minutes and five test locations, your site is tested every minute on average.| + |Test locations| The values for this setting are the places from which servers send web requests to your URL. *We recommend a minimum of 5 test locations* to ensure that you can distinguish problems in your website from network issues. You can select up to 16 locations. If your URL isn't visible from the public internet, you can choose to selectively open your firewall to allow only the test transactions through. To learn more about the firewall exceptions for availability test agents, consult the [IP address guide](./ip-addresses.md#availability-tests). If your URL isn't visible from the public internet, you can choose to selectivel ## Success criteria -|Setting| Explanation | +|Setting| Description | |-|-|-| **Test timeout** |Decrease this value to be alerted about slow responses. The test is counted as a failure if the responses from your site have not been received within this period. If you selected **Parse dependent requests**, then all the images, style files, scripts, and other dependent resources must have been received within this period.| -| **HTTP response** | The returned status code that's counted as a success. The code that indicates that a normal webpage has been returned is 200.| -| **Content match** | We test that an exact case-sensitive match for a string occurs in every response. It must be a plain string, without wildcards (like "Welcome!"). 
Don't forget that if your page content changes, you might have to update it. *Content match supports only English characters.* | +| Test timeout |Decrease this value to be alerted about slow responses. The test is counted as a failure if the responses from your site haven't been received within this period. If you selected **Parse dependent requests**, all the images, style files, scripts, and other dependent resources must have been received within this period.| +| HTTP response | The returned status code that's counted as a success. The code that indicates that a normal webpage has been returned is 200.| +| Content match | We test that an exact case-sensitive match for a string occurs in every response. It must be a plain string, without wildcards (like "Welcome!"). Don't forget that if your page content changes, you might have to update it. *Content match supports only English characters.* | ## Alerts -|Setting| Explanation | +|Setting| Description | |-|-|-|**Near-realtime (Preview)** | We recommend using alerts that work in near real time. You configure this type of alert after you create your availability test. | -|**Alert location threshold**| The optimal relationship between alert location threshold and the number of test locations is *alert location threshold = number of test locations - 2*, with a minimum of five test locations.| +|Near real time (preview) | We recommend using alerts that work in near real time. You configure this type of alert after you create your availability test. | +|Alert location threshold| The optimal relationship between alert location threshold and the number of test locations is *alert location threshold = number of test locations - 2*, with a minimum of five test locations.| ## Location population tags You might want to disable availability tests or the alert rules associated with Select a red dot. From an availability test result, you can see the transaction details across all components. You can then: In addition to the raw results, you can view two key availability metrics in [Me * [Use PowerShell scripts to set up an availability test](./powershell.md#add-an-availability-test) automatically. * Set up a [webhook](../alerts/alerts-webhooks.md) that's called when an alert is raised. - ## Next steps * [Availability alerts](availability-alerts.md) |
azure-monitor | Autoscale Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/autoscale/autoscale-overview.md | Azure autoscale supports many resource types. For more information about support > [!NOTE] > [Availability sets](/archive/blogs/kaevans/autoscaling-azurevirtual-machines) are an older scaling feature for virtual machines with limited support. We recommend migrating to [virtual machine scale sets](/azure/virtual-machine-scale-sets/overview) for faster and more reliable autoscale support. - ## What is autoscale-Autoscale is a service that allows you to automatically add and remove resources according to the load on your application. ++Autoscale is a service that allows you to automatically add and remove resources according to the load on your application. When your application experiences higher load, autoscale adds resources to handle the increased load. When load is low, autoscale reduces the number of resources, lowering your costs. You can scale your application based on metrics like CPU usage, queue length, and available memory, or based on a schedule. Metrics and schedules are set up in rules. The rules include a minimum level of resources that you need to run your application, and a maximum level of resources that won't be exceeded. For example, scale out your application by adding VMs when the average CPU usage per VM is above 70%. Scale it back in removing VMs when CPU usage drops to 40%. -  ++When the conditions in the rules are met, one or more autoscale actions are triggered, adding or removing VMs. In addition, you can perform other actions like sending email notifications, or webhooks to trigger processes in other systems. ++## Scaling out and scaling up ++Autoscale scales in and out, which is an increase, or decrease of the number of resource instances. Scaling in and out is also called horizontal scaling. For example, for a virtual machine scale set, scaling out means adding more virtual machines. Scaling in means removing virtual machines. Horizontal scaling is flexible in a cloud situation as it allows you to run a large number of VMs to handle load. ++In contrast, scaling up and down, or vertical scaling, keeps the number of resources constant, but gives those resources more capacity in terms of memory, CPU speed, disk space and network. Vertical scaling is limited by the availability of larger hardware, which eventually reaches an upper limit. Hardware size availability varies in Azure by region. Vertical scaling may also require a restart of the virtual machine during the scaling process. + -When the conditions in the rules are met, one or more autoscale actions are triggered, adding or removing VMs. In addition, you can perform other actions like sending email notifications, or webhooks to trigger processes in other systems.. ### Predictive autoscale (preview)+ [Predictive autoscale](/azure/azure-monitor/autoscale/autoscale-predictive) uses machine learning to help manage and scale Azure virtual machine scale sets with cyclical workload patterns. It forecasts the overall CPU load on your virtual machine scale set, based on historical CPU usage patterns. 
The scale set can then be scaled out in time to meet the predicted demand.+ ## Autoscale setup+ You can set up autoscale via:-* [Azure portal](autoscale-get-started.md) -* [PowerShell](../powershell-samples.md#create-and-manage-autoscale-settings) -* [Cross-platform Command Line Interface (CLI)](../cli-samples.md#autoscale) -* [Azure Monitor REST API](/rest/api/monitor/autoscalesettings) +++ [Azure portal](autoscale-get-started.md)++ [PowerShell](../powershell-samples.md#create-and-manage-autoscale-settings)++ [Cross-platform Command Line Interface (CLI)](../cli-samples.md#autoscale)++ [Azure Monitor REST API](/rest/api/monitor/autoscalesettings) ## Architecture+ The following diagram shows the autoscale architecture.  ### Resource metrics-Resources generate metrics that are used in autoscale rules to trigger scale events. Virtual machine scale sets use telemetry data from Azure diagnostics agents to generate metrics. Telemetry for Web apps and Cloud services comes directly from the Azure Infrastructure. ++Resources generate metrics that are used in autoscale rules to trigger scale events. Virtual machine scale sets use telemetry data from Azure diagnostics agents to generate metrics. Telemetry for Web apps and Cloud services comes directly from the Azure Infrastructure. Some commonly used metrics include CPU usage, memory usage, thread counts, queue length, and disk usage. See [Autoscale Common Metrics](autoscale-common-metrics.md) for a list of available metrics. ### Custom metrics+ Use your own custom metrics that your application generates. Configure your application to send metrics to [Application Insights](/azure/azure-monitor/app/app-insights-overview) so you can use those metrics decide when to scale. ### Time-Set up schedule-based rules to trigger scale events. Use schedule-based rules when you see time patterns in your load, and want to scale before an anticipated change in load occurs. - ++Set up schedule-based rules to trigger scale events. Use schedule-based rules when you see time patterns in your load, and want to scale before an anticipated change in load occurs. ### Rules+ Rules define the conditions needed to trigger a scale event, the direction of the scaling, and the amount to scale by. Rules can be:-* Metric-based -Trigger based on a metric value, for example when CPU usage is above 50%. -* Time-based -Trigger based on a schedule, for example, every Saturday at 8am. ++ Metric-based + Trigger based on a metric value, for example when CPU usage is above 50%. ++ Time-based + Trigger based on a schedule, for example, every Saturday at 8am. You can combine multiple rules using different metrics, for example CPU usage and queue length. -* The OR operator is used when scaling out with multiple rules. -* The AND operator is used when scaling in with multiple rules. +++ The OR operator is used when scaling out with multiple rules.++ The AND operator is used when scaling in with multiple rules. ### Actions and automation+ Rules can trigger one or more actions. Actions include: -- Scale - Scale resources in or out.-- Email - Send an email to the subscription admins, co-admins, and/or any other email address.-- Webhooks - Call webhooks to trigger multiple complex actions inside or outside Azure. In Azure, you can:- + Start an [Azure Automation runbook](/azure/automation/overview). - + Call an [Azure Function](/azure/azure-functions/functions-overview). - + Trigger an [Azure Logic App](/azure/logic-apps/logic-apps-overview). 
++ Scale - Scale resources in or out.++ Email - Send an email to the subscription admins, co-admins, and/or any other email address.++ Webhooks - Call webhooks to trigger multiple complex actions inside or outside Azure. In Azure, you can:+ + Start an [Azure Automation runbook](/azure/automation/overview). + + Call an [Azure Function](/azure/azure-functions/functions-overview). + + Trigger an [Azure Logic App](/azure/logic-apps/logic-apps-overview). + ## Autoscale settings Autoscale settings contain the autoscale configuration. The settings include scale conditions that define rules, limits, schedules, and notifications. Define one or more scale conditions in the settings, and one notification setup. Autoscale uses the following terminology and structure. The UI and JSON | UI | JSON/CLI | Description | ||--|-| | Scale conditions | profiles | A collection of rules, instance limits and schedules, based on a metric or time. You can define one or more scale conditions or profiles. |-| Rules | rules | A set of time or metric-based conditions that trigger a scale action. You can define one or more rules for both scale in and scale out actions. | +| Rules | rules | A set of time or metric-based conditions that trigger a scale action. You can define one or more rules for both scale-in and scale-out actions. | | Instance limits | capacity | Each scale condition or profile defines the default, max, and min number of instances that can run under that profile. | | Schedule | recurrence | Indicates when autoscale should put this scale condition or profile into effect. You can have multiple scale conditions, which allow you to handle different and overlapping requirements. For example, you can have different scale conditions for different times of day, or days of the week. | | Notify | notification | Defines the notifications to send when an autoscale event occurs. Autoscale can notify one or more email addresses or call one or more webhooks. You can configure multiple webhooks in the JSON but only one in the UI. | The full list of configurable fields and descriptions is available in the [Autos
For example, in a virtual machine scale set, scaling out means adding more virtual machines. Scaling in means removing virtual machines. Horizontal scaling is flexible in a cloud situation as it allows you to run a large number of VMs to handle load. +In contrast, vertical scaling, keeps the same number of resources constant, but gives them more capacity in terms of memory, CPU speed, disk space and network. Adding or removing capacity in vertical scaling is known as scaling or down. Vertical scaling is limited by the availability of larger hardware, which eventually reaches an upper limit. Hardware size availability varies in Azure by region. Vertical scaling may also require a restart of the virtual machine during the scaling process. ## Supported services for autoscale+ The following services are supported by autoscale: | Service | Schema & Documentation | | | |-| Web Apps |[Scaling Web Apps](autoscale-get-started.md) | -| Cloud Services |[Autoscale a Cloud Service](../../cloud-services/cloud-services-how-to-scale-portal.md) | -| Virtual Machines: Windows scale sets |[Scaling virtual machine scale sets in Windows](../../virtual-machine-scale-sets/tutorial-autoscale-powershell.md) | -| Virtual Machines: Linux scale sets |[Scaling virtual machine scale sets in Linux](../../virtual-machine-scale-sets/tutorial-autoscale-cli.md) | -| Virtual Machines: Windows Example |[Advanced Autoscale configuration using Resource Manager templates for virtual machine scale sets](autoscale-virtual-machine-scale-sets.md) | -| Azure App Service |[Scale up an app in Azure App service](../../app-service/manage-scale-up.md)| -| API Management service|[Automatically scale an Azure API Management instance](../../api-management/api-management-howto-autoscale.md) +| Azure Virtual machines scale sets |[Overview of autoscale with Azure virtual machine scale sets](/azure/virtual-machine-scale-sets/virtual-machine-scale-sets-autoscale-overview) | +| Web apps |[Scaling Web Apps](autoscale-get-started.md) | +| Azure API Management service|[Automatically scale an Azure API Management instance](../../api-management/api-management-howto-autoscale.md) | Azure Data Explorer Clusters|[Manage Azure Data Explorer clusters scaling to accommodate changing demand](/azure/data-explorer/manage-cluster-horizontal-scaling)|-| Logic Apps |[Adding integration service environment (ISE) capacity](../../logic-apps/ise-manage-integration-service-environment.md#add-ise-capacity)| -| Spring Cloud |[Set up autoscale for microservice applications](../../spring-apps/how-to-setup-autoscale.md)| -| Service Bus |[Automatically update messaging units of an Azure Service Bus namespace](../../service-bus-messaging/automate-update-messaging-units.md)| +| Azure Stream Analytics | [Autoscale streaming units (Preview)](../../stream-analytics/stream-analytics-autoscale.md) | +| Azure Machine Learning Workspace | [Autoscale an online endpoint](../../machine-learning/how-to-autoscale-endpoints.md) | | Azure SignalR Service | [Automatically scale units of an Azure SignalR service](../../azure-signalr/signalr-howto-scale-autoscale.md) |+| Logic apps |[Adding integration service environment (ISE) capacity](../../logic-apps/ise-manage-integration-service-environment.md#add-ise-capacity)| | Media Services | [Autoscaling in Media Services](/azure/media-services/latest/release-notes#autoscaling) |-| Logic Apps - Integration Service Environment(ISE) | [Add ISE Environment](../../logic-apps/ise-manage-integration-service-environment.md#add-ise-capacity) | -| Azure App 
Service Environment | [Autoscaling and App Service Environment v1](../../app-service/environment/app-service-environment-auto-scale.md) | +| Service Bus |[Automatically update messaging units of an Azure Service Bus namespace](../../service-bus-messaging/automate-update-messaging-units.md)| +| Spring Cloud |[Set up autoscale for microservice applications](../../spring-apps/how-to-setup-autoscale.md)| | Service Fabric Managed Clusters | [Introduction to Autoscaling on Service Fabric managed clusters](../../service-fabric/how-to-managed-cluster-autoscale.md) |-| Azure Stream Analytics | [Autoscale streaming units (Preview)](../../stream-analytics/stream-analytics-autoscale.md) | -| Azure Machine Learning Workspace | [Autoscale an online endpoint](../../machine-learning/how-to-autoscale-endpoints.md) | - ## Next steps+ To learn more about autoscale, see the following resources: -* [Azure Monitor autoscale common metrics](autoscale-common-metrics.md) -* [Scale virtual machine scale sets](/azure/virtual-machine-scale-sets/tutorial-autoscale-powershell?toc=/azure/azure-monitor/toc.json) -* [Autoscale using Resource Manager templates for virtual machine scale sets](/azure/virtual-machine-scale-sets/tutorial-autoscale-powershell?toc=/azure/azure-monitor/toc.json) -* [Best practices for Azure Monitor autoscale](autoscale-best-practices.md) -* [Use autoscale actions to send email and webhook alert notifications](autoscale-webhook-email.md) -* [Autoscale REST API](/rest/api/monitor/autoscalesettings) -* [Troubleshooting virtual machine scale sets and autoscale](../../virtual-machine-scale-sets/virtual-machine-scale-sets-troubleshoot.md) -* [Troubleshooting Azure Monitor autoscale](/azure/azure-monitor/autoscale/autoscale-troubleshoot) ++ [Azure Monitor autoscale common metrics](autoscale-common-metrics.md)++ [Scale virtual machine scale sets](/azure/virtual-machine-scale-sets/tutorial-autoscale-powershell?toc=/azure/azure-monitor/toc.json)++ [Autoscale using Resource Manager templates for virtual machine scale sets](/azure/virtual-machine-scale-sets/tutorial-autoscale-powershell?toc=/azure/azure-monitor/toc.json)++ [Best practices for Azure Monitor autoscale](autoscale-best-practices.md)++ [Use autoscale actions to send email and webhook alert notifications](autoscale-webhook-email.md)++ [Autoscale REST API](/rest/api/monitor/autoscalesettings)++ [Troubleshooting virtual machine scale sets and autoscale](../../virtual-machine-scale-sets/virtual-machine-scale-sets-troubleshoot.md)++ [Troubleshooting Azure Monitor autoscale](/azure/azure-monitor/autoscale/autoscale-troubleshoot) |
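The CPU-based rules described in this autoscale overview (scale out when average CPU is above 70 percent, scale back in when it drops to 40 percent) can be expressed with the Azure CLI autoscale commands. This is a minimal sketch; the resource group and scale set names are placeholders.

```azurecli
# Create an autoscale setting for a virtual machine scale set (names are placeholders).
az monitor autoscale create \
  --resource-group myResourceGroup \
  --resource myScaleSet \
  --resource-type Microsoft.Compute/virtualMachineScaleSets \
  --name cpuAutoscale \
  --min-count 2 --max-count 10 --count 2

# Scale out by one VM when average CPU exceeds 70% over a 5-minute window.
az monitor autoscale rule create \
  --resource-group myResourceGroup \
  --autoscale-name cpuAutoscale \
  --condition "Percentage CPU > 70 avg 5m" \
  --scale out 1

# Scale back in by one VM when average CPU drops below 40%.
az monitor autoscale rule create \
  --resource-group myResourceGroup \
  --autoscale-name cpuAutoscale \
  --condition "Percentage CPU < 40 avg 5m" \
  --scale in 1
```

Running `az monitor autoscale show --resource-group myResourceGroup --name cpuAutoscale` afterward returns the same profile and rule structure summarized in the terminology table above.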
azure-monitor | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/policy-reference.md | Title: Built-in policy definitions for Azure Monitor description: Lists Azure Policy built-in policy definitions for Azure Monitor. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/08/2022 Last updated : 08/16/2022 |
azure-netapp-files | Whats New | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/whats-new.md | Azure NetApp Files is updated regularly. This article provides a summary about t ## July 2022 +* [Azure Application Consistent Snapshot Tool (AzAcSnap) 6](azacsnap-release-notes.md) + + [Azure Application Consistent Snapshot Tool](azacsnap-introduction.md) (AzAcSnap) is a command-line tool that enables customers to simplify data protection for third-party databases (SAP HANA) in Linux environments. With AzAcSnap 6, there is a new [release model](azacsnap-release-notes.md). AzAcSnap 6 also introduces the following new capabilities: ++ Now generally available: + * Oracle Database support + * Backint integration to work with Azure Backup + * [RunBefore and RunAfter](azacsnap-cmd-ref-runbefore-runafter.md) CLI options to execute custom shell scripts and commands before or after taking storage snapshots ++ In preview: + * Azure Key Vault to store Service Principal content + * Azure Managed Disk as an alternate storage back end + * [Azure NetApp Files datastores for Azure VMware Solution](../azure-vmware/attach-azure-netapp-files-to-azure-vmware-solution-hosts.md) is now in public preview. You can [Back up Azure NetApp Files datastores and VMs using Cloud Backup](../azure-vmware/backup-azure-netapp-files-datastores-vms.md). This virtual appliance installs in the Azure VMware Solution cluster and provides policy-based automated backup of VMs, integrated with Azure NetApp Files snapshot technology, for fast backups and restores of VMs, groups of VMs (organized in resource groups), or complete datastores. * [Active Directory connection enhancement: Reset Active Directory computer account password](create-active-directory-connections.md#reset-active-directory) (Preview) |
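As a rough sketch of how the RunBefore and RunAfter options noted in this entry are used, an AzAcSnap backup invocation might look like the following. The option names follow the AzAcSnap command reference cited above, but the script paths, snapshot prefix, and retention value are placeholders, not commands taken from this article.

```bash
# Hypothetical example: take an application-consistent snapshot of the data volumes,
# keep the five most recent snapshots, and run custom scripts before and after.
azacsnap -c backup --volume data --prefix daily --retention 5 \
  --runbefore '/home/azacsnap/scripts/pre-snapshot.sh' \
  --runafter '/home/azacsnap/scripts/post-snapshot.sh'
```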
azure-portal | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-portal/policy-reference.md | Title: Built-in policy definitions for Azure portal description: Lists Azure Policy built-in policy definitions for Azure portal. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/08/2022 Last updated : 08/16/2022 |
azure-resource-manager | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/custom-providers/policy-reference.md | Title: Built-in policy definitions for Azure Custom Resource Providers description: Lists Azure Policy built-in policy definitions for Azure Custom Resource Providers. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/08/2022 Last updated : 08/16/2022 |
azure-resource-manager | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/managed-applications/policy-reference.md | Title: Built-in policy definitions for Azure Managed Applications description: Lists Azure Policy built-in policy definitions for Azure Managed Applications. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/08/2022 Last updated : 08/16/2022 |
azure-resource-manager | Publish Service Catalog App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/managed-applications/publish-service-catalog-app.md | Title: Publish service catalog managed app -description: Shows how to create an Azure managed application that is intended for members of your organization. + Title: Publish Azure Managed Application in service catalog +description: Describes how to publish an Azure Managed Application in your service catalog that's intended for members of your organization. + Previously updated : 07/08/2022- Last updated : 08/16/2022 -# Quickstart: Create and publish a managed application definition +# Quickstart: Create and publish an Azure Managed Application definition -This quickstart provides an introduction to working with [Azure Managed Applications](overview.md). You can create and publish a managed application that's intended for members of your organization. +This quickstart provides an introduction to working with [Azure Managed Applications](overview.md). You create and publish a managed application that's stored in your service catalog and is intended for members of your organization. -To publish a managed application to your service catalog, you must: +To publish a managed application to your service catalog, do the following tasks: - Create an Azure Resource Manager template (ARM template) that defines the resources to deploy with the managed application. - Define the user interface elements for the portal when deploying the managed application.-- Create a _.zip_ package that contains the required template files.+- Create a _.zip_ package that contains the required template files. The _.zip_ package file has a 120-MB limit for a service catalog's managed application definition. - Decide which user, group, or application needs access to the resource group in the user's subscription. - Create the managed application definition that points to the _.zip_ package and requests access for the identity. +**Optional**: If you want to deploy your managed application definition with an ARM template in your own storage account, see [bring your own storage](#bring-your-own-storage-for-the-managed-application-definition). + > [!NOTE] > Bicep files can't be used in a managed application. You must convert a Bicep file to ARM template JSON with the Bicep [build](../bicep/bicep-cli.md#build) command. +## Prerequisites ++To complete this quickstart, you need the following items: ++- If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/) before you begin. +- [Visual Studio Code](https://code.visualstudio.com/) with the latest [Azure Resource Manager Tools extension](https://marketplace.visualstudio.com/items?itemName=msazurermtools.azurerm-vscode-tools). +- Install the latest version of [Azure PowerShell](/powershell/azure/install-az-ps) or [Azure CLI](/cli/azure/install-azure-cli). + ## Create the ARM template -Every managed application definition includes a file named _mainTemplate.json_. In it, you define the Azure resources to deploy. The template is no different than a regular ARM template. +Every managed application definition includes a file named _mainTemplate.json_. The template defines the Azure resources to deploy and is no different than a regular ARM template. -Create a file named _mainTemplate.json_. The name is case-sensitive. +Open Visual Studio Code, create a file with the case-sensitive name _mainTemplate.json_ and save it. 
Add the following JSON and save the file. It defines the parameters for creating a storage account, and specifies the properties for the storage account. Add the following JSON and save the file. It defines the parameters for creating "contentVersion": "1.0.0.0", "parameters": { "storageAccountNamePrefix": {- "type": "string" + "type": "string", + "maxLength": 11, + "metadata": { + "description": "Storage prefix must be maximum of 11 characters with only lowercase letters or numbers." + } }, "storageAccountType": { "type": "string" Add the following JSON and save the file. It defines the parameters for creating As a publisher, you define the portal experience for creating the managed application. The _createUiDefinition.json_ file generates the portal interface. You define how users provide input for each parameter using [control elements](create-uidefinition-elements.md) including drop-downs, text boxes, and password boxes. -Create a file named _createUiDefinition.json_ (This name is case-sensitive) +Open Visual Studio Code, create a file with the case-sensitive name _createUiDefinition.json_ and save it. -Add the following starter JSON to the file and save it. +Add the following JSON to the file and save it. ```json { To learn more, see [Get started with CreateUiDefinition](create-uidefinition-ove ## Package the files -Add the two files to a _.zip_ file named _app.zip_. The two files must be at the root level of the _.zip_ file. If you put them in a folder, you receive an error when creating the managed application definition that states the required files aren't present. +Add the two files to a file named _app.zip_. The two files must be at the root level of the _.zip_ file. If you put the files in a folder, you receive an error that states the required files aren't present when you create the managed application definition. -Upload the package to an accessible location from where it can be consumed. You'll need to provide a unique name for the storage account. +Upload the package to an accessible location from where it can be consumed. The storage account name must be globally unique across Azure and the length must be 3-24 characters with only lowercase letters and numbers. In the `Name` parameter, replace the placeholder `demostorageaccount` with your unique storage account name. # [PowerShell](#tab/azure-powershell) New-AzResourceGroup -Name storageGroup -Location eastus $storageAccount = New-AzStorageAccount ` -ResourceGroupName storageGroup `- -Name "mystorageaccount" ` + -Name "demostorageaccount" ` -Location eastus ` -SkuName Standard_LRS ` -Kind StorageV2 Set-AzStorageBlobContent ` az group create --name storageGroup --location eastus az storage account create \- --name mystorageaccount \ + --name demostorageaccount \ --resource-group storageGroup \ --location eastus \ --sku Standard_LRS \ --kind StorageV2 az storage container create \- --account-name mystorageaccount \ + --account-name demostorageaccount \ --name appcontainer \ --public-access blob az storage blob upload \- --account-name mystorageaccount \ + --account-name demostorageaccount \ --container-name appcontainer \ --name "app.zip" \- --file "D:\myapplications\app.zip" + --file "./app.zip" ``` +When you run the Azure CLI command to create the container, you might see a warning message about credentials, but the command will be successful. The reason is because although you own the storage account you assign roles like _Storage Blob Data Contributor_ to the storage account scope. 
For more information, see [Assign an Azure role for access to blob data](../../storage/blobs/assign-azure-role-data-access.md). After you add a role, it takes a few minutes to become active in Azure. You can then append the command with `--auth-mode login` and resolve the warning message. + ## Create the managed application definition +In this section you'll get identity information from Azure Active Directory, create a resource group, and create the managed application definition. + ### Create an Azure Active Directory user group or application The next step is to select a user group, user, or application for managing the resources for the customer. This identity has permissions on the managed resource group according to the role that is assigned. The role can be any Azure built-in role like Owner or Contributor. To create a new Active Directory user group, see [Create a group and add members in Azure Active Directory](../../active-directory/fundamentals/active-directory-groups-create-azure-portal.md). groupid=$(az ad group show --group mygroup --query id --output tsv) ### Get the role definition ID -Next, you need the role definition ID of the Azure built-in role you want to grant access to the user, user group, or application. Typically, you use the Owner or Contributor or Reader role. The following command shows how to get the role definition ID for the Owner role: +Next, you need the role definition ID of the Azure built-in role you want to grant access to the user, user group, or application. Typically, you use the Owner, Contributor, or Reader role. The following command shows how to get the role definition ID for the Owner role: # [PowerShell](#tab/azure-powershell) roleid=$(az role definition list --name Owner --query [].name --output tsv) ### Create the managed application definition -If you don't already have a resource group for storing your managed application definition, create one now: +If you don't already have a resource group for storing your managed application definition, create a new resource group. ++**Optional**: If you want to deploy your managed application definition with an ARM template in your own storage account, see [bring your own storage](#bring-your-own-storage-for-the-managed-application-definition). # [PowerShell](#tab/azure-powershell) az group create --name appDefinitionGroup --location westcentralus -Now, create the managed application definition resource. +Create the managed application definition resource. In the `Name` parameter, replace the placeholder `demostorageaccount` with your unique storage account name. # [PowerShell](#tab/azure-powershell) New-AzManagedApplicationDefinition ` # [Azure CLI](#tab/azure-cli) ```azurecli-interactive-blob=$(az storage blob url --account-name mystorageaccount --container-name appcontainer --name app.zip --output tsv) +blob=$(az storage blob url \ + --account-name demostorageaccount \ + --container-name appcontainer \ + --name app.zip --output tsv) az managedapp definition create \ --name "ManagedStorage" \ When the command completes, you have a managed application definition in your re Some of the parameters used in the preceding example are: - **resource group**: The name of the resource group where the managed application definition is created.-- **lock level**: The type of lock placed on the managed resource group. It prevents the customer from performing undesirable operations on this resource group. Currently, ReadOnly is the only supported lock level. 
When ReadOnly is specified, the customer can only read the resources present in the managed resource group. The publisher identities that are granted access to the managed resource group are exempt from the lock.+- **lock level**: The type of lock placed on the managed resource group. It prevents the customer from performing undesirable operations on this resource group. Currently, `ReadOnly` is the only supported lock level. When `ReadOnly` is specified, the customer can only read the resources present in the managed resource group. The publisher identities that are granted access to the managed resource group are exempt from the lock. - **authorizations**: Describes the principal ID and the role definition ID that are used to grant permission to the managed resource group. - **Azure PowerShell**: `"${groupid}:$roleid"` or you can use curly braces for each variable `"${groupid}:${roleid}"`. Use a comma to separate multiple values: `"${groupid1}:$roleid1", "${groupid2}:$roleid2"`. - **Azure CLI**: `"$groupid:$roleid"` or you can use curly braces as shown in PowerShell. Use a space to separate multiple values: `"$groupid1:$roleid1" "$groupid2:$roleid2"`. -- **package file URI**: The location of a _.zip_ package that contains the required files.+- **package file URI**: The location of a _.zip_ package file that contains the required files. ## Bring your own storage for the managed application definition -As an alternative, you can choose to store your managed application definition within a storage account provided by you during creation so that its location and access can be fully managed by you for your regulatory needs. +This section is optional. You can store your managed application definition in your own storage account so that its location and access can be managed by you for your regulatory needs. The _.zip_ package file has a 120-MB limit for a service catalog's managed application definition. > [!NOTE] > Bring your own storage is only supported with ARM template or REST API deployments of the managed application definition. -### Select your storage account +### Create your storage account -You must [create a storage account](../../storage/common/storage-account-create.md) to contain your managed application definition for use with Service Catalog. +You must create a storage account that will contain your managed application definition for use with a service catalog. The storage account name must be globally unique across Azure and the length must be 3-24 characters with only lowercase letters and numbers. -Copy the storage account's resource ID. It will be used later when deploying the definition. +This example creates a new resource group named `byosStorageRG`. In the `Name` parameter, replace the placeholder `definitionstorage` with your unique storage account name. -### Set the role assignment for "Appliance Resource Provider" in your storage account +# [PowerShell](#tab/azure-powershell) ++```azurepowershell-interactive +New-AzResourceGroup -Name byosStorageRG -Location eastus ++New-AzStorageAccount ` + -ResourceGroupName byosStorageRG ` + -Name "definitionstorage" ` + -Location eastus ` + -SkuName Standard_LRS ` + -Kind StorageV2 +``` ++Use the following command to store the storage account's resource ID in a variable named `storageId`. You'll use this variable when you deploy the managed application definition. 
++```azurepowershell-interactive +$storageId = (Get-AzStorageAccount -ResourceGroupName byosStorageRG -Name definitionstorage).Id +``` ++# [Azure CLI](#tab/azure-cli) ++```azurecli-interactive +az group create --name byosStorageRG --location eastus ++az storage account create \ + --name definitionstorage \ + --resource-group byosStorageRG \ + --location eastus \ + --sku Standard_LRS \ + --kind StorageV2 +``` ++Use the following command to store the storage account's resource ID in a variable named `storageId`. You'll use the variable's value when you deploy the managed application definition. ++```azurecli-interactive +storageId=$(az storage account show --resource-group byosStorageRG --name definitionstorage --query id) +``` ++++### Set the role assignment for your storage account Before your managed application definition can be deployed to your storage account, assign the **Contributor** role to the **Appliance Resource Provider** user at the storage account scope. This assignment lets the identity write definition files to your storage account's container. -For detailed steps, see [Assign Azure roles using the Azure portal](../../role-based-access-control/role-assignments-portal.md). +# [PowerShell](#tab/azure-powershell) ++In PowerShell, you can use variables for the role assignment. This example uses the `$storageId` you created in a previous step and creates the `$arpId` variable. ++```azurepowershell-interactive +$arpId = (Get-AzADServicePrincipal -SearchString "Appliance Resource Provider").Id ++New-AzRoleAssignment -ObjectId $arpId ` +-RoleDefinitionName Contributor ` +-Scope $storageId +``` ++# [Azure CLI](#tab/azure-cli) ++In Azure CLI, you need to use the string values to create the role assignment. This example gets string values from the `storageId` variable you created in a previous step and gets the object ID value for the Appliance Resource Provider. The command has placeholders for those values `arpGuid` and `storageId`. Replace the placeholders with the string values and use the quotes as shown. ++```azurecli-interactive +echo $storageId +az ad sp list --display-name "Appliance Resource Provider" --query [].id --output tsv ++az role assignment create --assignee "arpGuid" \ +--role "Contributor" \ +--scope "storageId" +``` ++If you're running CLI commands with Git Bash for Windows, you might get an `InvalidSchema` error because of the `scope` parameter's string. To fix the error, run `export MSYS_NO_PATHCONV=1` and then rerun your command to create the role assignment. ++++The **Appliance Resource Provider** is an Azure Enterprise application (service principal). Go to **Azure Active Directory** > **Enterprise applications** and change the search filter to **All Applications**. Search for _Appliance Resource Provider_. If it's not found, [register](../troubleshooting/error-register-resource-provider.md) the `Microsoft.Solutions` resource provider. ### Deploy the managed application definition with an ARM template -Use the following ARM template to deploy your packaged managed application as a new managed application definition in Service Catalog whose definition files are stored and maintained in your own storage account: +Use the following ARM template to deploy your packaged managed application as a new managed application definition in your service catalog. The definition files are stored and maintained in your storage account. ++Open Visual Studio Code, create a file with the name _azuredeploy.json_ and save it. ++Add the following JSON and save the file. 
```json { Use the following ARM template to deploy your packaged managed application as a "applicationName": { "type": "string", "metadata": {- "description": "Managed Application name" - } - }, - "storageAccountType": { - "type": "string", - "defaultValue": "Standard_LRS", - "allowedValues": [ - "Standard_LRS", - "Standard_GRS", - "Standard_ZRS", - "Premium_LRS" - ], - "metadata": { - "description": "Storage Account type" + "description": "Managed Application name." } }, "definitionStorageResourceID": { "type": "string", "metadata": {- "description": "Storage account resource ID for where you're storing your definition" + "description": "Storage account's resource ID where you're storing your managed application definition." } },- "_artifactsLocation": { + "packageFileUri": { "type": "string", "metadata": {- "description": "The base URI where artifacts required by this template are located." + "description": "The URI where the .zip package file is located." } } }, Use the following ARM template to deploy your packaged managed application as a "description": "Sample Managed application definition", "displayName": "Sample Managed application definition", "managedApplicationDefinitionName": "[parameters('applicationName')]",- "packageFileUri": "[parameters('_artifactsLocation')]", - "defLocation": "[parameters('definitionStorageResourceID')]", - "managedResourceGroupId": "[concat(subscription().id,'/resourceGroups/', concat(parameters('applicationName'),'_managed'))]", - "applicationDefinitionResourceId": "[resourceId('Microsoft.Solutions/applicationDefinitions',variables('managedApplicationDefinitionName'))]" + "packageFileUri": "[parameters('packageFileUri')]", + "defLocation": "[parameters('definitionStorageResourceID')]" }, "resources": [ { "type": "Microsoft.Solutions/applicationDefinitions",- "apiVersion": "2020-08-21-preview", + "apiVersion": "2021-07-01", "name": "[variables('managedApplicationDefinitionName')]", "location": "[parameters('location')]", "properties": { Use the following ARM template to deploy your packaged managed application as a } ``` -The `applicationDefinitions` properties include `storageAccountId` that contains the storage account ID for your storage account. You can verify that the application definition files are saved in your provided storage account in a container titled `applicationDefinitions`. +For more information about the ARM template's properties, see [Microsoft.Solutions](/azure/templates/microsoft.solutions/applicationdefinitions). ++### Deploy the definition ++Create a resource group named _byosDefinitionRG_ and deploy the managed application definition to your storage account. ++# [PowerShell](#tab/azure-powershell) ++```azurepowershell-interactive +New-AzResourceGroup -Name byosDefinitionRG -Location eastus ++$storageId ++New-AzResourceGroupDeployment ` + -ResourceGroupName byosDefinitionRG ` + -TemplateFile .\azuredeploy.json +``` ++# [Azure CLI](#tab/azure-cli) ++```azurecli-interactive +az group create --name byosDefinitionRG --location eastus ++echo $storageId ++az deployment group create \ + --resource-group byosDefinitionRG \ + --template-file ./azuredeploy.json +``` ++++You'll be prompted for three parameters to deploy the definition. ++| Parameter | Value | +| - | - | +| `applicationName` | Choose a name for your managed application definition. For this example, use _sampleManagedAppDefintion_.| +| `definitionStorageResourceID` | Enter your storage account's resource ID. 
You created the `storageId` variable with this value in an earlier step. Don't wrap the resource ID with quotes. | +| `packageFileUri` | Enter the URI to your _.zip_ package file. Use the URI for the _.zip_ [package file](#package-the-files) you created in an earlier step. The format is `https://yourStorageAccountName.blob.core.windows.net/appcontainer/app.zip`. | ++### Verify definition files storage ++During deployment, the template's `storageAccountId` property uses your storage account's resource ID and creates a new container with the case-sensitive name `applicationdefinitions`. The files from the _.zip_ package you specified during the deployment are stored in the new container. ++You can use the following commands to verify that the managed application definition files are saved in your storage account's container. In the `Name` parameter, replace the placeholder `definitionstorage` with your unique storage account name. ++# [PowerShell](#tab/azure-powershell) ++```azurepowershell-interactive +Get-AzStorageAccount -ResourceGroupName byosStorageRG -Name definitionstorage | +Get-AzStorageContainer -Name applicationdefinitions | +Get-AzStorageBlob | Select-Object -Property * +``` ++# [Azure CLI](#tab/azure-cli) ++```azurecli-interactive +az storage blob list \ + --container-name applicationdefinitions \ + --account-name definitionstorage \ + --query "[].{container:container, name:name}" +``` ++When you run the Azure CLI command, you might see a warning message similar to the CLI command in [package the files](#package-the-files). ++ > [!NOTE]-> For added security, you can create a managed applications definition and store it in an [Azure storage account blob where encryption is enabled](../../storage/common/storage-service-encryption.md). The definition contents are encrypted through the storage account's encryption options. Only users with permissions to the file can see the definition in Service Catalog. +> For added security, you can create a managed applications definition and store it in an [Azure storage account blob where encryption is enabled](../../storage/common/storage-service-encryption.md). The definition contents are encrypted through the storage account's encryption options. Only users with permissions to the file can see the definition in your service catalog. ## Make sure users can see your definition |
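Once the definition from this quickstart exists, you can confirm it and deploy a managed application from it with the Azure CLI `az managedapp` commands. This is a hedged sketch: the definition name and resource groups match the examples above, but the application name, target resource group, managed resource group ID, and parameter values are placeholders.

```azurecli
# Confirm the definition published earlier in this quickstart.
az managedapp definition show \
  --name ManagedStorage \
  --resource-group appDefinitionGroup

# Hypothetical deployment of a managed application from that definition.
# "applicationGroup", "storageApp", and the subscription ID are placeholders.
appDefinitionId=$(az managedapp definition show \
  --name ManagedStorage \
  --resource-group appDefinitionGroup \
  --query id --output tsv)

az managedapp create \
  --name storageApp \
  --location eastus \
  --kind ServiceCatalog \
  --resource-group applicationGroup \
  --managedapp-definition-id $appDefinitionId \
  --managed-rg-id "/subscriptions/<subscription-id>/resourceGroups/storageApp_managed" \
  --parameters '{"storageAccountNamePrefix": {"value": "demostore"}, "storageAccountType": {"value": "Standard_LRS"}}'
```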
azure-resource-manager | Networking Move Limitations | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/move-limitations/networking-move-limitations.md | Title: Move Azure Networking resources to new subscription or resource group description: Use Azure Resource Manager to move virtual networks and other networking resources to a new resource group or subscription. Previously updated : 08/15/2022 Last updated : 08/16/2022 # Move networking resources to new resource group or subscription If you want to move networking resources to a new region, see [Tutorial: Move Az ## Dependent resources > [!NOTE]-> Please note that any resource, including VPN Gateways, associated with Public IP Standard SKU addresses are not currently able to move across subscriptions. +> Any resource, including a VPN Gateway, that is associated with a public IP Standard SKU address must be disassociated from the public IP address before moving across subscriptions. When moving a resource, you must also move its dependent resources (for example, public IP addresses, virtual network gateways, and all associated connection resources). Local network gateways can be in a different resource group. |
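For the disassociation note in this entry, one way to break the association before a move is to clear the public IP from the resource's IP configuration. The following is a hedged sketch for a network interface rather than a VPN gateway; all names are placeholders, and the generic `--remove` syntax should be verified for your resource type.

```azurecli
# Hypothetical example: detach a Standard SKU public IP from a NIC's IP configuration
# before moving the NIC across subscriptions. Resource names are placeholders.
az network nic ip-config update \
  --resource-group myResourceGroup \
  --nic-name myNic \
  --name ipconfig1 \
  --remove publicIpAddress
```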
azure-resource-manager | Move Resource Group And Subscription | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/move-resource-group-and-subscription.md | Title: Move resources to a new subscription or resource group description: Use Azure Resource Manager to move resources to a new resource group or subscription. Previously updated : 11/30/2021 Last updated : 08/15/2022 There are some important steps to do before moving a resource. By verifying thes * [Transfer ownership of an Azure subscription to another account](../../cost-management-billing/manage/billing-subscription-transfer.md) * [How to associate or add an Azure subscription to Azure Active Directory](../../active-directory/fundamentals/active-directory-how-subscriptions-associated-directory.md) +1. If you're attempting to move resources to or from a Cloud Solution Provider (CSP) partner, see [Transfer Azure subscriptions between subscribers and CSPs](../../cost-management-billing/manage/transfer-subscriptions-subscribers-csp.md). + 1. The destination subscription must be registered for the resource provider of the resource being moved. If not, you receive an error stating that the **subscription is not registered for a resource type**. You might see this error when moving a resource to a new subscription, but that subscription has never been used with that resource type. For PowerShell, use the following commands to get the registration status: |
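The PowerShell commands referenced at the end of this entry are truncated in this excerpt, so they're left as-is. As an illustration only (not the article's own commands), an equivalent registration check with Azure CLI looks like the following; `Microsoft.Batch` is just an example namespace.

```azurecli
# Check whether the destination subscription is registered for a resource provider.
az provider show --namespace Microsoft.Batch --query registrationState --output tsv

# Register the provider if the state isn't "Registered".
az provider register --namespace Microsoft.Batch
```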
azure-resource-manager | Move Support Resources | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/move-support-resources.md | Last updated 08/15/2022 This article lists whether an Azure resource type supports the move operation. It also provides information about special conditions to consider when moving a resource. +Before starting your move operation, review the [checklist](./move-resource-group-and-subscription.md#checklist-before-moving-resources) to make sure you have satisfied prerequisites. + > [!IMPORTANT] > In most cases, a child resource can't be moved independently from its parent resource. Child resources have a resource type in the format of `<resource-provider-namespace>/<parent-resource>/<child-resource>`. For example, `Microsoft.ServiceBus/namespaces/queues` is a child resource of `Microsoft.ServiceBus/namespaces`. When you move the parent resource, the child resource is automatically moved with it. If you don't see a child resource in this article, you can assume it is moved with the parent resource. If the parent resource doesn't support move, the child resource can't be moved. Jump to a resource provider namespace: ## Microsoft.SaaS +> [!IMPORTANT] +> Marketplace offerings that are implemented through the Microsoft.Saas resource provider support resource group and subscription moves. These offerings are represented by the `resources` type below. For example, **SendGrid** is implemented through Microsoft.Saas and supports move operations. However, limitations defined in the [move requirements checklist](./move-resource-group-and-subscription.md#checklist-before-moving-resources) may limit the supported move scenarios. For example, you can't move the resources from a Cloud Solution Provider (CSP) partner. + > [!div class="mx-tableFixed"] > | Resource type | Resource group | Subscription | Region move | > | - | -- | - | -- | |
azure-resource-manager | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/policy-reference.md | Title: Built-in policy definitions for Azure Resource Manager description: Lists Azure Policy built-in policy definitions for Azure Resource Manager. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/08/2022 Last updated : 08/16/2022 |
azure-resource-manager | Tag Support | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/tag-support.md | To get the same data as a file of comma-separated values, download [tag-support. > | managedInstances | Yes | Yes | > | managedInstances / administrators | No | No | > | managedInstances / advancedThreatProtectionSettings | No | No |-> | managedInstances / databases | Yes | Yes | +> | managedInstances / databases | Yes | No | > | managedInstances / databases / advancedThreatProtectionSettings | No | No | > | managedInstances / databases / backupLongTermRetentionPolicies | No | No | > | managedInstances / databases / vulnerabilityAssessments | No | No | |
azure-resource-manager | Template Tutorial Add Outputs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/template-tutorial-add-outputs.md | Title: Tutorial - add outputs to template description: Add outputs to your Azure Resource Manager template (ARM template) to simplify the syntax. Previously updated : 03/27/2020 Last updated : 08/17/2022 -In this tutorial, you learn how to return a value from your Azure Resource Manager template (ARM template). You use outputs when you need a value from a deployed resource. This tutorial takes **7 minutes** to complete. +In this tutorial, you learn how to return a value from your Azure Resource Manager template (ARM template). You use outputs when you need a value for a resource you deploy. This tutorial takes **7 minutes** to complete. ## Prerequisites We recommend that you complete the [tutorial about variables](template-tutorial-add-variables.md), but it's not required. -You must have Visual Studio Code with the Resource Manager Tools extension, and either Azure PowerShell or Azure CLI. For more information, see [template tools](template-tutorial-create-first-template.md#get-tools). +You need to have Visual Studio Code with the Resource Manager Tools extension, and either Azure PowerShell or Azure Command-Line Interface (CLI). For more information, see [template tools](template-tutorial-create-first-template.md#get-tools). ## Review template At the end of the previous tutorial, your template had the following JSON: :::code language="json" source="~/resourcemanager-templates/get-started-with-templates/add-variable/azuredeploy.json"::: -It deploys a storage account, but it doesn't return any information about the storage account. You might need to capture properties from a new resource so they're available later for reference. +It deploys a storage account, but it doesn't return any information about it. You might need to capture properties from your new resource so they're available later for reference. ## Add outputs -You can use outputs to return values from the template. For example, it might be helpful to get the endpoints for your new storage account. +You can use outputs to return values from the template. It might be helpful, for example, to get the endpoints for your new storage account. The following example highlights the change to your template to add an output value. Copy the whole file and replace your template with its contents. There are some important items to note about the output value you added. The type of returned value is set to `object`, which means it returns a JSON object. -It uses the [reference](template-functions-resource.md#reference) function to get the runtime state of the storage account. To get the runtime state of a resource, you pass in the name or ID of a resource. In this case, you use the same variable you used to create the name of the storage account. +It uses the [reference](template-functions-resource.md#reference) function to get the runtime state of the storage account. To get the runtime state of a resource, pass the name or ID of a resource. In this case, you use the same variable you used to create the name of the storage account. Finally, it returns the `primaryEndpoints` property from the storage account. New-AzResourceGroupDeployment ` # [Azure CLI](#tab/azure-cli) -To run this deployment command, you must have the [latest version](/cli/azure/install-azure-cli) of Azure CLI. 
+To run this deployment command, you need to have the [latest version](/cli/azure/install-azure-cli) of Azure CLI. ```azurecli az deployment group create \ az deployment group create \ -In the output for the deployment command, you'll see an object similar to the following example only if the output is in JSON format: +In the output for the deployment command, you see an object similar to the following example only if the output is in JSON format: ```json { In the output for the deployment command, you'll see an object similar to the fo ``` > [!NOTE]-> If the deployment failed, use the `verbose` switch to get information about the resources being created. Use the `debug` switch to get more information for debugging. +> If the deployment fails, use the `verbose` switch to get information about the resources being created. Use the `debug` switch to get more information for debugging. ## Review your work -You've done a lot in the last six tutorials. Let's take a moment to review what you have done. You created a template with parameters that are easy to provide. The template is reusable in different environments because it allows for customization and dynamically creates needed values. It also returns information about the storage account that you could use in your script. +You've done a lot in the last six tutorials. Let's take a moment to review what you've done. You created a template with parameters that are easy to provide. The template is reusable in different environments because it allows for customization and dynamically creates needed values. It also returns information about the storage account that you could use in your script. Now, let's look at the resource group and deployment history. Now, let's look at the resource group and deployment history. If you're moving on to the next tutorial, you don't need to delete the resource group. -If you're stopping now, you might want to clean up the resources you deployed by deleting the resource group. +If you're stopping now, you might want to delete the resource group. -1. From the Azure portal, select **Resource group** from the left menu. -2. Enter the resource group name in the **Filter by name** field. -3. Select the resource group name. +1. From the Azure portal, select **Resource groups** from the left menu. +2. Type the resource group name in the **Filter for any field...** text field. +3. Check the box next to **myResourceGroup** and select **myResourceGroup** or your resource group name. 4. Select **Delete resource group** from the top menu. ## Next steps -In this tutorial, you added a return value to the template. In the next tutorial, you'll learn how to export a template and use parts of that exported template in your template. +In this tutorial, you added a return value to the template. In the next tutorial, you learn how to export a template and use parts of that exported template in your template. > [!div class="nextstepaction"] > [Use exported template](template-tutorial-export-template.md) |
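After the deployment in this tutorial finishes, the output object can also be read back from the deployment history. This is a minimal sketch: the deployment name `addoutputs` and the output name `storageEndpoint` are assumptions based on this tutorial's template, so substitute the names your deployment actually uses.

```azurecli
# Hypothetical example: read the template output back from the deployment history.
# "addoutputs" and "storageEndpoint" are assumed names; replace them with your own.
az deployment group show \
  --resource-group myResourceGroup \
  --name addoutputs \
  --query properties.outputs.storageEndpoint.value
```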
azure-signalr | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/policy-reference.md | Title: Built-in policy definitions for Azure SignalR description: Lists Azure Policy built-in policy definitions for Azure SignalR. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/08/2022 Last updated : 08/16/2022 |
azure-video-indexer | Create Account Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/create-account-portal.md | Search for **Microsoft.Media** and **Microsoft.EventGrid**. If not in the regist 1. In the Create an Azure Video Indexer resource section, enter required values (the descriptions follow after the image). > [!div class="mx-imgBorder"]- > :::image type="content" source="./media/create-account-portal/avi-create-blade.png" alt-text="Screenshot showing how to create an Azure Video Indexer resource." lightbox="./media/create-account-portal/avi-create-blade.png"::: + > :::image type="content" source="./media/create-account-portal/avi-create-blade.png" alt-text="Screenshot showing how to create an Azure Video Indexer resource."::: Here are the definitions: |
azure-vmware | Enable Managed Snat For Workloads | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/enable-managed-snat-for-workloads.md | With this capability, you: ## Reference architecture The architecture shows Internet access to and from your Azure VMware Solution private cloud using a Public IP directly to the NSX Edge. ## Configure Outbound Internet access using Managed SNAT in the Azure portal |
azure-vmware | Enable Public Ip Nsx Edge | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/enable-public-ip-nsx-edge.md | With this capability, you have the following features: ## Reference architecture The architecture shows Internet access to and from your Azure VMware Solution private cloud using a Public IP directly to the NSX Edge. ## Configure a Public IP in the Azure portal 1. Log on to the Azure portal. |
backup | Backup Support Matrix | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-support-matrix.md | Title: Azure Backup support matrix description: Provides a summary of support settings and limitations for the Azure Backup service. Previously updated : 06/08/2022 Last updated : 08/16/2022 Disk deduplication support is as follows: - Disk deduplication is supported on-premises when you use DPM or MABS to back up Hyper-V VMs that are running Windows. Windows Server performs data deduplication (at the host level) on virtual hard disks (VHDs) that are attached to the VM as backup storage. - Deduplication isn't supported in Azure for any Backup component. When DPM and MABS are deployed in Azure, the storage disks attached to the VM can't be deduplicated. +>[!Note] +>Azure VM backup doesn't support Azure VMs with deduplication. That is, Azure Backup doesn't deduplicate backup data, except in MABS/MARS. + ## Security and encryption support Azure Backup supports encryption for in-transit and at-rest data. |
backup | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/policy-reference.md | Title: Built-in policy definitions for Azure Backup description: Lists Azure Policy built-in policy definitions for Azure Backup. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/08/2022 Last updated : 08/16/2022 |
bastion | Tutorial Create Host Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/tutorial-create-host-portal.md | description: Learn how to deploy Bastion using settings that you specify - Azure Previously updated : 08/03/2022 Last updated : 08/15/2022 |
batch | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/policy-reference.md | Title: Built-in policy definitions for Azure Batch description: Lists Azure Policy built-in policy definitions for Azure Batch. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/08/2022 Last updated : 08/16/2022 |
cdn | Cdn Sas Storage Support | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-sas-storage-support.md | To use Azure CDN security token authentication, you must have an **Azure CDN Pre ``` $1&sv=2017-07-29&ss=b&srt=c&sp=r&se=2027-12-19T17:35:58Z&st=2017-12-19T09:35:58Z&spr=https&sig=kquaXsAuCLXomN7R00b8CYM13UpDbAHcsRfGOW3Du1M%3D ```-  -  + :::image type="content" source="./media/cdn-sas-storage-support/cdn-url-rewrite-rule.png" alt-text="Screenshot of CDN URL Rewrite rule - left."::: + :::image type="content" source="./media/cdn-sas-storage-support/cdn-url-rewrite-rule-option-3.png" alt-text="Screenshot of CDN URL Rewrite rule - right."::: 3. If you renew the SAS, ensure that you update the Url Rewrite rule with the new SAS token. |
center-sap-solutions | Install Software | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/center-sap-solutions/install-software.md | You also can [upload the components manually](#upload-components-manually) inste
 Before you can download the software, set up an Azure Storage account for the downloads.
-1. [Create an Ubuntu 20.04 VM in Azure](/cli/azure/install-azure-cli-linux?pivots=apt).
+1. [Create an Azure Storage account through the Azure portal](../storage/common/storage-account-create.md). Make sure to create the storage account in the same subscription as your SAP system infrastructure. ++1. Create a container within the Azure Storage account named `sapbits`. ++ 1. On the storage account's sidebar menu, select **Containers** under **Data storage**. ++ 1. Select **+ Container**. ++ 1. On the **New container** pane, for **Name**, enter `sapbits`. ++ 1. Select **Create**. + +1. Create an Ubuntu 20.04 VM in Azure.
    1. Sign in to the VM.
 Before you can download the software, set up an Azure Storage account for the do
1. [Update the Azure CLI](/cli/azure/update-azure-cli) to version 2.30.0 or higher.
-1. Install the following packages: -- - `pip3` version `pip-21.3.1.tar.gz` - - `wheel` version 0.37.1 - - `jq` version 1.6 - - `ansible` version 2.9.27 - - `netaddr` version 0.8.0 - - `zip` - - `netaddr` version 0.8.0 1. Sign in to Azure: Before you can download the software, set up an Azure Storage account for the do az login ``` -1. [Create an Azure Storage account through the Azure portal](../storage/common/storage-account-create.md). Make sure to create the storage account in the same subscription as your SAP system infrastructure. --1. Create a container within the Azure Storage account named `sapbits`. -- 1. On the storage account's sidebar menu, select **Containers** under **Data storage**. -- 1. Select **+ Container**. -- 1. On the **New container** pane, for **Name**, enter `sapbits`. -- 1. Select **Create**. - 1. Download the following shell script for the deployer VM packages. ```azurecli After setting up your Azure Storage account, you can download the SAP installati 1. Sign in to the Ubuntu VM that you created in the [previous section](#set-up-storage-account). +1. Install Ansible 2.9.27 on the Ubuntu VM. ++ ```bash + sudo pip3 install ansible==2.9.27 + ``` + 1. Clone the SAP automation repository from GitHub. ```azurecli git clone https://github.com/Azure/sap-automation.git ``` -1. Generate a shared access signature (SAS) token for the `sapbits` container. -- 1. In the Azure portal, open the Azure Storage account. - - 1. Open the `sapbits` container. -- 1. On the container's sidebar menu, select **Shared access signature** under **Security + networking**. -- 1. On the SAS page, under **Allowed resource types**, select **Container**. -- 1. Configure other settings as necessary. -- 1. Select **Generate SAS and connection string**. -- 1. Copy the **SAS token** value. Make sure to copy the `?` prefix with the token. - 1. Run the Ansible script **playbook_bom_download** with your own information. - For `<username>`, use your SAP username. |
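The reordered steps above create the storage account and the `sapbits` container in the Azure portal. For readers who script their setup, a minimal Azure CLI sketch of the same storage preparation follows; the resource group, location, account name, and SAS settings are placeholder assumptions, and only the container name `sapbits` comes from the article.

```azurecli
# Placeholder names; only the container name "sapbits" is taken from the article.
rg=sap-media-rg
sa=sapbits$RANDOM
az group create --name $rg --location westeurope

# Storage account in the same subscription as your SAP system infrastructure.
az storage account create --name $sa --resource-group $rg --sku Standard_LRS

# Container that will hold the downloaded SAP installation media.
az storage container create --account-name $sa --name sapbits --auth-mode login

# Optional: generate a container-level SAS token if your download workflow needs one.
az storage container generate-sas \
  --account-name $sa \
  --name sapbits \
  --permissions rwdl \
  --expiry 2023-12-31T00:00:00Z \
  --auth-mode key \
  --output tsv
```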
cloud-shell | Cloud Shell Windows Users | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-shell/cloud-shell-windows-users.md | Title: Azure Cloud Shell for Windows users | Microsoft Docs description: Guide for users who are not familiar with Linux systems documentationcenter: ''-+ tags: azure-resource-manager- ++ms.assetid: vm-linux Previously updated : 08/03/2018- Last updated : 08/16/2022+ # PowerShell in Azure Cloud Shell for Windows users PowerShell specific experiences, such as `tab-completing` cmdlet names, paramete Some existing PowerShell aliases have the same names as built-in Linux commands, such as `cat`,`ls`, `sort`, `sleep`, etc. In PowerShell Core 6, aliases that collide with built-in Linux commands have been removed.-Below are the common aliases that have been removed as well as their equivalent commands: +Below are the common aliases that have been removed as well as their equivalent commands: |Removed Alias |Equivalent Command | ||| mkdir (Split-Path $profile.CurrentUserAllHosts) Under `$HOME/.config/PowerShell`, you can create your profile files - `profile.ps1` and/or `Microsoft.PowerShell_profile.ps1`. -## What's new in PowerShell Core 6 +## What's new in PowerShell -For more information about what is new in PowerShell Core 6, reference the [PowerShell docs](/powershell/scripting/whats-new/what-s-new-in-powershell-70) and the [Getting Started with PowerShell Core](https://blogs.msdn.microsoft.com/powershell/2017/06/09/getting-started-with-powershell-core-on-windows-mac-and-linux/) blog post. +For more information about what is new in PowerShell, reference the +[PowerShell What's New](/powershell/scripting/whats-new/overview) and +[Discover PowerShell](/powershell/scripting/discover-powershell). |
cognitive-services | How To Speech Synthesis Viseme | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-speech-synthesis-viseme.md | Title: How to get facial pose events for lip-sync + Title: Get facial position with viseme description: Speech SDK supports viseme events during speech synthesis, which represent key poses in observed speech, such as the position of the lips, jaw, and tongue when producing a particular phoneme. The overall workflow of viseme is depicted in the following flowchart:  -You can request viseme output in SSML. For details, see [how to use viseme element in SSML](speech-synthesis-markup.md#viseme-element). - ## Viseme ID Viseme ID refers to an integer number that specifies a viseme. We offer 22 different visemes, each depicting the mouth shape for a specific set of phonemes. There's no one-to-one correspondence between visemes and phonemes. Often, several phonemes correspond to a single viseme, because they look the same on the speaker's face when they're produced, such as `s` and `z`. For more specific information, see the table for [mapping phonemes to viseme IDs](#map-phonemes-to-visemes). The blend shapes JSON string is represented as a 2-dimensional matrix. Each row To get viseme with your synthesized speech, subscribe to the `VisemeReceived` event in the Speech SDK. +> [!NOTE] +> To request SVG or blend shapes output, you should use the `mstts:viseme` element in SSML. For details, see [how to use viseme element in SSML](speech-synthesis-markup.md#viseme-element). + The following snippet shows how to subscribe to the viseme event: ::: zone pivot="programming-language-csharp" |
cognitive-services | Language Support | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/language-support.md | The following neural voices are in public preview. | Chinese (Mandarin, Simplified) | `zh-CN-sichuan` | Male | `zh-CN-sichuan-YunxiSichuanNeural` <sup>New</sup> | General, Sichuan accent | | English (United States) | `en-US` | Female | `en-US-JaneNeural` <sup>New</sup> | General, multiple voice styles available [using SSML](speech-synthesis-markup.md#adjust-speaking-styles) | | English (United States) | `en-US` | Female | `en-US-NancyNeural` <sup>New</sup> | General, multiple voice styles available [using SSML](speech-synthesis-markup.md#adjust-speaking-styles) |+| English (United States) | `en-US` | Male | `en-US-AIGenerate1Neural` <sup>New</sup> | General| | English (United States) | `en-US` | Male | `en-US-DavisNeural` <sup>New</sup> | General, multiple voice styles available [using SSML](speech-synthesis-markup.md#adjust-speaking-styles) | | English (United States) | `en-US` | Male | `en-US-JasonNeural` <sup>New</sup> | General, multiple voice styles available [using SSML](speech-synthesis-markup.md#adjust-speaking-styles) | | English (United States) | `en-US` | Male | `en-US-RogerNeural` <sup>New</sup> | General| |
cognitive-services | Sovereign Clouds | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/sovereign-clouds.md | The following table lists the base URLs for Azure sovereign cloud endpoints: | Azure portal for US Government | `https://portal.azure.us` | | Azure portal China operated by 21 Vianet | `https://portal.azure.cn` | +<!-- markdownlint-disable MD033 --> + ## Translator: sovereign clouds ### [Azure US Government](#tab/us) The following table lists the base URLs for Azure sovereign cloud endpoints: |Azure portal | <ul><li>[Azure Government Portal](https://portal.azure.us/)</li></ul>| | Available regions</br></br>The region-identifier is a required header when using Translator for the government cloud. | <ul><li>`usgovarizona` </li><li> `usgovvirginia`</li></ul>| |Available pricing tiers|<ul><li>Free (F0) and Standard (S0). See [Translator pricing](https://azure.microsoft.com/pricing/details/cognitive-services/translator/)</li></ul>|-|Supported Features | <ul><li>Text Translation</li><li>Document Translation</li><li>Custom Translation</li></ul>| +|Supported Features | <ul><li>[Text Translation](https://docs.azure.cn/cognitive-services/translator/reference/v3-0-reference)</li><li>[Document Translation](document-translation/overview.md)</li><li>[Custom Translator](custom-translator/overview.md)</li></ul>| |Supported Languages| <ul><li>[Translator language support](language-support.md)</li></ul>| <!-- markdownlint-disable MD036 --> https://api.cognitive.microsofttranslator.us/ #### Document Translation custom endpoint -Replace the `<your-custom-domain>` parameter with your [custom domain endpoint](document-translation/get-started-with-document-translation.md#what-is-the-custom-domain-endpoint). - ```http-https://<your-custom-domain>.cognitiveservices.azure.us/ +https://<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.us/translator/text/batch/v1.0 ``` #### Custom Translator portal The Azure China cloud is a physical and logical network-isolated instance of clo ||| |Azure portal |<ul><li>[Azure China 21 Vianet Portal](https://portal.azure.cn/)</li></ul>| |Regions <br></br>The region-identifier is a required header when using a multi-service resource. | <ul><li>`chinanorth` </li><li> `chinaeast2`</li></ul>|-|Supported Feature|<ul><li>[Text Translation](https://docs.azure.cn/cognitive-services/translator/reference/v3-0-reference)</li></ul>| +|Supported Feature|<ul><li>[Text Translation](https://docs.azure.cn/cognitive-services/translator/reference/v3-0-reference)</li><li>[Document Translation](document-translation/overview.md)</li></ul>| |Supported Languages|<ul><li>[Translator language support.](https://docs.azure.cn/cognitive-services/translator/language-support)</li></ul>| <!-- markdownlint-disable MD036 --> https://<region-identifier>.api.cognitive.azure.cn/sts/v1.0/issueToken https://api.translator.azure.cn/translate ``` -### Example API translation request +### Example text translation request Translate a single sentence from English to Simplified Chinese. 
curl -X POST "https://api.translator.azure.cn/translate?api-version=3.0&from=en& ] ``` -> [!div class="nextstepaction"] -> [Azure China: Translator Text reference](https://docs.azure.cn/cognitive-services/translator/reference/v3-0-reference) +#### Document Translation custom endpoint ++#### Document Translation custom endpoint ++```http +https://<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.us/translator/text/batch/v1.0 +``` ++### Example batch translation request ++```json +{ + "inputs": [ + { + "source": { + "sourceUrl": "https://my.blob.core.windows.net/source-en?sv=2019-12-12&st=2021-03-05T17%3A45%3A25Z&se=2021-03-13T17%3A45%3A00Z&sr=c&sp=rl&sig=SDRPMjE4nfrH3csmKLILkT%2Fv3e0Q6SWpssuuQl1NmfM%3D" + }, + "targets": [ + { + "targetUrl": "https://my.blob.core.windows.net/target-zh-Hans?sv=2019-12-12&st=2021-03-05T17%3A49%3A02Z&se=2021-03-13T17%3A49%3A00Z&sr=c&sp=wdl&sig=Sq%2BYdNbhgbq4hLT0o1UUOsTnQJFU590sWYo4BOhhQhs%3D", + "language": "zh-Hans" + } + ] + } + ] +} +``` -## Next step +## Next steps > [!div class="nextstepaction"] > [Learn more about Translator](index.yml) |
cognitive-services | Cognitive Services Limited Access | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/cognitive-services-limited-access.md | -Our vision is to empower developers and organizations to leverage AI to transform society in positive ways. We encourage responsible AI practices to protect the rights and safety of individuals. To achieve this, Microsoft has implemented a Limited Access policy grounded in our [AI Principles](https://www.microsoft.com/ai/responsible-ai) to support responsible deployment of Azure services. +Our vision is to empower developers and organizations to use AI to transform society in positive ways. We encourage responsible AI practices to protect the rights and safety of individuals. To achieve this, Microsoft has implemented a Limited Access policy grounded in our [AI Principles](https://www.microsoft.com/ai/responsible-ai) to support responsible deployment of Azure services. ## What is Limited Access? -Limited Access services require registration, and only customers managed by Microsoft, meaning those who are working directly with Microsoft account teams, are eligible for access. The use of these services is limited to the use case selected at the time of registration. Customers must acknowledge that they have reviewed and agree to the terms of service. Microsoft may require customers to re-verify this information. +Limited Access services require registration, and only customers managed by Microsoft, meaning those who are working directly with Microsoft account teams, are eligible for access. The use of these services is limited to the use case selected at the time of registration. Customers must acknowledge that they've reviewed and agree to the terms of service. Microsoft may require customers to reverify this information. -Limited Access services are made available to customers under the terms governing their subscription to Microsoft Azure Services (including the [Service Specific Terms](https://go.microsoft.com/fwlink/?linkid=2018760)). Please review these terms carefully as they contain important conditions and obligations governing your use of Limited Access services. +Limited Access services are made available to customers under the terms governing their subscription to Microsoft Azure Services (including the [Service Specific Terms](https://go.microsoft.com/fwlink/?linkid=2018760)). Review these terms carefully as they contain important conditions and obligations governing your use of Limited Access services. ## List of Limited Access services The following services are Limited Access: - [Computer Vision](/legal/cognitive-services/computer-vision/limited-access?context=/azure/cognitive-services/computer-vision/context/context): Celebrity Recognition feature - [Azure Video Indexer](../azure-video-indexer/limited-access-features.md): Celebrity Recognition and Face Identify features -Features of these services that are not listed above are available without registration. +Features of these services that aren't listed above are available without registration. ## FAQ about Limited Access -### How do I apply for access? +### How do I register for access? 
-Please submit an intake form for each Limited Access service you would like to use: +Submit a registration form for each Limited Access service you would like to use: - [Custom Neural Voice](https://aka.ms/customneural): Pro features - [Speaker Recognition](https://aka.ms/azure-speaker-recognition): All features Please submit an intake form for each Limited Access service you would like to u - [Computer Vision](https://aka.ms/facerecognition): Celebrity Recognition feature - [Azure Video Indexer](https://aka.ms/facerecognition): Celebrity Recognition and Face Identify features -### How long will the application process take? +### How long will the registration process take? -Review may take 5-10 business days. You will receive an email as soon as your application is reviewed. +Review may take 5-10 business days. You'll receive an email as soon as your registration form is reviewed. ### Who is eligible to use Limited Access services? -Limited Access services are available only to customers managed by Microsoft. Additionally, Limited Access services are only available for certain use cases, and customers must select their intended use case in their application. +Limited Access services are available only to customers managed by Microsoft. Additionally, Limited Access services are only available for certain use cases, and customers must select their intended use case in their registration. -Please use an email address affiliated with your organization in your application. Applications submitted with personal email addresses will be denied. +Use an email address affiliated with your organization in your registration. Registrations submitted with personal email addresses will be denied. -If you are not a managed customer, we invite you to submit an application using the same forms and we will reach out to you about any opportunities to join an eligibility program. +If you aren't a managed customer, we invite you to submit a registration using the same forms and we'll reach out to you about any opportunities to join an eligibility program. -### What if I don't know whether I'm a managed customer? What if I don't know my Microsoft contact or don't know if my organization has one? +### What is a managed customer? What if I don't know whether I'm a managed customer? -We invite you to submit an intake form for the features you'd like to use, and we'll verify your eligibility for access. +Managed customers work with Microsoft account teams. We invite you to submit a registration form for the features you'd like to use, and we'll verify your eligibility for access. We are not able to accept requests to become a managed customer at this time. -### What happens if I'm an existing customer and I don't apply? +### What happens if I'm an existing customer and I don't register? -Existing customers have until June 30, 2023 to submit an intake form and be approved to continue using Limited Access services after June 30, 2023. We recommend allowing 10 business days for review. Without an approved application, you will be denied access after June 30, 2023. +Existing customers have until June 30, 2023 to submit a registration form and be approved to continue using Limited Access services after June 30, 2023. We recommend allowing 10 business days for review. Without approved registration, you'll be denied access after June 30, 2023. 
-The intake forms can be found here: +The registration forms can be found here: - [Custom Neural Voice](https://aka.ms/customneural): Pro features - [Speaker Recognition](https://aka.ms/azure-speaker-recognition): All features The intake forms can be found here: - [Computer Vision](https://aka.ms/facerecognition): Celebrity Recognition feature - [Azure Video Indexer](https://aka.ms/facerecognition): Celebrity Recognition and Face Identify features -### I'm an existing customer who applied for access to Custom Neural Voice or Speaker Recognition, do I have to apply to keep using these services? +### I'm an existing customer who applied for access to Custom Neural Voice or Speaker Recognition, do I have to register to keep using these services? -We're always looking for opportunities to improve our Responsible AI program, and Limited Access is an update to our service gating processes. If you have previously applied for and been granted access to Custom Neural Voice or Speaker Recognition, we request that you submit a new intake form to continue using these services beyond June 30, 2023. +We're always looking for opportunities to improve our Responsible AI program, and Limited Access is an update to our service gating processes. If you've previously applied for and been granted access to Custom Neural Voice or Speaker Recognition, we request that you submit a new registration form to continue using these services beyond June 30, 2023. -If you're an existing customer using Custom Neural Voice or Speaker Recognition on June 21, 2022, you have until June 30, 2023 to submit an intake form with your selected use case and receive approval to continue using these services after June 30, 2023. We recommend allowing 10 days for application processing. Existing customers can continue using the service until June 30, 2023, after which they must be approved for access. The intake forms can be found here: +If you were an existing customer using Custom Neural Voice or Speaker Recognition on June 21, 2022, you have until June 30, 2023 to submit a registration form with your selected use case and receive approval to continue using these services after June 30, 2023. We recommend allowing 10 days for registration processing. Existing customers can continue using the service until June 30, 2023, after which they must be approved for access. The registration forms can be found here: - [Custom Neural Voice](https://aka.ms/customneural): Pro features - [Speaker Recognition](https://aka.ms/azure-speaker-recognition): All features -### What if my use case is not on the intake form? +### What if my use case isn't on the registration form? -Limited Access features are only available for the use cases listed on the intake forms. If your desired use case is not listed, please let us know in this [feedback form](https://aka.ms/CogSvcsLimitedAccessFeedback) so we can improve our service offerings. +Limited Access features are only available for the use cases listed on the registration forms. If your desired use case isn't listed, let us know in this [feedback form](https://aka.ms/CogSvcsLimitedAccessFeedback) so we can improve our service offerings. ### Where can I use Limited Access services? Search [here](https://azure.microsoft.com/global-infrastructure/services/) for a Detailed information about supported regions for Custom Neural Voice and Speaker Recognition operations can be found [here](./speech-service/regions.md). -### What happens to my data if my application is denied? 
+### What happens to my data if my registration is denied? -If you are an existing customer and your application for access is denied, you will no longer be able to use Limited Access features after June 30, 2023. Your data is subject to Microsoft's data retention [policies](https://www.microsoft.com/trust-center/privacy/data-management#:~:text=If%20you%20terminate%20a%20cloud,data%20or%20renew%20your%20subscription.). +If you are an existing customer and your registration for access is denied, you will no longer be able to use Limited Access features after June 30, 2023. Your data is subject to Microsoft's data retention [policies](https://www.microsoft.com/trust-center/privacy/data-management#:~:text=If%20you%20terminate%20a%20cloud,data%20or%20renew%20your%20subscription.). ## Help and support |
cognitive-services | Quickstart | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/entity-linking/quickstart.md | |
cognitive-services | Quickstart | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/language-detection/quickstart.md | |
cognitive-services | Quickstart | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/named-entity-recognition/quickstart.md | |
cognitive-services | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/summarization/overview.md | |
cognitive-services | Quickstart | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/summarization/quickstart.md | If you want to clean up and remove a Cognitive Services subscription, you can de > [!div class="nextstepaction"] > <a href="https://microsoft.qualtrics.com/jfe/form/SV_0Cl5zkG3CnDjq6O?PLanguage=REST API&Pillar=Language&Product=Summarization&Page=quickstart&Section=Clean-up-resources" target="_target">I ran into an issue</a> - ## Next steps * [How to call document summarization](./how-to/document-summarization.md)-* [How to call conversation summarization](./how-to/conversation-summarization.md) +* [How to call conversation summarization](./how-to/conversation-summarization.md) |
cognitive-services | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/policy-reference.md | Title: Built-in policy definitions for Azure Cognitive Services description: Lists Azure Policy built-in policy definitions for Azure Cognitive Services. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/08/2022 Last updated : 08/16/2022 |
communication-services | Capabilities | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/interop/guest/capabilities.md | -The following table shows supported client-side capabilities available in Azure Communication Services SDKs: --| Capability | Supported | -| | | -| Send and receive chat messages | ✔️ | -| Use typing indicators | ✔️ | -| Read receipt | ❌ | -| File sharing | ❌ | -| Reply to chat message | ❌ | -| React to chat message | ❌ | -| Audio and video calling | ✔️ | -| Share screen and see shared screen | ✔️ | -| Manage Teams convenient recording | ❌ | -| Manage Teams transcription | ❌ | -| Receive closed captions | ❌ | -| Add and remove meeting participants | ❌ | -| Raise and lower hand | ❌ | -| See raised and lowered hand | ❌ | -| See and set reactions | ❌ | -| Control Teams third-party applications | ❌ | -| Interact with a poll or Q&A | ❌ | -| Set and unset spotlight | ❌ | -| See PowerPoint Live | ❌ | -| See Whiteboard | ❌ | -| Participation in breakout rooms | ❌ | -| Apply background effects | ❌ | -| See together mode video stream | ❌ | --When Teams external users leave the meeting, or the meeting ends, they can no longer send or receive new chat messages and no longer have access to messages sent and received during the meeting. +The following table shows supported client-side capabilities available in Azure Communication Services SDKs. You can find per platform availability in [voice and video calling capabilities](../../voice-video-calling/calling-sdk-features.md). ++| Category | Capability | Supported | +| | | | +|Chat | Send and receive chat messages | ✔️ | +| | Send and receive Giphy | ❌ | +| | Send messages with high priority | ❌ | +| | Receive messages with high priority | ✔️ | +| | Send and receive Loop components | ❌ | +| | Send and receive Emojis | ❌ | +| | Send and receive Stickers | ❌ | +| | Send and receive Teams messaging extensions | ❌ | +| | Use typing indicators | ✔️ | +| | Read receipt | ❌ | +| | File sharing | ❌ | +| | Reply to chat message | ❌ | +| | React to chat message | ❌ | +|Calling - core | Audio send and receive | ✔️ | +| | Send and receive video | ✔️ | +| | Share screen and see shared screen | ✔️ | +| | Manage Teams convenient recording | ❌ | +| | Manage Teams transcription | ❌ | +| | Manage breakout rooms | ❌ | +| | Participation in breakout rooms | ❌ | +| | Leave meeting | ✔️ | +| | End meeting | ❌ | +| | Change meeting options | ❌ | +| | Lock meeting | ❌ | +| Calling - participants| See roster | ✔️ | +| | Add and remove meeting participants | ❌ | +| | Dial out to phone number | ❌ | +| | Disable mic or camera of others | ❌ | +| | Make a participant an attendee or presenter | ❌ | +| | Admit or reject participants in the lobby | ❌ | +| Calling - engagement | Raise and lower hand | ❌ | +| | See raised and lowered hand | ❌ | +| | See and set reactions | ❌ | +| Calling - video streams | Send and receive video | ✔️ | +| | See together mode video stream | ❌ | +| | See Large gallery view | ❌ | +| | See Video stream from Teams media bot | ❌ | +| | See adjusted content from Camera | ❌ | +| | Set and unset spotlight | ❌ | +| | Apply background effects | ❌ | +| Calling - integrations | Control Teams third-party applications | ❌ | +| | See PowerPoint Live stream | ❌ | +| | See Whiteboard stream | ❌ | +| | Interact with a poll | ❌ | +| | Interact with a Q&A | ❌ | +| | Interact with a OneNote | ❌ | +| | Manage SpeakerCoach | ❌ | +| Accessibility | Receive closed captions | ❌ 
| +| | Communication access real-time translation (CART) | ❌ | +| | Language interpretation | ❌ | ++When Teams external users leave the meeting, or the meeting ends, they can no longer send or receive new chat messages and no longer have access to messages sent and received during the meeting. ## Server capabilities The following table shows supported Teams capabilities: | | | | [Teams Call Analytics](/MicrosoftTeams/use-call-analytics-to-troubleshoot-poor-call-quality) | ✔️ | | [Teams real-time Analytics](/microsoftteams/use-real-time-telemetry-to-troubleshoot-poor-meeting-quality) | ❌ |-+| [Teams meeting attendance report](/office/view-and-download-meeting-attendance-reports-in-teams-ae7cf170-530c-47d3-84c1-3aedac74d310) | ✔️ | ## Next steps The following table shows supported Teams capabilities: - [Join Teams meeting chat as Teams external user](../../../quickstarts/chat/meeting-interop.md) - [Join meeting options](../../../how-tos/calling-sdk/teams-interoperability.md) - [Communicate as Teams user](../../teams-endpoint.md).- |
communication-services | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/interop/guest/overview.md | You can create an identity and access token for Teams external users on Azure po With a valid identity, access token, and Teams meeting URL, you can use [Azure Communication Services UI Library](https://azure.github.io/communication-ui-library/?path=/story/composites-call-with-chat-jointeamsmeeting--join-teams-meeting) to join Teams meeting without any code. +>[!VIDEO https://www.youtube.com/embed/chMHVHLFcao] + ### Single-click deployment The [Azure Communication Services Calling Hero Sample](../../../samples/calling-hero-sample.md) demonstrates how developers can use Azure Communication Services Calling Web SDK to join a Teams meeting from a web application as a Teams external user. You can experiment with the capability with single-click deployment to Azure. |
communication-services | Teams Administration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/interop/guest/teams-administration.md | -# Teams administrator controls +# Teams controls +Teams administrators control organization-wide policies and manage and assign user policies. Teams meeting policies are tied to the organizer of the Teams meeting. Teams meetings also have options to customize specific Teams meetings further. ++## Teams policies Teams administrators have the following policies to control the experience for Teams external users in Teams meetings. |Setting name|Policy scope|Description| Supported | Teams administrators have the following policies to control the experience for T Your custom application should consider user authentication and other security measures to protect Teams meetings. Be mindful of the security implications of enabling anonymous users to join meetings. Use the [Teams security guide](/microsoftteams/teams-security-guide#addressing-threats-to-teams-meetings) to configure capabilities available to anonymous users. +## Teams meeting options ++Teams meeting organizers can also configure the Teams meeting options to adjust the experience for participants. The following options are supported in Azure Communication Services for external users: ++|Option name|Description| Supported | +| | | | +| [Automatically admit people](/microsoftteams/meeting-policies-participants-and-guests#automatically-admit-people) | If set to "Everyone", Teams external users can bypass lobby. Otherwise, Teams external users have to wait in the lobby until an authenticated user admits them.| ✔️ | +| [Always let callers bypass the lobby](/microsoftteams/meeting-policies-participants-and-guests#allow-dial-in-users-to-bypass-the-lobby)| Participants joining through phone can bypass lobby | Not applicable | +| Announce when callers join or leave| Participants hear announcement sounds when phone participants join and leave the meeting | ✔️ | +| [Choose co-organizers](/office/add-co-organizers-to-a-meeting-in-teams-0de2c31c-8207-47ff-ae2a-fc1792d466e2)| Not applicable to external users | ✔️ | +| [Who can present in meetings](/microsoftteams/meeting-policies-in-teams-general#designated-presenter-role-mode) | Controls who in the Teams meeting can share screen. | ❌ | +|[Manage what attendees see](/office/spotlight-someone-s-video-in-a-teams-meeting-58be74a4-efac-4e89-a212-8d198182081e)|Teams organizer, co-organizer and presenter can spotlight videos for everyone. Azure Communication Services does not receive the spotlight signals. |❌| +|[Allow mic for attendees](/office/manage-attendee-audio-and-video-permissions-in-teams-meetings-f9db15e1-f46f-46da-95c6-34f9f39e671a)|If external user is attendee, then this option controls whether external user can send local audio |✔️| +|[Allow camera for attendees](/office/manage-attendee-audio-and-video-permissions-in-teams-meetings-f9db15e1-f46f-46da-95c6-34f9f39e671a)|If external user is attendee, then this option controls whether external user can send local video |✔️| +|[Record automatically](/graph/api/resources/onlinemeeting)|Records meeting when anyone starts the meeting. 
The user in the lobby does not start a recording.|✔️| +|Allow meeting chat|If enabled, external users can use the chat associated with the Teams meeting.|✔️| +|[Allow reactions](/microsoftteams/meeting-policies-in-teams-general#meeting-reactions)|If enabled, external users can use reactions in the Teams meeting |❌| +|[RTMP-IN](/microsoftteams/stream-teams-meetings)|If enabled, organizers can stream meetings and webinars to external endpoints by providing a Real-Time Messaging Protocol (RTMP) URL and key to the built-in Custom Streaming app in Teams. |Not applicable| +|[Provide CART Captions](/office/use-cart-captions-in-a-microsoft-teams-meeting-human-generated-captions-2dd889e8-32a8-4582-98b8-6c96cf14eb47)|Communication access real-time translation (CART) is a service in which a trained CART captioner listens to the speech and instantaneously translates all speech to text. As a meeting organizer, you can set up and offer CART captioning to your audience instead of the Microsoft Teams built-in live captions that are automatically generated.|❌| ++ ## Next steps - [Authenticate as Teams external user](../../../quickstarts/access-tokens.md) |
communication-services | Teams Client Experience | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/interop/guest/teams-client-experience.md | + + ## Next steps - [Authenticate as Teams external user](../../../quickstarts/access-tokens.md) |
communication-services | Teams User Calling | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/interop/teams-user-calling.md | Title: Azure Communication Services Teams identity overview description: Provides an overview of the support for Teams identity in Azure Communication Services Calling SDK. -+ Key features of the Calling SDK: - **Addressing** - Azure Communication Services is using [Azure Active Directory user identifier](/powershell/module/azuread/get-azureaduser) to address communication endpoints. Clients use Azure Active Directory identities to authenticate to the service and communicate with each other. These identities are used in Calling APIs that provide clients visibility into who is connected to a call (the roster). And are also used in [Microsoft Graph API](/graph/api/user-get). - **Encryption** - The Calling SDK encrypts traffic and prevents tampering on the wire. - **Device Management and Media** - The Calling SDK provides facilities for binding to audio and video devices, encodes content for efficient transmission over the communications data plane, and renders content to output devices and views that you specify. APIs are also provided for screen and application sharing.-- **PSTN** - The Calling SDK can receive and initiate voice calls with the traditional publicly switched telephony system, [using phone numbers you acquire in the Teams Admin Portal](/microsoftteams/pstn-connectivity).+- **PSTN** - The Calling SDK can receive and initiate voice calls with the traditional publicly switched telephony system [using phone numbers you acquire in the Teams Admin Portal](/microsoftteams/pstn-connectivity). - **Teams Meetings** - The Calling SDK can [join Teams meetings](../../quickstarts/voice-video-calling/get-started-teams-interop.md) and interact with the Teams voice and video data plane. -- **Notifications** - The Calling SDK provides APIs allowing clients to be notified of an incoming call. In situations where your app is not running in the foreground, patterns are available to [fire pop-up notifications](../notifications.md) ("toasts") to inform users of an incoming call. +- **Notifications** - The Calling SDK provides APIs that allow clients to be notified of an incoming call. In situations where your app is not running in the foreground, patterns are available to [fire pop-up notifications](../notifications.md) ("toasts") to inform users of an incoming call. ## Detailed Azure Communication Services capabilities -The following list presents the set of features, which are currently available in the Azure Communication Services Calling SDK for JavaScript. +The following list presents the set of features that are currently available in the Azure Communication Services Calling SDK for JavaScript. 
| Group of features | Capability | JavaScript | | -- | - | - | The following list presents the set of features, which are currently available i | | Show state of a call<br/>*Early Media, Incoming, Connecting, Ringing, Connected, Hold, Disconnecting, Disconnected* | ✔️ | | | Show if a participant is muted | ✔️ | | | Show the reason why a participant left a call | ✔️ |-| | Admit participant in the Lobby into the Teams meeting | ❌ | +| | Admit participant in the lobby into the Teams meeting | ❌ | | Screen sharing | Share the entire screen from within the application | ✔️ | | | Share a specific application (from the list of running applications) | ✔️ | | | Share a web browser tab from the list of open tabs | ✔️ | The following list presents the set of features, which are currently available i | | Place a group call with PSTN participants | ✔️ | | | Promote a one-to-one call with a PSTN participant into a group call | ✔️ | | | Dial-out from a group call as a PSTN participant | ✔️ |-| | Suppport for early media | ❌ | +| | Support for early media | ❌ | | General | Test your mic, speaker, and camera with an audio testing service (available by calling 8:echo123) | ✔️ | | Device Management | Ask for permission to use audio and/or video | ✔️ | | | Get camera list | ✔️ | The following list presents the set of Teams capabilities, which are currently a | | Transfer a call to a call | ✔️ | | | Transfer a call to Voicemail | ❌ | | | Merge ongoing calls | ❌ |-| | Place a call on behalf of user | ❌ | +| | Place a call on behalf of the user | ❌ | | | Start call recording | ❌ | | | Start call transcription | ❌ | | | Start live captions | ❌ | | | Receive information of call being recorded | ✔️ |-| PSTN | Make an Emergency call | ❌ | +| PSTN | Make an Emergency call | ✔️ | | | Place a call honors location-based routing | ❌ | | | Support for survivable branch appliance | ❌ | | Phone system | Receive a call from Teams auto attendant | ✔️ | The following list presents the set of Teams capabilities, which are currently a | | Transfer a call from Teams call queue (only conference mode) | ✔️ | | Compliance | Place a call honors information barriers | ✔️ | | | Support for compliance recording | ✔️ |+| Meeting | [Include participant in Teams meeting attendance report](/office/view-and-download-meeting-attendance-reports-in-teams-ae7cf170-530c-47d3-84c1-3aedac74d310) | ❌ | +++## Teams meeting options ++Teams meeting organizers can configure the Teams meeting options to adjust the experience for participants. The following options are supported in Azure Communication Services for Teams users: ++|Option name|Description| Supported | +| | | | +| [Automatically admit people](/microsoftteams/meeting-policies-participants-and-guests#automatically-admit-people) | Teams user can bypass the lobby, if Teams meeting organizer set value to include "people in my organization" for single tenant meetings and "people in trusted organizations" for cross-tenant meetings. 
Otherwise, Teams users have to wait in the lobby until an authenticated user admits them.| ✔️ | +| [Always let callers bypass the lobby](/microsoftteams/meeting-policies-participants-and-guests#allow-dial-in-users-to-bypass-the-lobby)| Participants joining through phone can bypass lobby | Not applicable | +| Announce when callers join or leave| Participants hear announcement sounds when phone participants join and leave the meeting | ✔️ | +| [Choose co-organizers](/office/add-co-organizers-to-a-meeting-in-teams-0de2c31c-8207-47ff-ae2a-fc1792d466e2)| Teams user can be selected as co-organizer. It affects the availability of actions in Teams meetings. | ✔️ | +| [Who can present in meetings](/microsoftteams/meeting-policies-in-teams-general#designated-presenter-role-mode) | Controls who in the Teams meeting can share screen. | ❌ | +|[Manage what attendees see](/office/spotlight-someone-s-video-in-a-teams-meeting-58be74a4-efac-4e89-a212-8d198182081e)|Teams organizer, co-organizer and presenter can spotlight videos for everyone. Azure Communication Services does not receive the spotlight signals. |❌| +|[Allow mic for attendees](/office/manage-attendee-audio-and-video-permissions-in-teams-meetings-f9db15e1-f46f-46da-95c6-34f9f39e671a)|If Teams user is attendee, then this option controls whether Teams user can send local audio |✔️| +|[Allow camera for attendees](/office/manage-attendee-audio-and-video-permissions-in-teams-meetings-f9db15e1-f46f-46da-95c6-34f9f39e671a)|If Teams user is attendee, then this option controls whether Teams user can send local video |✔️| +|[Record automatically](/graph/api/resources/onlinemeeting)|Records meeting when anyone starts the meeting. The user in the lobby does not start a recording.|✔️| +|Allow meeting chat|If enabled, Teams users can use the chat associated with the Teams meeting.|✔️| +|[Allow reactions](/microsoftteams/meeting-policies-in-teams-general#meeting-reactions)|If enabled, Teams users can use reactions in the Teams meeting. Azure Communication Services doesn't support reactions. |❌| +|[RTMP-IN](/microsoftteams/stream-teams-meetings)|If enabled, organizers can stream meetings and webinars to external endpoints by providing a Real-Time Messaging Protocol (RTMP) URL and key to the built-in Custom Streaming app in Teams. |Not applicable| +|[Provide CART Captions](/office/use-cart-captions-in-a-microsoft-teams-meeting-human-generated-captions-2dd889e8-32a8-4582-98b8-6c96cf14eb47)|Communication access real-time translation (CART) is a service in which a trained CART captioner listens to the speech and instantaneously translates all speech to text. As a meeting organizer, you can set up and offer CART captioning to your audience instead of the Microsoft Teams built-in live captions that are automatically generated.|❌| + ## Next steps |
communication-services | Teams Client Experience | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/interop/teams-user/teams-client-experience.md | + + Title: Teams client experience for Teams user ++description: Teams client experience of Azure Communication Services support for Teams users ++ Last updated : 7/9/2022++++++# Experience for users in Teams client interacting with Teams users +Teams users calling users in the same organization or joining Teams meetings organized in the same organization will be represented in Teams client as any other Teams user. Teams users calling users in trusted organizations or joining Teams meetings organized in trusted organizations will be represented in Teams clients as Teams users from different organizations. Teams users from the other organizations will be marked as "external" in the participant's lists as Teams clients. As Teams users from a trusted organization, their capabilities in the Teams meetings will be limited regardless of the assigned Teams meeting role. ++## Joining meetings within the organization +The following image illustrates the experience of a Teams user using Teams client interacting with another Teams user from the same organization using Azure Communication Services SDK who joined Teams meeting. + ++## Joining meetings outside of the organization +The following image illustrates the experience of a Teams user using Teams client interacting with another Teams user from a different organization using Azure Communication Services SDK who joined Teams meeting. + ++## Next steps ++> [!div class="nextstepaction"] +> [Get started with calling](../../../quickstarts/voice-video-calling/get-started-with-voice-video-calling-custom-teams-client.md) |
connectors | Connectors Create Api Crmonline | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-create-api-crmonline.md | Title: Connect to Dynamics 365 -description: Create and manage Dynamics 365 in workflows using Azure Logic Apps. + Title: Connect to Dynamics 365 (Deprecated) +description: Connect to your Dynamics 365 database from workflows in Azure Logic Apps. ms.suite: integration Last updated 08/05/2022 tags: connectors -# Connect to Dynamics 365 from workflows in Azure Logic Apps +# Connect to Dynamics 365 from workflows in Azure Logic Apps (Deprecated) > [!IMPORTANT] > The Dynamics 365 connector is officially deprecated and is no longer available. Instead, use the |
connectors | Connectors Create Api Ftp | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-create-api-ftp.md | Title: Connect to FTP servers -description: Connect to an FTP server from workflows in Azure Logic Apps. +description: Connect to your FTP server from workflows in Azure Logic Apps. ms.suite: integration Previously updated : 07/24/2022 Last updated : 08/15/2022 tags: connectors # Connect to an FTP server from workflows in Azure Logic Apps -This article shows how to access your FTP server from a workflow in Azure Logic Apps with the FTP connector. You can then create automated workflows that run when triggered by events in your FTP server or in other systems and run actions to manage files on your FTP server. +This article shows how to access your File Transfer Protocol (FTP) server from a workflow in Azure Logic Apps with the FTP connector. You can then create automated workflows that run when triggered by events in your FTP server or in other systems and run actions to manage files on your FTP server. For example, your workflow can start with an FTP trigger that monitors and responds to events on your FTP server. The trigger makes the outputs available to subsequent actions in your workflow. Your workflow can run FTP actions that create, send, receive, and manage files through your FTP server account using the following specific tasks: The FTP connector has different versions, based on [logic app type and host envi By default, FTP actions can read or write files that are *200 MB or smaller*. Currently, the FTP built-in connector doesn't support chunking. - * Managed connector for Consumption and Standard workflows + * Managed or Azure-hosted connector for Consumption and Standard workflows By default, FTP actions can read or write files that are *50 MB or smaller*. To handle files larger than 50 MB, FTP actions support [message chunking](../logic-apps/logic-apps-handle-large-messages.md). The **Get file content** action implicitly uses chunking. -* FTP managed connector triggers might experience missing, incomplete, or delayed results when the "last modified" timestamp is preserved. On the other hand, the FTP *built-in* connector trigger in Standard logic app workflows doesn't have this limitation. For more information, review the FTP connector's [Limitations](/connectors/ftp/#limitations) section. +* Triggers for the FTP managed or Azure-hosted connector might experience missing, incomplete, or delayed results when the "last modified" timestamp is preserved. On the other hand, the FTP *built-in* connector trigger in Standard logic app workflows doesn't have this limitation. For more information, review the FTP connector's [Limitations](/connectors/ftp/#limitations) section. ++* The FTP managed or Azure-hosted connector can create a limited number of connections to the FTP server, based on the connection capacity in the Azure region where your logic app resource exists. If this limit poses a problem in a Consumption logic app workflow, consider creating a Standard logic app workflow and use the FTP built-in connector instead. ## Prerequisites |
connectors | Connectors Create Api Sftp | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-create-api-sftp.md | Title: Connect to SFTP account (Deprecated) -description: Automate tasks and processes that monitor, create, manage, send, and receive files for an SFTP server using Azure Logic Apps. + Title: Connect to SFTP (Deprecated) +description: Connect to an SFTP server from workflows in Azure Logic Apps. ms.suite: integration tags: connectors -# Monitor, create, and manage SFTP files in Azure Logic Apps +# Connect to SFTP from workflows in Azure Logic Apps (Deprecated) > [!IMPORTANT] > Please use the [SFTP-SSH connector](../connectors/connectors-sftp-ssh.md) as the SFTP connector is deprecated. You can no longer select SFTP To automate tasks that monitor, create, send, and receive files on a [Secure Fil You can use triggers that monitor events on your SFTP server and make output available to other actions. You can use actions that perform various tasks on your SFTP server. You can also have other actions in your logic app use the output from SFTP actions. For example, if you regularly retrieve files from your SFTP server, you can send email alerts about those files and their content by using the Office 365 Outlook connector or Outlook.com connector. If you're new to logic apps, review [What is Azure Logic Apps?](../logic-apps/logic-apps-overview.md) -## Limits +## Limitations The SFTP connector handles only files that are *50 MB or smaller* and doesn't support [message chunking](../logic-apps/logic-apps-handle-large-messages.md). For larger files, use the [SFTP-SSH connector](../connectors/connectors-sftp-ssh.md). For differences between the SFTP connector and the SFTP-SSH connector, review [Compare SFTP-SSH versus SFTP](../connectors/connectors-sftp-ssh.md#comparison) in the SFTP-SSH article. ++ * The SFTP-SSH managed or Azure-hosted connector for Consumption and Standard workflows handles only files that are *50 MB or smaller* and doesn't support [message chunking](../logic-apps/logic-apps-handle-large-messages.md). For larger files, use the [SFTP-SSH connector](../connectors/connectors-sftp-ssh.md). For differences between the SFTP connector and the SFTP-SSH connector, review [Compare SFTP-SSH versus SFTP](../connectors/connectors-sftp-ssh.md#comparison) in the SFTP-SSH article. ++ By default, FTP actions can read or write files that are *50 MB or smaller*. To handle files larger than 50 MB, FTP actions support [message chunking](../logic-apps/logic-apps-handle-large-messages.md). The **Get file content** action implicitly uses chunking. ++* The SFTP-SSH managed or Azure-hosted connector can create a limited number of connections to the SFTP server, based on the connection capacity in the Azure region where your logic app resource exists. If this limit poses a problem in a Consumption logic app workflow, consider creating a Standard logic app workflow and use the SFTP-SSH built-in connector instead. + ## Prerequisites * An Azure account and subscription. If you don't have an Azure subscription, [sign up for a free Azure account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). |
connectors | Connectors Create Api Twilio | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-create-api-twilio.md | - Title: Connect to Twilio with Azure Logic Apps -description: Automate tasks and workflows that manage global SMS, MMS, and IP messages through your Twilio account using Azure Logic Apps. --- Previously updated : 08/25/2018-tags: connectors ---# Connect to Twilio from Azure Logic Apps --With Azure Logic Apps and the Twilio connector, -you can create automated tasks and workflows -that get, send, and list messages in Twilio, -which include global SMS, MMS, and IP messages. -You can use these actions to perform tasks with -your Twilio account. You can also have other actions -use the output from Twilio actions. For example, -when a new message arrives, you can send the message -content with the Slack connector. If you're new to logic apps, -review [What is Azure Logic Apps?](../logic-apps/logic-apps-overview.md) --## Prerequisites --* An Azure account and subscription. If you don't have an Azure subscription, -[sign up for a free Azure account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). --* From [Twilio](https://www.twilio.com/): -- * Your Twilio account ID and - [authentication token](https://support.twilio.com/hc/en-us/articles/223136027-Auth-Tokens-and-How-to-Change-Them), - which you can find on your Twilio dashboard -- Your credentials authorize your logic app to create a - connection and access your Twilio account from your logic app. - If you're using a Twilio trial account, - you can send SMS only to *verified* phone numbers. -- * A verified Twilio phone number that can send SMS -- * A verified Twilio phone number that can receive SMS --* Basic knowledge about -[how to create logic apps](../logic-apps/quickstart-create-first-logic-app-workflow.md) --* The logic app where you want to access your Twilio account. -To use a Twilio action, start your logic app with another trigger, -for example, the **Recurrence** trigger. --## Connect to Twilio ---1. Sign in to the [Azure portal](https://portal.azure.com), -and open your logic app in Logic App Designer, if not open already. --1. Choose a path: -- * Under the last step where you want to add an action, - choose **New step**. -- -or- -- * Between the steps where you want to add an action, - move your pointer over the arrow between steps. - Choose the plus sign (**+**) that appears, - and then select **Add an action**. - - In the search box, enter "twilio" as your filter. - Under the actions list, select the action you want. --1. Provide the necessary details for your connection, -and then choose **Create**: -- * The name to use for your connection - * Your Twilio account ID - * Your Twilio access (authentication) token --1. Provide the necessary details for your selected action -and continue building your logic app's workflow. --## Connector reference --For technical details about triggers, actions, and limits, which are -described by the connector's OpenAPI (formerly Swagger) description, -review the connector's [reference page](/connectors/twilio/). --## Get support --* For questions, visit the [Microsoft Q&A question page for Azure Logic Apps](/answers/topics/azure-logic-apps.html). -* To submit or vote on feature ideas, visit the [Logic Apps user feedback site](https://aka.ms/logicapps-wish). --## Next steps --* Learn about other [Logic Apps connectors](../connectors/apis-list.md) |
connectors | Connectors Schema Migration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-schema-migration.md | - Title: Migrate apps to latest schema -description: How to migrate logic app workflow JSON definitions to the most recent Workflow Definition Language schema version --- Previously updated : 08/25/2018---# Migrate logic apps to latest schema version --To move your existing logic apps to the newest schema, -follow these steps: --1. In the [Azure portal](https://portal.azure.com), -open your logic app in the Logic App Designer. --2. On your logic app's menu, choose **Overview**. -On the toolbar, choose **Update Schema**. -- > [!NOTE] - > When you choose **Update Schema**, Azure Logic Apps - > automatically runs the migration steps and provides - > the code output for you. You can use this output for - > updating your logic app definition. However, make - > sure you follow best practices as described in the - > following **Best practices** section. --  -- The Update Schema page appears and shows - a link to a document that describes the - improvements in the new schema. --## Best practices --Here are some best practices for migrating your -logic apps to the latest schema version: --* Copy the migrated script to a new logic app. -Don't overwrite the old version until you complete -your testing and confirm that your migrated app works as expected. --* Test your logic app **before** putting in production. --* After you finish migration, start updating your logic -apps to use the [managed APIs](../connectors/apis-list.md) -where possible. For example, start using Dropbox v2 -everywhere that you use DropBox v1. --## Next steps --* Learn how to [manually migrate your Logic apps](../logic-apps/logic-apps-schema-2016-04-01.md) - |
connectors | Connectors Sftp Ssh | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-sftp-ssh.md | Title: Connect to SFTP server with SSH -description: Automate tasks that monitor, create, manage, send, and receive files for an SFTP server by using SSH and Azure Logic Apps. + Title: Connect to SFTP using SSH from workflows +description: Connect to your SFTP file server over SSH from workflows in Azure Logic Apps. ms.suite: integration Previously updated : 05/06/2022 Last updated : 08/16/2022 tags: connectors -# Create and manage SFTP files using SSH and Azure Logic Apps +# Connect to an SFTP file server using SSH from workflows in Azure Logic Apps To automate tasks that create and manage files on a [Secure File Transfer Protocol (SFTP)](https://www.ssh.com/ssh/sftp/) server using the [Secure Shell (SSH)](https://www.ssh.com/ssh/protocol/) protocol, you can create automated integration workflows by using Azure Logic Apps and the SFTP-SSH connector. SFTP is a network protocol that provides file access, file transfer, and file management over any reliable data stream. In your workflow, you can use a trigger that monitors events on your SFTP server For differences between the SFTP-SSH connector and the SFTP connector, review the [Compare SFTP-SSH versus SFTP](#comparison) section later in this topic. -## Limits +## Limitations * The SFTP-SSH connector currently doesn't support these SFTP servers: For differences between the SFTP-SSH connector and the SFTP connector, review th 1. Follow the trigger with the SFTP-SSH **Get file content** action. This action reads the complete file and implicitly uses message chunking. +* The SFTP-SSH managed or Azure-hosted connector can create a limited number of connections to the SFTP server, based on the connection capacity in the Azure region where your logic app resource exists. If this limit poses a problem in a Consumption logic app workflow, consider creating a Standard logic app workflow and use the SFTP-SSH built-in connector instead. + <a name="comparison"></a> ## Compare SFTP-SSH versus SFTP |
container-apps | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/policy-reference.md | Title: Built-in policy definitions for Azure Container Apps description: Lists Azure Policy built-in policy definitions for Azure Container Apps. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/08/2022 Last updated : 08/16/2022 |
container-instances | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/policy-reference.md | |
container-registry | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/policy-reference.md | Title: Built-in policy definitions for Azure Container Registry description: Lists Azure Policy built-in policy definitions for Azure Container Registry. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/08/2022 Last updated : 08/16/2022 |
cosmos-db | Hierarchical Partition Keys | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/hierarchical-partition-keys.md | In a real world scenario, some tenants can grow large with thousands of users, w Using a synthetic partition key that combines **TenantId** and **UserId** adds complexity to the application. Additionally, the synthetic partition key queries for a tenant will still be cross-partition, unless all users are known and specified in advance. -With hierarchical partition keys, we can partition first on **TenantId**, and then **UserId**. We can even partition further down to another level, such as **SessionId**, as long as the overall depth doesn't exceed three levels. When a physical partition exceeds 50 GB of storage, Cosmos DB will automatically split the physical partition so that roughly half of the data on the will be on one physical partition, and half on the other. Effectively, subpartitioning means that a single TenantId can exceed 20 GB of data, and it's possible for a TenantId's data to span multiple physical partitions. +With hierarchical partition keys, we can partition first on **TenantId**, and then **UserId**. We can even partition further down to another level, such as **SessionId**, as long as the overall depth doesn't exceed three levels. When a physical partition exceeds 50 GB of storage, Cosmos DB will automatically split the physical partition so that roughly half of the data will be on one physical partition, and half on the other. Effectively, subpartitioning means that a single TenantId can exceed 20 GB of data, and it's possible for a TenantId's data to span multiple physical partitions. Queries that specify either the **TenantId**, or both **TenantId** and **UserId** will be efficiently routed to only the subset of physical partitions that contain the relevant data. Specifying the full or prefix subpartitioned partition key path effectively avoids a full fan-out query. For example, if the container had 1000 physical partitions, but a particular **TenantId** was only on five of them, the query would only be routed to the much smaller number of relevant physical partitions. |
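To make the hierarchy concrete, the following is a minimal PowerShell sketch of creating such a container, assuming a recent Az.CosmosDB module version that supports the **MultiHash** partition-key kind used for hierarchical partition keys; the resource group, account, database, and container names are placeholders.

```powershell
# Minimal sketch: create a container with a three-level hierarchical partition key
# (/TenantId -> /UserId -> /SessionId). Assumes a recent Az.CosmosDB module that
# supports the MultiHash partition-key kind; all resource names below are placeholders.
$containerParams = @{
    ResourceGroupName   = "my-resource-group"
    AccountName         = "my-cosmos-account"
    DatabaseName        = "appdb"
    Name                = "usersessions"
    PartitionKeyKind    = "MultiHash"                               # hierarchical (subpartitioned) keys
    PartitionKeyPath    = @("/TenantId", "/UserId", "/SessionId")   # up to three levels
    PartitionKeyVersion = 2
}
New-AzCosmosDBSqlContainer @containerParams
```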
cosmos-db | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/policy-reference.md | Title: Built-in policy definitions for Azure Cosmos DB description: Lists Azure Policy built-in policy definitions for Azure Cosmos DB. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/08/2022 Last updated : 08/16/2022 |
cost-management-billing | Manage Billing Across Tenants | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/manage-billing-across-tenants.md | tags: billing Previously updated : 08/04/2022 Last updated : 08/15/2022 If the Provisioning access setting is turned on, a unique link is created for yo Before assigning roles, make sure you [add a tenant as an associated billing tenant and enable billing management access setting](#add-an-associated-billing-tenant). +> [!IMPORTANT] +> Any user with a role in the billing account can see all users from all tenants who have access to that billing account. For example, Contoso.com is the primary billing tenant. A billing account owner adds Fabrikam.com as an associated billing tenant. Then, the billing account owner adds User1 as a billing account owner. As a result, User1 can see all users who have access to the billing account on both Contoso.com and Fabrikam.com. + ### To assign roles and send an email invitation 1. Sign in to the [Azure portal](https://portal.azure.com). |
cost-management-billing | Calculate Ea Reservations Savings | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/reservations/calculate-ea-reservations-savings.md | + + Title: Calculate EA reservations cost savings ++description: Learn how Enterprise Agreement users manually calculate their reservations savings. +++++ Last updated : 08/15/2022++++# Calculate EA reservations cost savings ++This article helps Enterprise Agreement users manually calculate their reservations savings. In this article, you download your amortized usage and charges file, prepare an Excel worksheet, and then do some calculations to determine your savings. There are several steps involved and we'll walk you through the process. ++> [!NOTE] +> The prices shown in this article are for example purposes only. ++Although the example process shown in this article uses Excel, you can use the spreadsheet application of your choice. ++This article is specific to EA users. Microsoft Customer Agreement (MCA) users can use similar steps to calculate their reservation savings through invoices. However, the MCA amortized usage file doesn't contain UnitPrice (on-demand pricing) for reservations. Other resources in the file do. For more information, see [Download usage for your Microsoft Customer Agreement](understand-reserved-instance-usage-ea.md#download-usage-for-your-microsoft-customer-agreement). ++## Required permissions ++To view and download usage data as an EA customer, you must be an Enterprise Administrator, Account Owner, or Department Admin with the view charges policy enabled. ++## Download all usage amortized charges ++1. Sign in to the [Azure portal](https://portal.azure.com/). +2. Search for _Cost Management + Billing_. + :::image type="content" source="./media/calculate-ea-reservations-savings/search-cost-management.png" alt-text="Screenshot showing search for cost management." lightbox="./media/calculate-ea-reservations-savings/search-cost-management.png" ::: +3. If you have access to multiple billing accounts, select the billing scope for your EA billing account. +4. Select **Usage + charges**. +5. For the month you want to download, select **Download**. + :::image type="content" source="./media/calculate-ea-reservations-savings/download-usage-ea.png" alt-text="Screenshot showing Usage + charges download." lightbox="./media/calculate-ea-reservations-savings/download-usage-ea.png" ::: +6. On the Download Usage + Charges page, under Usage Details, select **Amortized charges (usage and purchases)**. + :::image type="content" source="./media/calculate-ea-reservations-savings/select-usage-detail-charge-type-small.png" alt-text="Screenshot showing the Download usage + charges window." lightbox="./media/calculate-ea-reservations-savings/select-usage-detail-charge-type.png" ::: +7. Select **Prepare document**. +8. It could take a while for Azure to prepare your download, depending on your monthly usage. When it's ready for download, select **Download csv**. ++## Prepare data and calculate savings ++Because Azure usage files are in CSV format, you need to prepare the data for use in Excel. Then you calculate your savings. ++1. Open the amortized cost file in Excel and save it as an Excel workbook. +2. The data resembles the following example. + :::image type="content" source="./media/calculate-ea-reservations-savings/unformatted-data.png" alt-text="Example screenshot of the unformatted amortized usage file." 
lightbox="./media/calculate-ea-reservations-savings/unformatted-data.png" ::: +3. In the Home ribbon, select **Format as Table**. +4. In the Create Table window, select **My table has headers**. +5. In the ReservationName column, set a filter to clear **Blanks**. + :::image type="content" source="./media/calculate-ea-reservations-savings/reservation-name-clear-blanks-small.png" alt-text="Screenshot showing clear Blanks data." lightbox="./media/calculate-ea-reservations-savings/reservation-name-clear-blanks.png" ::: +6. Find the ChargeType column and then to the right of the column name, select the sort and filter symbol (the down arrow). +7. For the **ChargeType** column, set a filter on it to select only **Usage**. Clear any other selections. + :::image type="content" source="./media/calculate-ea-reservations-savings/charge-type-selection-small.png" alt-text="Screenshot showing ChargeType selection." lightbox="./media/calculate-ea-reservations-savings/charge-type-selection.png" ::: +8. To the right of **UnitPrice** , insert add a column and label it with a title like **TotalUsedSavings**. +9. In the first cell under TotalUsedSavings, create a formula that calculates (_UnitPrice ΓÇô EffectivePrice) \* Quantity_. + :::image type="content" source="./media/calculate-ea-reservations-savings/total-used-savings-formula.png" alt-text="Screenshot showing the TotalUsedSavings formula." lightbox="./media/calculate-ea-reservations-savings/total-used-savings-formula.png" ::: +10. Copy the formula to all the other empty TotalUsedSavings cells. +11. At the bottom of the TotalUsedSavings column, sum the column's values. + :::image type="content" source="./media/calculate-ea-reservations-savings/total-used-savings-summed.png" alt-text="Screenshot showing the summed values." lightbox="./media/calculate-ea-reservations-savings/total-used-savings-summed.png" ::: +12. Somewhere under your data, create a cell named _TotalUsedSavingsValue_. Next to it, copy the TotalUsed cell and paste it as **Values**. This step is important because the next step will change the applied filter and affect the summed total. + :::image type="content" source="./media/calculate-ea-reservations-savings/paste-value-used.png" alt-text="Screenshot showing pasting the TotalUsedSavings cell as Values." lightbox="./media/calculate-ea-reservations-savings/paste-value-used.png" ::: +13. For the **ChargeType** column, set a filter on it to select only **UnusedReservation**. Clear any other selections. +14. To the right of the TotalUsedSavings column, insert a column and label it with a title like **TotalUnused**. +15. In the first cell under TotalUnused, create a formula that calculates _EffectivePrice \* Quantity_. + :::image type="content" source="./media/calculate-ea-reservations-savings/total-unused-formula.png" alt-text="Screenshot showing the TotalUnused formula." lightbox="./media/calculate-ea-reservations-savings/total-unused-formula.png" ::: +16. At the bottom of the TotalUnused column, sum the column's values. +17. Somewhere under your data, create a cell named _TotalUnusedValue_. Next to it, copy the TotalUnused cell and paste it as **Values**. +18. Under the TotalUsedSavingsValue and TotalUnusedValue cells, create a cell named _ReservationSavings_. Next to it, subtract TotalUnusedValue from TotalUsedSavingsValue. The calculation result is your reservation savings. 
+ :::image type="content" source="./media/calculate-ea-reservations-savings/reservation-savings.png" alt-text="Screenshot showing the ReservationSavings calculation and final savings." lightbox="./media/calculate-ea-reservations-savings/reservation-savings.png" ::: ++If you see a negative savings value, then you likely have many unused reservations. You should review your reservation usage to maximize them. For more information, see [Optimize reservation use](manage-reserved-vm-instance.md#optimize-reservation-use). ++## Other ways to get data and see savings ++Using the preceding steps, you can repeat the process for any number of months. Doing so allows you to see your savings over a longer period. ++Instead of manually calculating your savings, you can see the same savings by viewing the RI savings report in the [Cost Management Power BI App for Enterprise Agreements](../costs/analyze-cost-data-azure-cost-management-power-bi-template-app.md). The Power BI app automatically connects to your Azure data and performs the savings calculations automatically. The report shows savings for the period you have set, so it can span multiple months. ++Instead of downloading usage files, one per month, you can get all your usage data for a specific date range using exports from Cost Management and output the data to Azure Storage. Doing so allows you to see your savings over a longer period. For more information about creating an export, see [Create and manage exported data](../costs/tutorial-export-acm-data.md). ++## Next steps ++- If you have any unused reservations, read [Optimize reservation use](manage-reserved-vm-instance.md#optimize-reservation-use). +- Learn more about creating an export at [Create and manage exported data](../costs/tutorial-export-acm-data.md). +- Read about the RI savings report in the [Cost Management Power BI App for Enterprise Agreements](../costs/analyze-cost-data-azure-cost-management-power-bi-template-app.md). |
data-factory | Connector Azure Data Lake Storage | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-azure-data-lake-storage.md | The following properties are supported for Data Lake Storage Gen2 under `storeSe | | | -- | | type | The type property under `storeSettings` must be set to **AzureBlobFSWriteSettings**. | Yes | | copyBehavior | Defines the copy behavior when the source is files from a file-based data store.<br/><br/>Allowed values are:<br/><b>- PreserveHierarchy (default)</b>: Preserves the file hierarchy in the target folder. The relative path of the source file to the source folder is identical to the relative path of the target file to the target folder.<br/><b>- FlattenHierarchy</b>: All files from the source folder are in the first level of the target folder. The target files have autogenerated names. <br/><b>- MergeFiles</b>: Merges all files from the source folder to one file. If the file name is specified, the merged file name is the specified name. Otherwise, it's an autogenerated file name. | No |-| blockSizeInMB | Specify the block size in MB used to write data to ADLS Gen2. Learn more [about Block Blobs](/rest/api/storageservices/understanding-block-blobs--append-blobs--and-page-blobs#about-block-blobs). <br/>Allowed value is **between 4 MB and 100 MB**. <br/>By default, ADF automatically determines the block size based on your source store type and data. For non-binary copy into ADLS Gen2, the default block size is 100 MB so as to fit in at most 4.95-TB data. It may be not optimal when your data is not large, especially when you use Self-hosted Integration Runtime with poor network resulting in operation timeout or performance issue. You can explicitly specify a block size, while ensure blockSizeInMB*50000 is big enough to store the data, otherwise copy activity run will fail. | No | +| blockSizeInMB | Specify the block size in MB used to write data to ADLS Gen2. Learn more [about Block Blobs](/rest/api/storageservices/understanding-block-blobs--append-blobs--and-page-blobs#about-block-blobs). <br/>Allowed value is **between 4 MB and 100 MB**. <br/>By default, ADF automatically determines the block size based on your source store type and data. For non-binary copy into ADLS Gen2, the default block size is 100 MB, so it can fit at most approximately 4.75 TB of data. That default may not be optimal when your data isn't large, especially when you use a self-hosted integration runtime over a poor network, which can result in operation timeouts or performance issues. You can explicitly specify a block size, but make sure that blockSizeInMB*50000 is big enough to store the data; otherwise, the copy activity run will fail. | No | | maxConcurrentConnections | The upper limit of concurrent connections established to the data store during the activity run. Specify a value only when you want to limit concurrent connections.| No | | metadata |Set custom metadata when copy to sink. Each object under the `metadata` array represents an extra column. The `name` defines the metadata key name, and the `value` indicates the data value of that key. If [preserve attributes feature](./copy-activity-preserve-metadata.md#preserve-metadata) is used, the specified metadata will union/overwrite with the source file metadata.<br/><br/>Allowed data values are:<br/>- `$$LASTMODIFIED`: a reserved variable indicates to store the source files' last modified time. Apply to file-based source with binary format only.<br/><b>- Expression<b><br/>- <b>Static value<b>| No | |
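To illustrate the blockSizeInMB guidance in the table above, here's a small hedged PowerShell sketch: a block blob holds at most 50,000 blocks, so blockSizeInMB multiplied by 50,000 must cover the expected data size, within the allowed 4-100 MB range. The function name and example size are illustrative only.

```powershell
# Sketch: pick the smallest blockSizeInMB that satisfies blockSizeInMB * 50000 >= data size,
# staying within the allowed 4-100 MB range described above. Illustrative helper only.
function Get-MinimumBlockSizeInMB {
    param([double] $ExpectedDataSizeInGB)

    $dataSizeInMB = $ExpectedDataSizeInGB * 1024
    $minimumBlock = [math]::Ceiling($dataSizeInMB / 50000)   # 50,000 blocks per block blob

    if ($minimumBlock -gt 100) {
        throw "Data is too large for a single blob even with 100 MB blocks (~4.75 TB maximum)."
    }
    return [math]::Max(4, $minimumBlock)                     # never go below the 4 MB minimum
}

Get-MinimumBlockSizeInMB -ExpectedDataSizeInGB 2000          # ~2 TB of data -> 41 MB blocks
```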
data-factory | Connector Troubleshoot Azure Cosmos Db | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-troubleshoot-azure-cosmos-db.md | Azure Cosmos DB calculates RUs, see [Request units in Azure Cosmos DB](../cosmos - **Recommendation**: To check the error details, see [Azure Cosmos DB help document](../cosmos-db/troubleshoot-dot-net-sdk.md). For further help, contact the Azure Cosmos DB team. +## Error code: CosmosDbSqlApiPartitionKeyExceedStorage ++- **Message**: `The size of data each logical partition can store is limited, current partitioning design and workload failed to store more than the allowed amount of data for a given partition key value.` ++- **Cause**: The data size of each logical partition is limited, and the partition key reached the maximum size of your logical partition. ++- **Recommendation**: Check your Azure Cosmos DB partition design. For more information, see [Logical partitions](../cosmos-db/partitioning-overview.md#logical-partitions). + ## Next steps For more troubleshooting help, try these resources: |
data-factory | Continuous Integration Delivery Automate Azure Pipelines | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/continuous-integration-delivery-automate-azure-pipelines.md | Deployment can fail if you try to update active triggers. To update active trigg ```powershell $triggersADF = Get-AzDataFactoryV2Trigger -DataFactoryName $DataFactoryName -ResourceGroupName $ResourceGroupName-+ $triggersADF | ForEach-Object { Stop-AzDataFactoryV2Trigger -ResourceGroupName $ResourceGroupName -DataFactoryName $DataFactoryName -Name $_.name -Force } ``` You can complete similar steps (with the `Start-AzDataFactoryV2Trigger` function The data factory team has provided a [sample pre- and post-deployment script](continuous-integration-delivery-sample-script.md). +> [!NOTE] +> Use the [PrePostDeploymentScript.Ver2.ps1](https://github.com/Azure/Azure-DataFactory/blob/main/SamplesV2/ContinuousIntegrationAndDelivery/PrePostDeploymentScript.Ver2.ps1) if you would like to turn off/on only the triggers that have been modified instead of turning all triggers off/on during CI/CD. ++>[!WARNING] +>Make sure to use **PowerShell Core** in the ADO task to run the script. + ## Next steps - [Continuous integration and delivery overview](continuous-integration-delivery.md) |
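The snippet above stops every trigger before deployment; as a complement, here's a hedged sketch of the matching post-deployment step built on the `Start-AzDataFactoryV2Trigger` cmdlet the article mentions, assuming the same `$DataFactoryName` and `$ResourceGroupName` variables are in scope.

```powershell
# Sketch: restart the triggers after deployment, mirroring the stop step above.
# Assumes $DataFactoryName and $ResourceGroupName are set as in the article's snippet.
$triggersADF = Get-AzDataFactoryV2Trigger -DataFactoryName $DataFactoryName -ResourceGroupName $ResourceGroupName

$triggersADF | ForEach-Object {
    Start-AzDataFactoryV2Trigger -ResourceGroupName $ResourceGroupName -DataFactoryName $DataFactoryName -Name $_.Name -Force
}
```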
data-factory | Continuous Integration Delivery Improvements | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/continuous-integration-delivery-improvements.md | npm run build export C:\DataFactories\DevDataFactory /subscriptions/xxxxxxxx-xxx - `FactoryId` is a mandatory field that represents the Data Factory resource ID in the format `/subscriptions/<subId>/resourceGroups/<rgName>/providers/Microsoft.DataFactory/factories/<dfName>`. - `OutputFolder` is an optional parameter that specifies the relative path to save the generated ARM template. +If you would like to stop/ start only the updated triggers, instead use the below command (currently this capability is in preview and the functionality will be transparently merged into the above command during GA): +```dos +npm run build-preview export C:\DataFactories\DevDataFactory /subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/testResourceGroup/providers/Microsoft.DataFactory/factories/DevDataFactory ArmTemplateOutput +``` +- `RootFolder` is a mandatory field that represents where the Data Factory resources are located. +- `FactoryId` is a mandatory field that represents the Data Factory resource ID in the format `/subscriptions/<subId>/resourceGroups/<rgName>/providers/Microsoft.DataFactory/factories/<dfName>`. +- `OutputFolder` is an optional parameter that specifies the relative path to save the generated ARM template. > [!NOTE] > The ARM template generated isn't published to the live version of the factory. Deployment should be done by using a CI/CD pipeline. ++ ### Validate Run `npm run build validate <rootFolder> <factoryId>` to validate all the resources of a given folder. Here's an example: Follow these steps to get started: ```json { "scripts":{- "build":"node node_modules/@microsoft/azure-data-factory-utilities/lib/index" + "build":"node node_modules/@microsoft/azure-data-factory-utilities/lib/index", + "build-preview":"node node_modules/@microsoft/azure-data-factory-utilities/lib/index --preview" }, "dependencies":{- "@microsoft/azure-data-factory-utilities":"^0.1.5" + "@microsoft/azure-data-factory-utilities":"^1.0.0" } } ``` Follow these steps to get started: command: 'custom' workingDir: '$(Build.Repository.LocalPath)/<folder-of-the-package.json-file>' #replace with the package.json folder customCommand: 'run build export $(Build.Repository.LocalPath)/<Root-folder-from-Git-configuration-settings-in-ADF> /subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/<Your-ResourceGroup-Name>/providers/Microsoft.DataFactory/factories/<Your-Factory-Name> "ArmTemplate"'+ #For using preview that allows you to only stop/ start triggers that are modified, please comment out the above line and uncomment the below line. Make sure the package.json contains the build-preview command. + #customCommand: 'run build-preview export $(Build.Repository.LocalPath) /subscriptions/222f1459-6ebd-4896-82ab-652d5f6883cf/resourceGroups/GartnerMQ2021/providers/Microsoft.DataFactory/factories/Dev-GartnerMQ2021-DataFactory "ArmTemplate"' displayName: 'Validate and Generate ARM template' # Publish the artifact to be used as a source for a release pipeline. |
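For a quick local check before committing, the validate and export commands shown above can be chained from a short PowerShell session; this is only a sketch that reuses the article's own arguments, with the root folder and factory resource ID as placeholders.

```powershell
# Sketch: run the ADFUtilities validate and export steps locally before committing.
# The root folder and factory resource ID below are placeholders.
$rootFolder = "C:\DataFactories\DevDataFactory"
$factoryId  = "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/testResourceGroup/providers/Microsoft.DataFactory/factories/DevDataFactory"

npm install                                                        # restores @microsoft/azure-data-factory-utilities
npm run build validate $rootFolder $factoryId                      # validate all resources in the folder
npm run build export $rootFolder $factoryId "ArmTemplateOutput"    # generate the ARM template
```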
data-factory | Continuous Integration Delivery Sample Script | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/continuous-integration-delivery-sample-script.md | The following sample demonstrates how to use a pre- and post-deployment script w Install the latest Azure PowerShell modules by following instructions in [How to install and configure Azure PowerShell](/powershell/azure/install-Az-ps). >[!WARNING]->If you do not use latest versions of PowerShell and Data Factory module, you may run into deserialization errors while running the commands. -> +>Make sure to use **PowerShell Core** in the ADO task to run the script. ## Pre- and post-deployment script The sample scripts to stop/ start triggers and update global parameters during release process (CICD) are located in the [Azure Data Factory Official GitHub page](https://github.com/Azure/Azure-DataFactory/tree/main/SamplesV2/ContinuousIntegrationAndDelivery). +> [!NOTE] +> Use the [PrePostDeploymentScript.Ver2.ps1](https://github.com/Azure/Azure-DataFactory/blob/main/SamplesV2/ContinuousIntegrationAndDelivery/PrePostDeploymentScript.Ver2.ps1) if you would like to turn off/on only the triggers that have been modified instead of turning all triggers off/on during CI/CD. + ## Script execution and parameters When running a pre-deployment script, you will need to specify a variation of th When running a post-deployment script, you will need to specify a variation of the following parameters in the **Script Arguments** field. `-armTemplate "$(System.DefaultWorkingDirectory)/<your-arm-template-location>" -ResourceGroupName <your-resource-group-name> -DataFactoryName <your-data-factory-name> -predeployment $false -deleteDeployment $true`- + > [!NOTE] > The `-deleteDeployment` flag is used to specify the deletion of the ADF deployment entry from the deployment history in ARM. |
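For comparison with the post-deployment arguments above, here's a hedged sketch of a pre-deployment invocation; the script name follows the note above, and the template location, resource group, and factory name placeholders mirror the article's own argument string.

```powershell
# Sketch: pre-deployment call of the sample script, stopping triggers before the ARM deployment.
# All values below are placeholders; in Azure DevOps they usually come from pipeline variables.
.\PrePostDeploymentScript.Ver2.ps1 `
    -armTemplate "<your-arm-template-location>" `
    -ResourceGroupName "<your-resource-group-name>" `
    -DataFactoryName "<your-data-factory-name>" `
    -predeployment $true `
    -deleteDeployment $false
```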
data-factory | Continuous Integration Delivery | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/continuous-integration-delivery.md | If you're using Git integration with your data factory and have a CI/CD pipeline - **Git integration**. Configure only your development data factory with Git integration. Changes to test and production are deployed via CI/CD and don't need Git integration. -- **Pre- and post-deployment script**. Before the Resource Manager deployment step in CI/CD, you need to complete certain tasks, like stopping and restarting triggers and performing cleanup. We recommend that you use PowerShell scripts before and after the deployment task. For more information, see [Update active triggers](continuous-integration-delivery-automate-azure-pipelines.md#updating-active-triggers). The data factory team has [provided a script](continuous-integration-delivery-sample-script.md) to use located at the bottom of this page.+- **Pre- and post-deployment script**. Before the Resource Manager deployment step in CI/CD, you need to complete certain tasks, like stopping and restarting triggers and performing cleanup. We recommend that you use PowerShell scripts before and after the deployment task. For more information, see [Update active triggers](continuous-integration-delivery-automate-azure-pipelines.md#updating-active-triggers). The data factory team has [provided a script](continuous-integration-delivery-sample-script.md) to use, located at the bottom of this page. ++ > [!NOTE] + > Use the [PrePostDeploymentScript.Ver2.ps1](https://github.com/Azure/Azure-DataFactory/blob/main/SamplesV2/ContinuousIntegrationAndDelivery/PrePostDeploymentScript.Ver2.ps1) if you would like to turn off/on only the triggers that have been modified instead of turning all triggers off/on during CI/CD. ++ >[!WARNING] + >Make sure to use **PowerShell Core** in the ADO task to run the script. ++ >[!WARNING] + >If you do not use the latest versions of PowerShell and the Data Factory module, you may run into deserialization errors while running the commands. - **Integration runtimes and sharing**. Integration runtimes don't change often and are similar across all stages in your CI/CD. So Data Factory expects you to have the same name and type of integration runtime across all stages of CI/CD. If you want to share integration runtimes across all stages, consider using a ternary factory just to contain the shared integration runtimes. You can use this shared factory in all of your environments as a linked integration runtime type. |
data-factory | Copy Data Tool Metadata Driven | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/copy-data-tool-metadata-driven.md | This pipeline will copy objects from one group. The objects belonging to this gr | UpdateWatermarkColumnValue | StoreProcedure | Write back the new watermark value to control table to be used next time. | ### Known limitations-- Copy data tool does not support metadata driven ingestion for incrementally copying new files only currently. But you can bring your own parameterized pipelines to achieve that. - IR name, database type, file format type cannot be parameterized in ADF. For example, if you want to ingest data from both Oracle Server and SQL Server, you will need two different parameterized pipelines. But the single control table can be shared by two sets of pipelines. - OPENJSON is used in generated SQL scripts by copy data tool. If you are using SQL Server to host control table, it must be SQL Server 2016 (13.x) and later in order to support OPENJSON function. |
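Because the OPENJSON requirement above applies when SQL Server hosts the control table, a quick version check can prevent a failed run. The following is a hedged sketch assuming the SqlServer PowerShell module; the server, database, and credentials are placeholders.

```powershell
# Sketch: confirm the SQL Server hosting the control table is 2016 (13.x) or later,
# so the OPENJSON-based scripts generated by the copy data tool will work.
# Assumes the SqlServer module; connection details are placeholders.
$connection = @{
    ServerInstance = "<control-table-sql-server>"
    Database       = "<control-table-database>"
    Username       = "<sql-login>"
    Password       = "<sql-password>"
}
$result = Invoke-Sqlcmd @connection -Query "SELECT CAST(SERVERPROPERTY('ProductMajorVersion') AS int) AS MajorVersion"

if ($result.MajorVersion -ge 13) { "OPENJSON is available on this server." }
else { "SQL Server 2016 (13.x) or later is required for OPENJSON." }
```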
data-factory | Industry Sap Connectors | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/industry-sap-connectors.md | |
data-factory | Industry Sap Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/industry-sap-overview.md | |
data-factory | Industry Sap Templates | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/industry-sap-templates.md | |
data-factory | Introduction | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/introduction.md | |
data-factory | Iterative Development Debugging | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/iterative-development-debugging.md | Title: Iterative development and debugging description: Learn how to develop and debug Data Factory and Synapse Analytics pipelines iteratively with the service UI. Previously updated : 09/09/2021 Last updated : 08/12/2022 |
data-factory | Join Azure Ssis Integration Runtime Virtual Network Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/join-azure-ssis-integration-runtime-virtual-network-powershell.md | description: Learn how to join Azure-SSIS integration runtime to a virtual netwo Previously updated : 02/15/2022 Last updated : 08/11/2022 |
data-factory | Join Azure Ssis Integration Runtime Virtual Network Ui | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/join-azure-ssis-integration-runtime-virtual-network-ui.md | description: Learn how to join Azure-SSIS integration runtime to a virtual netwo Previously updated : 02/15/2022 Last updated : 08/12/2022 Use Azure portal to configure a classic virtual network before you try to join y :::image type="content" source="media/join-azure-ssis-integration-runtime-virtual-network/access-control-add.png" alt-text=""Access control" and "Add" buttons"::: - 1. Select **Add role assignment**. + 1. Select **Add**, and then **Add role assignment** from the dropdown that appears. - 1. On the **Add role assignment** page, for **Role**, select **Classic Virtual Machine Contributor**. In the **Select** box, paste **ddbf3205-c6bd-46ae-8127-60eb93363864**, and then select **MicrosoftAzureBatch** from the list of search results. + 1. On the **Add role assignment** page, enter **Virtual Machine Contributor** in the search box, select the role, and then select **Next**. - :::image type="content" source="media/join-azure-ssis-integration-runtime-virtual-network/azure-batch-to-vm-contributor.png" alt-text="Search results on the "Add role assignment" page"::: + :::image type="content" source="media/join-azure-ssis-integration-runtime-virtual-network/add-virtual-machine-contributor-role.png" alt-text="Screenshot showing search results for the "Virtual Machine Contributor" role."::: - 1. Select **Save** to save the settings and close the page. + 1. On the **Members** page, under **Members**, select **+ Select members**. Then on the **Select Members** pane, search for **Microsoft Azure Batch**, select it from the list to add it, and then choose **Select**. - :::image type="content" source="media/join-azure-ssis-integration-runtime-virtual-network/save-access-settings.png" alt-text="Save access settings"::: + :::image type="content" source="media/join-azure-ssis-integration-runtime-virtual-network/add-microsoft-azure-batch-user-to-role-assignment.png" alt-text="Screenshot showing the Microsoft Azure Batch service principal."::: - 1. Confirm that you see **MicrosoftAzureBatch** in the list of contributors. + 1. On the **Role Assignments** page, search for **Microsoft Azure Batch** if necessary, and confirm that it appears in the list with the **Virtual Machine Contributor** role. :::image type="content" source="media/join-azure-ssis-integration-runtime-virtual-network/azure-batch-in-list.png" alt-text="Confirm Azure Batch access"::: -1. Make sure that *Microsoft.Batch* is a registered resource provider in Azure subscription that has the virtual network for your Azure-SSIS IR to join. For detailed instructions, see the [Register Azure Batch as a resource provider](azure-ssis-integration-runtime-virtual-network-configuration.md#registerbatch) section. +1. Make sure that *Microsoft.Batch* is a registered resource provider in the Azure subscription that has the virtual network for your Azure-SSIS IR to join. For detailed instructions, see the [Register Azure Batch as a resource provider](azure-ssis-integration-runtime-virtual-network-configuration.md#registerbatch) section. ## Join Azure-SSIS IR to the virtual network After you've configured an Azure Resource Manager/classic virtual network, you c 1. Start Microsoft Edge or Google Chrome. Currently, only these web browsers support ADF UI. -1. In [Azure portal](https://portal.azure.com), on the left-hand-side menu, select **Data factories**. 
If you don't see **Data factories** on the menu, select **More services**, and then in the **INTELLIGENCE + ANALYTICS** section, select **Data factories**. +1. In the [Azure portal](https://portal.azure.com), under the **Azure Services** section, select **More Services** to see a list of all Azure services. In the **Filter services** search box, type **Data Factories**, and then choose **Data Factories** in the list of services that appear. ++ :::image type="content" source="media/join-azure-ssis-integration-runtime-virtual-network/portal-find-data-factories.png" alt-text="Screenshot of the All Services page on the Azure portal filtered for Data Factories."::: :::image type="content" source="media/join-azure-ssis-integration-runtime-virtual-network/data-factories-list.png" alt-text="List of data factories"::: -1. Select your ADF with Azure-SSIS IR in the list. You see the home page for your ADF. Select the **Author & Monitor** tile. You see ADF UI on a separate tab. +1. Select your data factory with the Azure-SSIS IR in the list. You see the home page for your data factory. Select the **Open Azure Data Factory Studio** tile. Azure Data Factory Studio will open on a separate tab. :::image type="content" source="media/join-azure-ssis-integration-runtime-virtual-network/data-factory-home-page.png" alt-text="Data factory home page"::: -1. In ADF UI, switch to the **Edit** tab, select **Connections**, and switch to the **Integration Runtimes** tab. -- :::image type="content" source="media/join-azure-ssis-integration-runtime-virtual-network/integration-runtimes-tab.png" alt-text=""Integration runtimes" tab"::: --1. If your Azure-SSIS IR is running, in the **Integration Runtimes** list, in the **Actions** column, select the **Stop** button for your Azure-SSIS IR. You can't edit your Azure-SSIS IR until you stop it. +1. In Azure Data Factory Studio, select the **Manage** tab on the far left, and then switch to the **Integration Runtimes** tab. If your Azure-SSIS IR is running, hover over it in the list to find and select the **Stop** button, as shown below. You can't edit your Azure-SSIS IR until you stop it. :::image type="content" source="media/join-azure-ssis-integration-runtime-virtual-network/stop-ir-button.png" alt-text="Stop the IR"::: -1. In the **Integration Runtimes** list, in the **Actions** column, select the **Edit** button for your Azure-SSIS IR. -- :::image type="content" source="media/join-azure-ssis-integration-runtime-virtual-network/integration-runtime-edit.png" alt-text="Edit the integration runtime"::: +1. After your Azure-SSIS IR is stopped, select it in the **Integration Runtimes** list to edit it. -1. On the **Integration runtime setup** pane, advance through the **General settings** and **Deployment settings** pages by selecting the **Next** button. +1. On the **Edit integration runtime** pane, advance through the **General settings** and **Deployment settings** pages by selecting the **Continue** button. 1. On the **Advanced settings** page, complete the following steps. |
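The portal steps above can also be scripted. The following is a hedged PowerShell sketch, assuming the Az.Resources module: it registers the Microsoft.Batch provider and grants the Microsoft Azure Batch service principal (application ID taken from the original instructions) a contributor role on the virtual network's resource group; the scope and role name are placeholders you should match to the portal steps.

```powershell
# Sketch: the same prerequisites via PowerShell instead of the portal.
# Assumes the Az.Resources module; scope and role name are placeholders.

# 1. Register Microsoft.Batch in the subscription that hosts the virtual network.
Register-AzResourceProvider -ProviderNamespace Microsoft.Batch
Get-AzResourceProvider -ProviderNamespace Microsoft.Batch | Select-Object ProviderNamespace, RegistrationState

# 2. Grant the Microsoft Azure Batch service principal access to the virtual network's resource group.
#    The application ID below comes from the original portal instructions.
$batchServicePrincipal = Get-AzADServicePrincipal -ApplicationId "ddbf3205-c6bd-46ae-8127-60eb93363864"
$roleParams = @{
    ObjectId           = $batchServicePrincipal.Id
    RoleDefinitionName = "Virtual Machine Contributor"   # use "Classic Virtual Machine Contributor" for a classic virtual network
    Scope              = "/subscriptions/<subscription-id>/resourceGroups/<vnet-resource-group>"
}
New-AzRoleAssignment @roleParams
```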
data-factory | Join Azure Ssis Integration Runtime Virtual Network | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/join-azure-ssis-integration-runtime-virtual-network.md | description: Learn how to join Azure-SSIS integration runtime to a virtual netwo Previously updated : 02/15/2022 Last updated : 08/12/2022 When joining your Azure-SSIS IR to a virtual network, remember these important p - If a classic virtual network is already connected to your on-premises network in a different location from your Azure-SSIS IR, you can create an [Azure Resource Manager virtual network](../virtual-network/quick-create-portal.md#create-a-virtual-network) for your Azure-SSIS IR to join. Then configure a [classic-to-Azure Resource Manager virtual network](../vpn-gateway/vpn-gateway-connect-different-deployment-models-portal.md) connection. -- If an Azure Resource Manager virtual network is already connected to your on-premises network in a different location from your Azure-SSIS IR, you can first create an [Azure Resource Manager virtual network](../virtual-network/quick-create-portal.md#create-a-virtual-network) for your Azure-SSIS IR to join. Then configure an [Azure Resource Manager-to-Azure Resource Manager virtual network](../vpn-gateway/vpn-gateway-howto-vnet-vnet-resource-manager-portal.md) connection. +- If an Azure Resource Manager network is already connected to your on-premises network in a different location from your Azure-SSIS IR, you can first create an [Azure Resource Manager virtual network](../virtual-network/quick-create-portal.md#create-a-virtual-network) for your Azure-SSIS IR to join. Then configure an [Azure Resource Manager-to-Azure Resource Manager virtual network](../vpn-gateway/vpn-gateway-howto-vnet-vnet-resource-manager-portal.md) connection. -## Hosting SSISDB in Azure SQL Database server or Managed Instance +## Hosting SSISDB in Azure SQL Database server or Managed instance If you host SSISDB in Azure SQL Database server configured with a virtual network service endpoint, make sure that you join your Azure-SSIS IR to the same virtual network and subnet. |
data-factory | Lab Data Flow Data Share | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/lab-data-flow-data-share.md | In Azure Data Factory linked services define the connection information to exter 1. Using the search bar at the top of the page, search for 'Data Factories' :::image type="content" source="media/lab-data-flow-data-share/portal1.png" alt-text="Portal 1":::-1. Click on your data factory resource to open up its resource blade. +1. Select your data factory resource to open up its resources on the left hand pane. :::image type="content" source="media/lab-data-flow-data-share/portal2.png" alt-text="Portal 2":::-1. Click on **Author and Monitor** to open up the ADF UX. The ADF UX can also be accessed at adf.azure.com. +1. Select **Open Azure Data Factory Studio**. The Data Factory Studio can also be accessed directly at adf.azure.com. - :::image type="content" source="media/lab-data-flow-data-share/portal3.png" alt-text="Portal 3"::: -1. You'll be redirected to the homepage of the ADF UX. This page contains quick-starts, instructional videos, and links to tutorials to learn data factory concepts. To start authoring, click on the pencil icon in left side-bar. + :::image type="content" source="media/doc-common-process/data-factory-home-page.png" alt-text="Screenshot of the Azure Data Factory home page in the Azure portal."::: ++1. You'll be redirected to the homepage of the ADF UX. This page contains quick-starts, instructional videos, and links to tutorials to learn data factory concepts. To start authoring, select the pencil icon in left side-bar. :::image type="content" source="./media/doc-common-process/get-started-page-author-button.png" alt-text="Portal configure"::: In Azure Data Factory linked services define the connection information to exter 1. To create a linked service, select **Manage** hub in the left side-bar, on the **Connections** pane, select **Linked services** and then select **New** to add a new linked service. :::image type="content" source="media/lab-data-flow-data-share/configure2.png" alt-text="Portal configure 2":::-1. The first linked service you'll configure is an Azure SQL DB. You can use the search bar to filter the data store list. Click on the **Azure SQL Database** tile and click continue. +1. The first linked service you'll configure is an Azure SQL DB. You can use the search bar to filter the data store list. Select on the **Azure SQL Database** tile and select continue. :::image type="content" source="media/lab-data-flow-data-share/configure-4.png" alt-text="Portal configure 4":::-1. In the SQL DB configuration pane, enter 'SQLDB' as your linked service name. Enter in your credentials to allow data factory to connect to your database. If you're using SQL authentication, enter in the server name, the database, your user name and password. You can verify your connection information is correct by clicking **Test connection**. Click **Create** when finished. +1. In the SQL DB configuration pane, enter 'SQLDB' as your linked service name. Enter in your credentials to allow data factory to connect to your database. If you're using SQL authentication, enter in the server name, the database, your user name and password. You can verify your connection information is correct by selecting **Test connection**. Select **Create** when finished. :::image type="content" source="media/lab-data-flow-data-share/configure5.png" alt-text="Portal configure 5"::: ### Create an Azure Synapse Analytics linked service -1. 
Repeat the same process to add an Azure Synapse Analytics linked service. In the connections tab, click **New**. Select the **Azure Synapse Analytics** tile and click continue. +1. Repeat the same process to add an Azure Synapse Analytics linked service. In the connections tab, select **New**. Select the **Azure Synapse Analytics** tile and select continue. :::image type="content" source="media/lab-data-flow-data-share/configure-6.png" alt-text="Portal configure 6":::-1. In the linked service configuration pane, enter 'SQLDW' as your linked service name. Enter in your credentials to allow data factory to connect to your database. If you're using SQL authentication, enter in the server name, the database, your user name and password. You can verify your connection information is correct by clicking **Test connection**. Click **Create** when finished. +1. In the linked service configuration pane, enter 'SQLDW' as your linked service name. Enter in your credentials to allow data factory to connect to your database. If you're using SQL authentication, enter in the server name, the database, your user name and password. You can verify your connection information is correct by clicking **Test connection**. Select **Create** when finished. :::image type="content" source="media/lab-data-flow-data-share/configure-7.png" alt-text="Portal configure 7"::: ### Create an Azure Data Lake Storage Gen2 linked service -1. The last linked service needed for this lab is an Azure Data Lake Storage gen2. In the connections tab, click **New**. Select the **Azure Data Lake Storage Gen2** tile and click continue. +1. The last linked service needed for this lab is an Azure Data Lake Storage gen2. In the connections tab, select **New**. Select the **Azure Data Lake Storage Gen2** tile and select continue. :::image type="content" source="media/lab-data-flow-data-share/configure8.png" alt-text="Portal configure 8":::-1. In the linked service configuration pane, enter 'ADLSGen2' as your linked service name. If you're using Account key authentication, select your ADLS Gen2 storage account from the **Storage account name** dropdown. You can verify your connection information is correct by clicking **Test connection**. Click **Create** when finished. +1. In the linked service configuration pane, enter 'ADLSGen2' as your linked service name. If you're using Account key authentication, select your ADLS Gen2 storage account from the **Storage account name** dropdown. You can verify your connection information is correct by clicking **Test connection**. Select **Create** when finished. :::image type="content" source="media/lab-data-flow-data-share/configure9.png" alt-text="Portal configure 9"::: In Azure Data Factory linked services define the connection information to exter In section *Transform data using mapping data flow*, you'll be building mapping data flows. A best practice before building mapping data flows is to turn on debug mode, which allows you to test transformation logic in seconds on an active spark cluster. -To turn on debug, click the **Data flow debug** slider in the top bar of data flow canvas or pipeline canvas when you have **Data flow** activities. Click **OK** when the confirmation dialog is shown. The cluster will start up in about 5 to 7 minutes. Continue on to *Ingest data from Azure SQL DB into ADLS Gen2 using the copy activity* while it is initializing. 
+To turn on debug, select the **Data flow debug** slider in the top bar of data flow canvas or pipeline canvas when you have **Data flow** activities. Select **OK** when the confirmation dialog is shown. The cluster will start up in about 5 to 7 minutes. Continue on to *Ingest data from Azure SQL DB into ADLS Gen2 using the copy activity* while it is initializing. :::image type="content" source="media/lab-data-flow-data-share/configure10.png" alt-text="Portal configure 10"::: In Azure Data Factory, a pipeline is a logical grouping of activities that toget ### Create a pipeline with a copy activity -1. In the factory resources pane, click on the plus icon to open the new resource menu. Select **Pipeline**. +1. In the factory resources pane, select on the plus icon to open the new resource menu. Select **Pipeline**. :::image type="content" source="media/lab-data-flow-data-share/copy1.png" alt-text="Portal copy 1"::: 1. In the **General** tab of the pipeline canvas, name your pipeline something descriptive such as 'IngestAndTransformTaxiData'. In Azure Data Factory, a pipeline is a logical grouping of activities that toget ### Configure Azure SQL DB source dataset -1. Click on the **Source** tab of the copy activity. To create a new dataset, click **New**. Your source will be the table 'dbo.TripData' located in the linked service 'SQLDB' configured earlier. +1. Select on the **Source** tab of the copy activity. To create a new dataset, select **New**. Your source will be the table 'dbo.TripData' located in the linked service 'SQLDB' configured earlier. :::image type="content" source="media/lab-data-flow-data-share/copy4.png" alt-text="Portal copy 4":::-1. Search for **Azure SQL Database** and click continue. +1. Search for **Azure SQL Database** and select continue. :::image type="content" source="media/lab-data-flow-data-share/copy-5.png" alt-text="Portal copy 5":::-1. Call your dataset 'TripData'. Select 'SQLDB' as your linked service. Select table name 'dbo.TripData' from the table name dropdown. Import the schema **From connection/store**. Click OK when finished. +1. Call your dataset 'TripData'. Select 'SQLDB' as your linked service. Select table name 'dbo.TripData' from the table name dropdown. Import the schema **From connection/store**. Select OK when finished. :::image type="content" source="media/lab-data-flow-data-share/copy6.png" alt-text="Portal copy 6"::: You have successfully created your source dataset. Make sure in the source setti ### Configure ADLS Gen2 sink dataset -1. Click on the **Sink** tab of the copy activity. To create a new dataset, click **New**. +1. Select on the **Sink** tab of the copy activity. To create a new dataset, select **New**. :::image type="content" source="media/lab-data-flow-data-share/copy7.png" alt-text="Portal copy 7":::-1. Search for **Azure Data Lake Storage Gen2** and click continue. +1. Search for **Azure Data Lake Storage Gen2** and select continue. :::image type="content" source="media/lab-data-flow-data-share/copy8.png" alt-text="Portal copy 8":::-1. In the select format pane, select **DelimitedText** as you're writing to a csv file. Click continue. +1. In the select format pane, select **DelimitedText** as you're writing to a csv file. Select continue. :::image type="content" source="media/lab-data-flow-data-share/copy9.png" alt-text="Portal copy 9":::-1. Name your sink dataset 'TripDataCSV'. Select 'ADLSGen2' as your linked service. Enter where you want to write your csv file. 
For example, you can write your data to file `trip-data.csv` in container `staging-container`. Set **First row as header** to true as you want your output data to have headers. Since no file exists in the destination yet, set **Import schema** to **None**. Click OK when finished. +1. Name your sink dataset 'TripDataCSV'. Select 'ADLSGen2' as your linked service. Enter where you want to write your csv file. For example, you can write your data to file `trip-data.csv` in container `staging-container`. Set **First row as header** to true as you want your output data to have headers. Since no file exists in the destination yet, set **Import schema** to **None**. Select OK when finished. :::image type="content" source="media/lab-data-flow-data-share/copy10.png" alt-text="Portal copy 10"::: ### Test the copy activity with a pipeline debug run -1. To verify your copy activity is working correctly, click **Debug** at the top of the pipeline canvas to execute a debug run. A debug run allows you to test your pipeline either end-to-end or until a breakpoint before publishing it to the data factory service. +1. To verify your copy activity is working correctly, select **Debug** at the top of the pipeline canvas to execute a debug run. A debug run allows you to test your pipeline either end-to-end or until a breakpoint before publishing it to the data factory service. :::image type="content" source="media/lab-data-flow-data-share/copy11.png" alt-text="Portal copy 11":::-1. To monitor your debug run, go to the **Output** tab of the pipeline canvas. The monitoring screen will autorefresh every 20 seconds or when you manually click the refresh button. The copy activity has a special monitoring view, which can be access by clicking the eye-glasses icon in the **Actions** column. +1. To monitor your debug run, go to the **Output** tab of the pipeline canvas. The monitoring screen will autorefresh every 20 seconds or when you manually select the refresh button. The copy activity has a special monitoring view, which can be access by clicking the eye-glasses icon in the **Actions** column. :::image type="content" source="media/lab-data-flow-data-share/copy12.png" alt-text="Portal copy 12"::: 1. The copy monitoring view gives the activity's execution details and performance characteristics. You can see information such as data read/written, rows read/written, files read/written, and throughput. If you have configured everything correctly, you should see 49,999 rows written into one file in your ADLS sink. The data flow created in this step inner joins the 'TripDataCSV' dataset created 1. In the activities pane of the pipeline canvas, open the **Move and Transform** accordion and drag the **Data flow** activity onto the canvas. :::image type="content" source="media/lab-data-flow-data-share/dataflow1.png" alt-text="Portal data flow 1":::-1. In the side pane that opens, select **Create new data flow** and choose **Mapping data flow**. Click **OK**. +1. In the side pane that opens, select **Create new data flow** and choose **Mapping data flow**. Select **OK**. :::image type="content" source="media/lab-data-flow-data-share/dataflow2.png" alt-text="Portal data flow 2"::: 1. You'll be directed to the data flow canvas where you'll be building your transformation logic. In the general tab, name your data flow 'JoinAndAggregateData'. The data flow created in this step inner joins the 'TripDataCSV' dataset created ### Configure your trip data csv source -1. 
The first thing you want to do is configure your two source transformations. The first source will point to the 'TripDataCSV' DelimitedText dataset. To add a source transformation, click on the **Add Source** box in the canvas. +1. The first thing you want to do is configure your two source transformations. The first source will point to the 'TripDataCSV' DelimitedText dataset. To add a source transformation, select on the **Add Source** box in the canvas. :::image type="content" source="media/lab-data-flow-data-share/dataflow4.png" alt-text="Portal data flow 4":::-1. Name your source 'TripDataCSV' and select the 'TripDataCSV' dataset from the source drop-down. If you remember, you didn't import a schema initially when creating this dataset as there was no data there. Since `trip-data.csv` exists now, click **Edit** to go to the dataset settings tab. +1. Name your source 'TripDataCSV' and select the 'TripDataCSV' dataset from the source drop-down. If you remember, you didn't import a schema initially when creating this dataset as there was no data there. Since `trip-data.csv` exists now, select **Edit** to go to the dataset settings tab. :::image type="content" source="media/lab-data-flow-data-share/dataflow5.png" alt-text="Portal data flow 5":::-1. Go to tab **Schema** and click **Import schema**. Select **From connection/store** to import directly from the file store. 14 columns of type string should appear. +1. Go to tab **Schema** and select **Import schema**. Select **From connection/store** to import directly from the file store. 14 columns of type string should appear. :::image type="content" source="media/lab-data-flow-data-share/dataflow6.png" alt-text="Portal data flow 6":::-1. Go back to data flow 'JoinAndAggregateData'. If your debug cluster has started (indicated by a green circle next to the debug slider), you can get a snapshot of the data in the **Data Preview** tab. Click **Refresh** to fetch a data preview. +1. Go back to data flow 'JoinAndAggregateData'. If your debug cluster has started (indicated by a green circle next to the debug slider), you can get a snapshot of the data in the **Data Preview** tab. Select **Refresh** to fetch a data preview. :::image type="content" source="media/lab-data-flow-data-share/dataflow7.png" alt-text="Portal data flow 7"::: The data flow created in this step inner joins the 'TripDataCSV' dataset created ### Configure your trip fares SQL DB source -1. The second source you're adding will point at the SQL DB table 'dbo.TripFares'. Under your 'TripDataCSV' source, there will be another **Add Source** box. Click it to add a new source transformation. +1. The second source you're adding will point at the SQL DB table 'dbo.TripFares'. Under your 'TripDataCSV' source, there will be another **Add Source** box. Select it to add a new source transformation. :::image type="content" source="media/lab-data-flow-data-share/dataflow8.png" alt-text="Portal data flow 8":::-1. Name this source 'TripFaresSQL'. Click **New** next to the source dataset field to create a new SQL DB dataset. +1. Name this source 'TripFaresSQL'. Select **New** next to the source dataset field to create a new SQL DB dataset. :::image type="content" source="media/lab-data-flow-data-share/dataflow9.png" alt-text="Portal data flow 9":::-1. Select the **Azure SQL Database** tile and click continue. *Note: You may notice many of the connectors in data factory are not supported in mapping data flow. 
To transform data from one of these sources, ingest it into a supported source using the copy activity*. +1. Select the **Azure SQL Database** tile and select continue. *Note: You may notice many of the connectors in data factory are not supported in mapping data flow. To transform data from one of these sources, ingest it into a supported source using the copy activity*. :::image type="content" source="media/lab-data-flow-data-share/dataflow-10.png" alt-text="Portal data flow 10":::-1. Call your dataset 'TripFares'. Select 'SQLDB' as your linked service. Select table name 'dbo.TripFares' from the table name dropdown. Import the schema **From connection/store**. Click OK when finished. +1. Call your dataset 'TripFares'. Select 'SQLDB' as your linked service. Select table name 'dbo.TripFares' from the table name dropdown. Import the schema **From connection/store**. Select OK when finished. :::image type="content" source="media/lab-data-flow-data-share/dataflow11.png" alt-text="Portal data flow 11"::: 1. To verify your data, fetch a data preview in the **Data Preview** tab. The data flow created in this step inner joins the 'TripDataCSV' dataset created ### Inner join TripDataCSV and TripFaresSQL -1. To add a new transformation, click the plus icon in the bottom-right corner of 'TripDataCSV'. Under **Multiple inputs/outputs**, select **Join**. +1. To add a new transformation, select the plus icon in the bottom-right corner of 'TripDataCSV'. Under **Multiple inputs/outputs**, select **Join**. :::image type="content" source="media/lab-data-flow-data-share/join1.png" alt-text="Portal join 1"::: 1. Name your join transformation 'InnerJoinWithTripFares'. Select 'TripFaresSQL' from the right stream dropdown. Select **Inner** as the join type. To learn more about the different join types in mapping data flow, see [join types](./data-flow-join.md#join-types). - Select which columns you wish to match on from each stream via the **Join conditions** dropdown. To add an additional join condition, click on the plus icon next to an existing condition. By default, all join conditions are combined with an AND operator, which means all conditions must be met for a match. In this lab, we want to match on columns `medallion`, `hack_license`, `vendor_id`, and `pickup_datetime` + Select which columns you wish to match on from each stream via the **Join conditions** dropdown. To add an additional join condition, select on the plus icon next to an existing condition. By default, all join conditions are combined with an AND operator, which means all conditions must be met for a match. In this lab, we want to match on columns `medallion`, `hack_license`, `vendor_id`, and `pickup_datetime` :::image type="content" source="media/lab-data-flow-data-share/join2.png" alt-text="Portal join 2"::: 1. Verify you successfully joined 25 columns together with a data preview. The data flow created in this step inner joins the 'TripDataCSV' dataset created First, you'll create the average fare expression. In the text box labeled **Add or select a column**, enter 'average_fare'. :::image type="content" source="media/lab-data-flow-data-share/agg3.png" alt-text="Portal agg 3":::-1. To enter an aggregation expression, click the blue box labeled **Enter expression**. This will open up the data flow expression builder, a tool used to visually create data flow expressions using input schema, built-in functions and operations, and user-defined parameters. 
For more information on the capabilities of the expression builder, see the [expression builder documentation](./concepts-data-flow-expression-builder.md). +1. To enter an aggregation expression, select the blue box labeled **Enter expression**. This will open up the data flow expression builder, a tool used to visually create data flow expressions using input schema, built-in functions and operations, and user-defined parameters. For more information on the capabilities of the expression builder, see the [expression builder documentation](./concepts-data-flow-expression-builder.md). - To get the average fare, use the `avg()` aggregation function to aggregate the `total_amount` column cast to an integer with `toInteger()`. In the data flow expression language, this is defined as `avg(toInteger(total_amount))`. Click **Save and finish** when you're done. + To get the average fare, use the `avg()` aggregation function to aggregate the `total_amount` column cast to an integer with `toInteger()`. In the data flow expression language, this is defined as `avg(toInteger(total_amount))`. Select **Save and finish** when you're done. :::image type="content" source="media/lab-data-flow-data-share/agg4.png" alt-text="Portal agg 4":::-1. To add an additional aggregation expression, click on the plus icon next to `average_fare`. Select **Add column**. +1. To add an additional aggregation expression, select on the plus icon next to `average_fare`. Select **Add column**. :::image type="content" source="media/lab-data-flow-data-share/agg5.png" alt-text="Portal agg 5"::: 1. In the text box labeled **Add or select a column**, enter 'total_trip_distance'. As in the last step, open the expression builder to enter in the expression. - To get the total trip distance, use the `sum()` aggregation function to aggregate the `trip_distance` column cast to an integer with `toInteger()`. In the data flow expression language, this is defined as `sum(toInteger(trip_distance))`. Click **Save and finish** when you're done. + To get the total trip distance, use the `sum()` aggregation function to aggregate the `trip_distance` column cast to an integer with `toInteger()`. In the data flow expression language, this is defined as `sum(toInteger(trip_distance))`. Select **Save and finish** when you're done. :::image type="content" source="media/lab-data-flow-data-share/agg6.png" alt-text="Portal agg 6"::: 1. Test your transformation logic in the **Data Preview** tab. As you can see, there are significantly fewer rows and columns than previously. Only the three groups by and aggregation columns defined in this transformation continue downstream. As there are only five payment type groups in the sample, only five rows are outputted. The data flow created in this step inner joins the 'TripDataCSV' dataset created 1. Now that we have finished our transformation logic, we are ready to sink our data in an Azure Synapse Analytics table. Add a sink transformation under the **Destination** section. :::image type="content" source="media/lab-data-flow-data-share/sink1.png" alt-text="Portal sink 1":::-1. Name your sink 'SQLDWSink'. Click **New** next to the sink dataset field to create a new Azure Synapse Analytics dataset. +1. Name your sink 'SQLDWSink'. Select **New** next to the sink dataset field to create a new Azure Synapse Analytics dataset. :::image type="content" source="media/lab-data-flow-data-share/sink2.png" alt-text="Portal sink 2"::: -1. Select the **Azure Synapse Analytics** tile and click continue. +1. 
Select the **Azure Synapse Analytics** tile and select continue. :::image type="content" source="media/lab-data-flow-data-share/sink-3.png" alt-text="Portal sink 3":::-1. Call your dataset 'AggregatedTaxiData'. Select 'SQLDW' as your linked service. Select **Create new table** and name the new table dbo.AggregateTaxiData. Click OK when finished +1. Call your dataset 'AggregatedTaxiData'. Select 'SQLDW' as your linked service. Select **Create new table** and name the new table dbo.AggregateTaxiData. Select OK when finished :::image type="content" source="media/lab-data-flow-data-share/sink4.png" alt-text="Portal sink 4"::: 1. Go to the **Settings** tab of the sink. Since we are creating a new table, we need to select **Recreate table** under table action. Unselect **Enable staging**, which toggles whether we are inserting row-by-row or in batch. You have successfully created your data flow. Now it's time to run it in a pipel 1. Go back to the tab for the **IngestAndTransformData** pipeline. Notice the green box on the 'IngestIntoADLS' copy activity. Drag it over to the 'JoinAndAggregateData' data flow activity. This creates an 'on success', which causes the data flow activity to only run if the copy is successful. :::image type="content" source="media/lab-data-flow-data-share/pipeline1.png" alt-text="Portal pipeline 1":::-1. As we did for the copy activity, click **Debug** to execute a debug run. For debug runs, the data flow activity will use the active debug cluster instead of spinning up a new cluster. This pipeline will take a little over a minute to execute. +1. As we did for the copy activity, select **Debug** to execute a debug run. For debug runs, the data flow activity will use the active debug cluster instead of spinning up a new cluster. This pipeline will take a little over a minute to execute. :::image type="content" source="media/lab-data-flow-data-share/pipeline2.png" alt-text="Portal pipeline 2"::: 1. Like the copy activity, the data flow has a special monitoring view accessed by the eyeglasses icon on completion of the activity. You have successfully created your data flow. Now it's time to run it in a pipel 1. In the monitoring view, you can see a simplified data flow graph along with the execution times and rows at each execution stage. If done correctly, you should have aggregated 49,999 rows into five rows in this activity. :::image type="content" source="media/lab-data-flow-data-share/pipeline4.png" alt-text="Portal pipeline 4":::-1. You can click a transformation to get additional details on its execution such as partitioning information and new/updated/dropped columns. +1. You can select a transformation to get additional details on its execution such as partitioning information and new/updated/dropped columns. :::image type="content" source="media/lab-data-flow-data-share/pipeline5.png" alt-text="Portal pipeline 5"::: Once you have created a data share, you'll then switch hats and become the *data > [!IMPORTANT] > Before running the script, you must set yourself as the Active Directory Admin for the SQL Server. -1. Open a new tab and navigate to the Azure portal. Copy the script provided to create a user in the database that you want to share data from. Do this by logging into the EDW database using Query Explorer (preview) using AAD authentication. +1. Open a new tab and navigate to the Azure portal. Copy the script provided to create a user in the database that you want to share data from. 
Do this by logging into the EDW database using Query Explorer (preview) using Azure AD authentication.
You'll need to modify the script so that the user created is contained within brackets. For example:
- create user [dataprovider-xxxx] from external login;
+ create user [dataprovider-xxxx] from external login;
exec sp_addrolemember db_owner, [dataprovider-xxxx];
1. Switch back to Azure Data Share where you were adding datasets to your data share.
Once you have created a data share, you'll then switch hats and become the *data
1. Select the data share that you created, titled **DataProvider**. You can navigate to it by selecting **Sent Shares** in **Data Share**.
-1. Click on Snapshot schedule. You can disable the snapshot schedule if you choose.
+1. Select **Snapshot schedule**. You can disable the snapshot schedule if you choose.
1. Next, select the **Datasets** tab. You can add additional datasets to this data share after it has been created.
Once you have created a data share, you'll then switch hats and become the *data
Now that we have reviewed our data share, we are ready to switch context and wear our data consumer hat.
-You should now have an Azure Data Share invitation in your inbox from Microsoft Azure. Launch Outlook Web Access (outlook.com) and log in using the credentials supplied for your Azure subscription.
+You should now have an Azure Data Share invitation in your inbox from Microsoft Azure. Launch Outlook Web Access (outlook.com) and log on using the credentials supplied for your Azure subscription.
-In the e-mail that you should have received, click on "View invitation >". At this point, you're going to be simulating the data consumer experience when accepting a data providers invitation to their data share.
+In the e-mail that you should have received, select "View invitation >". At this point, you're going to be simulating the data consumer experience when accepting a data provider's invitation to their data share.
 :::image type="content" source="media/lab-data-flow-data-share/email-invite.png" alt-text="Email invitation":::
You may be prompted to select a subscription. Make sure you select the subscription you have been working in for this lab.
-1. Click on the invitation titled *DataProvider*.
+1. Select the invitation titled *DataProvider*.
1. In this Invitation screen, you'll notice various details about the data share that you configured earlier as a data provider. Review the details and accept the terms of use if provided.
You may be prompted to select a subscription. Make sure you select the subscript
1. Select **Query editor (preview)**
-1. Use AAD authentication to log in to Query editor.
+1. Use Azure AD authentication to log on to Query editor.
1. Run the query provided in your data share (copied to clipboard in step 14). |
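If you'd rather trigger and monitor the lab's 'IngestAndTransformData' pipeline programmatically instead of using **Debug** in the portal, here is a minimal sketch with the Azure SDK for Python (`azure-identity`, `azure-mgmt-datafactory`). The subscription, resource group, and factory names are placeholder assumptions, and this starts a published pipeline run rather than a debug run.

```python
# Minimal sketch: trigger the lab's 'IngestAndTransformData' pipeline and poll its status.
# Replace <subscription-id>, <resource-group>, and <factory-name> with your own values.
import time

from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient

credential = DefaultAzureCredential()
adf_client = DataFactoryManagementClient(credential, "<subscription-id>")

# Start a run of the published pipeline (not a debug run).
run_response = adf_client.pipelines.create_run(
    "<resource-group>", "<factory-name>", "IngestAndTransformData"
)

# Poll until the run reaches a terminal state.
while True:
    run = adf_client.pipeline_runs.get(
        "<resource-group>", "<factory-name>", run_response.run_id
    )
    print(f"Pipeline run status: {run.status}")
    if run.status in ("Succeeded", "Failed", "Cancelled"):
        break
    time.sleep(20)
```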
data-factory | Load Azure Data Lake Storage Gen2 From Gen1 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/load-azure-data-lake-storage-gen2-from-gen1.md | This article shows you how to use the Data Factory copy data tool to copy data f ## Create a data factory -1. On the left menu, select **Create a resource** > **Data + Analytics** > **Data Factory**. - - :::image type="content" source="./media/quickstart-create-data-factory-portal/new-azure-data-factory-menu.png" alt-text="Screenshot showing the Data Factory selection in the New pane."::: +1. If you have not created your data factory yet, follow the steps in [Quickstart: Create a data factory by using the Azure portal and Azure Data Factory Studio](quickstart-create-data-factory-portal.md) to create one. After creating it, browse to the data factory in the Azure portal. -2. On the **New data factory** page, provide values for the fields that are shown in the following image: - - :::image type="content" source="./media/load-azure-data-lake-storage-gen2-from-gen1/new-azure-data-factory.png" alt-text="Screenshot showing the New Data factory page."::: - - * **Name**: Enter a globally unique name for your Azure data factory. If you receive the error "Data factory name \"LoadADLSDemo\" is not available," enter a different name for the data factory. For example, use the name _**yourname**_**ADFTutorialDataFactory**. Create the data factory again. For the naming rules for Data Factory artifacts, see [Data Factory naming rules](naming-rules.md). - * **Subscription**: Select your Azure subscription in which to create the data factory. - * **Resource Group**: Select an existing resource group from the drop-down list. You also can select the **Create new** option and enter the name of a resource group. To learn about resource groups, see [Use resource groups to manage your Azure resources](../azure-resource-manager/management/overview.md). - * **Version**: Select **V2**. - * **Location**: Select the location for the data factory. Only supported locations are displayed in the drop-down list. The data stores that are used by the data factory can be in other locations and regions. --3. Select **Create**. -4. After creation is finished, go to your data factory. You see the **Data Factory** home page as shown in the following image: - :::image type="content" source="./media/doc-common-process/data-factory-home-page.png" alt-text="Home page for the Azure Data Factory, with the Open Azure Data Factory Studio tile."::: -5. Select **Open** on the **Open Azure Data Factory Studio** tile to launch the Data Integration application in a separate tab. +1. Select **Open** on the **Open Azure Data Factory Studio** tile to launch the Data Integration application in a separate tab. ## Load data into Azure Data Lake Storage Gen2 As a best practice, conduct a performance POC with a representative sample datas 3. If you have maximized the performance of a single copy activity, but have not yet achieved the throughput upper limits of your environment, you can run multiple copy activities in parallel. -When you see significant number of throttling errors from [copy activity monitoring](copy-activity-monitoring.md#monitor-visually), it indicates you have reached the capacity limit of your storage account. ADF will retry automatically to overcome each throttling error to make sure there will not be any data lost, but too many retries impact your copy throughput as well. 
In such case, you are encouraged to reduce the number of copy activities running cocurrently to avoid significant amounts of throttling errors. If you have been using single copy activity to copy data, then you are encouraged to reduce the DIU.
+When you see a significant number of throttling errors from [copy activity monitoring](copy-activity-monitoring.md#monitor-visually), it indicates you have reached the capacity limit of your storage account. ADF retries automatically to overcome each throttling error and ensure that no data is lost, but too many retries can degrade your copy throughput as well. In such a case, reduce the number of copy activities running concurrently to avoid significant amounts of throttling errors. If you have been using a single copy activity to copy data, reduce the DIU.
### Delta data migration |
data-factory | Load Azure Data Lake Storage Gen2 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/load-azure-data-lake-storage-gen2.md | This article shows you how to use the Data Factory Copy Data tool to load data f ## Create a data factory -1. On the left menu, select **Create a resource** > **Integration** > **Data Factory**: - - :::image type="content" source="./media/doc-common-process/new-azure-data-factory-menu.png" alt-text="Data Factory selection in the "New" pane"::: +1. If you have not created your data factory yet, follow the steps in [Quickstart: Create a data factory by using the Azure portal and Azure Data Factory Studio](quickstart-create-data-factory-portal.md) to create one. After creating it, browse to the data factory in the Azure portal. -2. In the **New data factory** page, provide values for following fields: - - * **Name**: Enter a globally unique name for your Azure data factory. If you receive the error "Data factory name *YourDataFactoryName* is not available", enter a different name for the data factory. For example, you could use the name _**yourname**_**ADFTutorialDataFactory**. Try creating the data factory again. For the naming rules for Data Factory artifacts, see [Data Factory naming rules](naming-rules.md). - * **Subscription**: Select your Azure subscription in which to create the data factory. - * **Resource Group**: Select an existing resource group from the drop-down list, or select the **Create new** option and enter the name of a resource group. To learn about resource groups, see [Using resource groups to manage your Azure resources](../azure-resource-manager/management/overview.md). - * **Version**: Select **V2**. - * **Location**: Select the location for the data factory. Only supported locations are displayed in the drop-down list. The data stores that are used by data factory can be in other locations and regions. --3. Select **Create**. --4. After creation is complete, go to your data factory. You see the **Data Factory** home page as shown in the following image: - :::image type="content" source="./media/doc-common-process/data-factory-home-page.png" alt-text="Home page for the Azure Data Factory, with the Open Azure Data Factory Studio tile."::: - Select **Open** on the **Open Azure Data Factory Studio** tile to launch the Data Integration Application in a separate tab. +1. Select **Open** on the **Open Azure Data Factory Studio** tile to launch the Data Integration application in a separate tab. ## Load data into Azure Data Lake Storage Gen2 |
data-factory | Load Azure Data Lake Store | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/load-azure-data-lake-store.md | This article shows you how to use the Data Factory Copy Data tool to _load data ## Create a data factory -1. On the left menu, select **Create a resource** > **Analytics** > **Data Factory**: - - :::image type="content" source="./media/quickstart-create-data-factory-portal/new-azure-data-factory-menu.png" alt-text="Data Factory selection in the "New" pane"::: --2. In the **New data factory** page, provide values for the fields that are shown in the following image: - - :::image type="content" source="./media/load-data-into-azure-data-lake-store//new-azure-data-factory.png" alt-text="New data factory page"::: - - * **Name**: Enter a globally unique name for your Azure data factory. If you receive the error "Data factory name \"LoadADLSG1Demo\" is not available," enter a different name for the data factory. For example, you could use the name _**yourname**_**ADFTutorialDataFactory**. Try creating the data factory again. For the naming rules for Data Factory artifacts, see [Data Factory naming rules](naming-rules.md). - * **Subscription**: Select your Azure subscription in which to create the data factory. - * **Resource Group**: Select an existing resource group from the drop-down list, or select the **Create new** option and enter the name of a resource group. To learn about resource groups, see [Using resource groups to manage your Azure resources](../azure-resource-manager/management/overview.md). - * **Version**: Select **V2**. - * **Location**: Select the location for the data factory. Only supported locations are displayed in the drop-down list. The data stores that are used by data factory can be in other locations and regions. These data stores include Azure Data Lake Storage Gen1, Azure Storage, Azure SQL Database, and so on. --3. Select **Create**. -4. After creation is complete, go to your data factory. You see the **Data Factory** home page as shown in the following image: - +1. If you have not created your data factory yet, follow the steps in [Quickstart: Create a data factory by using the Azure portal and Azure Data Factory Studio](quickstart-create-data-factory-portal.md) to create one. After creating it, browse to the data factory in the Azure portal. + :::image type="content" source="./media/doc-common-process/data-factory-home-page.png" alt-text="Home page for the Azure Data Factory, with the Open Azure Data Factory Studio tile."::: - Select **Open** on the **Open Azure Data Factory Studio** tile to launch the Data Integration Application in a separate tab. +1. Select **Open** on the **Open Azure Data Factory Studio** tile to launch the Data Integration application in a separate tab. ## Load data into Data Lake Storage Gen1 This article shows you how to use the Data Factory Copy Data tool to _load data 2. In the **Properties** page, specify **CopyFromAmazonS3ToADLS** for the **Task name** field, and select **Next**: :::image type="content" source="./media/load-data-into-azure-data-lake-store/copy-data-tool-properties-page.png" alt-text="Properties page":::-3. In the **Source data store** page, click **+ Create new connection**: +3. 
In the **Source data store** page, select **+ Create new connection**: :::image type="content" source="./media/load-data-into-azure-data-lake-store/source-data-store-page.png" alt-text="Source data store page"::: This article shows you how to use the Data Factory Copy Data tool to _load data :::image type="content" source="./media/load-data-into-azure-data-lake-store/specify-binary-copy.png" alt-text="Screenshot shows the Choose the input file or folder where you can select Copy file recursively and Binary Copy."::: -7. In the **Destination data store** page, click **+ Create new connection**, and then select **Azure Data Lake Storage Gen1**, and select **Continue**: +7. In the **Destination data store** page, select **+ Create new connection**, and then select **Azure Data Lake Storage Gen1**, and select **Continue**: :::image type="content" source="./media/load-data-into-azure-data-lake-store/destination-data-storage-page.png" alt-text="Destination data store page"::: |
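The Copy Data tool creates the source and destination connections for you. As a rough alternative sketch, the same two linked services could be created with the Azure SDK for Python; the credential values, account URI, and linked service names below are placeholder assumptions, and service principal authentication for Data Lake Storage Gen1 is assumed.

```python
# Rough sketch: create the Amazon S3 source and Data Lake Storage Gen1 destination
# connections as linked services instead of through the Copy Data tool UI.
# All names, credentials, and URIs below are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient
from azure.mgmt.datafactory.models import (
    AmazonS3LinkedService,
    AzureDataLakeStoreLinkedService,
    LinkedServiceResource,
    SecureString,
)

adf_client = DataFactoryManagementClient(DefaultAzureCredential(), "<subscription-id>")
rg, factory = "<resource-group>", "<factory-name>"

# Source: Amazon S3, authenticated with an access key pair.
s3 = LinkedServiceResource(
    properties=AmazonS3LinkedService(
        access_key_id="<access-key-id>",
        secret_access_key=SecureString(value="<secret-access-key>"),
    )
)
adf_client.linked_services.create_or_update(rg, factory, "AmazonS3LinkedService", s3)

# Destination: Data Lake Storage Gen1, assumed here to use a service principal.
adls = LinkedServiceResource(
    properties=AzureDataLakeStoreLinkedService(
        data_lake_store_uri="https://<adls-account>.azuredatalakestore.net/webhdfs/v1",
        service_principal_id="<app-id>",
        service_principal_key=SecureString(value="<app-secret>"),
        tenant="<tenant-id>",
    )
)
adf_client.linked_services.create_or_update(rg, factory, "ADLSGen1LinkedService", adls)
```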
data-factory | Load Azure Sql Data Warehouse | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/load-azure-sql-data-warehouse.md | This article shows you how to use the Copy Data tool to _load data from Azure SQ ## Create a data factory -> [!NOTE] -> You can skip the creation of a new data factory if you wish to use the pipelines feature within your existing Synapse workspace to load the data. Azure Synapse embeds the functionality of Azure Data Factory within its pipelines feature. --1. On the left menu, select **Create a resource** > **Data + Analytics** > **Data Factory**: --2. On the **New data factory** page, provide values for following items: -- * **Name**: Enter *LoadSQLDWDemo* for name. The name for your data factory must be *globally unique. If you receive the error "Data factory name 'LoadSQLDWDemo' is not available", enter a different name for the data factory. For example, you could use the name _**yourname**_**ADFTutorialDataFactory**. Try creating the data factory again. For the naming rules for Data Factory artifacts, see [Data Factory naming rules](naming-rules.md). - * **Subscription**: Select your Azure subscription in which to create the data factory. - * **Resource Group**: Select an existing resource group from the drop-down list, or select the **Create new** option and enter the name of a resource group. To learn about resource groups, see [Using resource groups to manage your Azure resources](../azure-resource-manager/management/overview.md). - * **Version**: Select **V2**. - * **Location**: Select the location for the data factory. Only supported locations are displayed in the drop-down list. The data stores that are used by data factory can be in other locations and regions. These data stores include Azure Data Lake Store, Azure Storage, Azure SQL Database, and so on. --3. Select **Create**. -4. After creation is complete, go to your data factory. You see the **Data Factory** home page as shown in the following image: +1. If you have not created your data factory yet, follow the steps in [Quickstart: Create a data factory by using the Azure portal and Azure Data Factory Studio](quickstart-create-data-factory-portal.md) to create one. After creating it, browse to the data factory in the Azure portal. - :::image type="content" source="./media/doc-common-process/data-factory-home-page.png" alt-text="Home page for the Azure Data Factory, with the Open Azure Data Factory Studio tile."::: + :::image type="content" source="./media/doc-common-process/data-factory-home-page.png" alt-text="Home page for the Azure Data Factory, with the Open Azure Data Factory Studio tile."::: - Select **Open** on the **Open Azure Data Factory Studio** tile to launch the Data Integration Application in a separate tab. +1. Select **Open** on the **Open Azure Data Factory Studio** tile to launch the Data Integration application in a separate tab. ## Load data into Azure Synapse Analytics |
data-factory | Load Office 365 Data | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/load-office-365-data.md | This article shows you how to use the Data Factory _load data from Microsoft 365 ## Create a data factory -1. On the left menu, select **Create a resource** > **Analytics** > **Data Factory**: - - :::image type="content" source="./media/quickstart-create-data-factory-portal/new-azure-data-factory-menu.png" alt-text="Data Factory selection in the "New" pane"::: +1. If you have not created your data factory yet, follow the steps in [Quickstart: Create a data factory by using the Azure portal and Azure Data Factory Studio](quickstart-create-data-factory-portal.md) to create one. After creating it, browse to the data factory in the Azure portal. -2. In the **New data factory** page, provide values for the fields that are shown in the following image: - - :::image type="content" source="./media/load-office-365-data/new-azure-data-factory.png" alt-text="New data factory page"::: - - * **Name**: Enter a globally unique name for your Azure data factory. If you receive the error "Data factory name *LoadFromOffice365Demo* is not available", enter a different name for the data factory. For example, you could use the name _**yourname**_**LoadFromOffice365Demo**. Try creating the data factory again. For the naming rules for Data Factory artifacts, see [Data Factory naming rules](naming-rules.md). - * **Subscription**: Select your Azure subscription in which to create the data factory. - * **Resource Group**: Select an existing resource group from the drop-down list, or select the **Create new** option and enter the name of a resource group. To learn about resource groups, see [Using resource groups to manage your Azure resources](../azure-resource-manager/management/overview.md). - * **Version**: Select **V2**. - * **Location**: Select the location for the data factory. Only supported locations are displayed in the drop-down list. The data stores that are used by data factory can be in other locations and regions. These data stores include Azure Data Lake Store, Azure Storage, Azure SQL Database, and so on. --3. Select **Create**. -4. After creation is complete, go to your data factory. You see the **Data Factory** home page as shown in the following image: - :::image type="content" source="./media/doc-common-process/data-factory-home-page.png" alt-text="Home page for the Azure Data Factory, with the Open Azure Data Factory Studio tile."::: -5. Select **Open** on the **Open Azure Data Factory Studio** tile to launch the Data Integration Application in a separate tab. +1. Select **Open** on the **Open Azure Data Factory Studio** tile to launch the Data Integration application in a separate tab. ## Create a pipeline This article shows you how to use the Data Factory _load data from Microsoft 365 ### Configure source -1. Go to the pipeline > **Source tab**, click **+ New** to create a source dataset. +1. Go to the pipeline > **Source tab**, select **+ New** to create a source dataset. 2. In the New Dataset window, select **Microsoft 365 (Office 365)**, and then select **Continue**. -3. You are now in the copy activity configuration tab. Click on the **Edit** button next to the Microsoft 365 (Office 365) dataset to continue the data configuration. +3. You are now in the copy activity configuration tab. Select on the **Edit** button next to the Microsoft 365 (Office 365) dataset to continue the data configuration. 
:::image type="content" source="./media/load-office-365-data/transition-to-edit-dataset.png" alt-text="Config Microsoft 365 (Office 365) dataset general.":::
4. You see a new tab opened for Microsoft 365 (Office 365) dataset. In the **General tab** at the bottom of the Properties window, enter "SourceOffice365Dataset" for Name.
-5. Go to the **Connection tab** of the Properties window. Next to the Linked service text box, click **+ New**.
+5. Go to the **Connection tab** of the Properties window. Next to the Linked service text box, select **+ New**.
6. In the New Linked Service window, enter "Office365LinkedService" as name, enter the service principal ID and service principal key, then test connection and select **Create** to deploy the linked service.
This article shows you how to use the Data Factory _load data from Microsoft 365
9. You are required to choose one of the date filters and provide the start time and end time values.
-10. Click on the **Import Schema** tab to import the schema for Message dataset.
+10. Select the **Import Schema** tab to import the schema for Message dataset.
 :::image type="content" source="./media/load-office-365-data/edit-source-properties.png" alt-text="Config Microsoft 365 (Office 365) dataset schema.":::
This article shows you how to use the Data Factory _load data from Microsoft 365
2. In the New Dataset window, notice that only the supported destinations are selected when copying from Microsoft 365 (Office 365). Select **Azure Blob Storage**, select Binary format, and then select **Continue**. In this tutorial, you copy Microsoft 365 (Office 365) data into an Azure Blob Storage.
-3. Click on **Edit** button next to the Azure Blob Storage dataset to continue the data configuration.
+3. Select the **Edit** button next to the Azure Blob Storage dataset to continue the data configuration.
4. On the **General tab** of the Properties window, in Name, enter "OutputBlobDataset".
To see activity runs associated with the pipeline run, select the **View Activit
 :::image type="content" source="./media/load-office-365-data/activity-status.png" alt-text="Monitor activity":::
-If this is the first time you are requesting data for this context (a combination of which data table is being access, which destination account is the data being loaded into, and which user identity is making the data access request), you will see the copy activity status as **In Progress**, and only when you click into "Details" link under Actions will you see the status as **RequesetingConsent**. A member of the data access approver group needs to approve the request in the Privileged Access Management before the data extraction can proceed.
+If this is the first time you are requesting data for this context (a combination of which data table is being accessed, which destination account is the data being loaded into, and which user identity is making the data access request), you will see the copy activity status as **In Progress**, and only when you select the "Details" link under Actions will you see the status as **RequestingConsent**. A member of the data access approver group needs to approve the request in Privileged Access Management before the data extraction can proceed.
 _Status as requesting consent:_
 :::image type="content" source="./media/load-office-365-data/activity-details-request-consent.png" alt-text="Activity execution details - request consent"::: |
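To check on a run like this outside the monitoring UI, a minimal sketch with the Azure SDK for Python is shown below; it lists the activity runs for a given pipeline run ID and prints their statuses and error details. The subscription, resource group, factory name, and run ID are placeholder assumptions.

```python
# Minimal sketch: list the activity runs for a pipeline run and print their
# statuses and errors. Resource names and the run ID are placeholders.
from datetime import datetime, timedelta

from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient
from azure.mgmt.datafactory.models import RunFilterParameters

adf_client = DataFactoryManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Query activity runs updated within roughly the last day.
filter_params = RunFilterParameters(
    last_updated_after=datetime.utcnow() - timedelta(days=1),
    last_updated_before=datetime.utcnow() + timedelta(hours=1),
)
activity_runs = adf_client.activity_runs.query_by_pipeline_run(
    "<resource-group>", "<factory-name>", "<pipeline-run-id>", filter_params
)
for activity in activity_runs.value:
    print(activity.activity_name, activity.status, activity.error)
```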
data-factory | Load Sap Bw Data | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/load-sap-bw-data.md | |
data-factory | Manage Azure Ssis Integration Runtime | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/manage-azure-ssis-integration-runtime.md | description: Learn how to reconfigure an Azure-SSIS integration runtime in Azure Previously updated : 02/17/2022 Last updated : 08/12/2022 |
data-factory | Managed Virtual Network Private Endpoint | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/managed-virtual-network-private-endpoint.md | Unlike copy activity, pipeline and external activity have a default time to live ### Comparison of different TTL The following table lists the differences between different types of TTL: -| | Interactive authoring | Copy compute scale | Pipeline & External compute scale | +| Feature | Interactive authoring | Copy compute scale | Pipeline & External compute scale | | -- | - | -- | | | When to take effect | Immediately after enablement | First activity execution | First activity execution | | Can be disabled | Y | Y | N | |
data-factory | Monitor Configure Diagnostics | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/monitor-configure-diagnostics.md | -1. In the Azure portal, go to **Monitor**. Select **Settings** > **Diagnostics settings**. --1. Select the data factory for which you want to set a diagnostic setting. --1. If no settings exist on the selected data factory, you're prompted to create a setting. Select **Turn on diagnostics**. -- :::image type="content" source="media/data-factory-monitor-oms/monitor-oms-image1.png" alt-text="Screenshot that shows creating a diagnostic setting if no settings exist."::: -- If there are existing settings on the data factory, you see a list of settings already configured on the data factory. Select **Add diagnostic setting**. +1. In the Azure portal, navigate to your data factory and select **Diagnostics** on the left navigation pane to see the diagnostics settings. If there are existing settings on the data factory, you see a list of settings already configured. Select **Add diagnostic setting**. :::image type="content" source="media/data-factory-monitor-oms/add-diagnostic-setting.png" alt-text="Screenshot that shows adding a diagnostic setting if settings exist."::: |
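As a hedged sketch of the same configuration done programmatically, the following uses the Azure SDK for Python (`azure-mgmt-monitor`) to create a diagnostic setting that sends a data factory's pipeline, activity, and trigger run logs to a Log Analytics workspace. The resource IDs, setting name, and category list are assumptions to adapt; the portal flow described above is the documented path.

```python
# Hedged sketch: create a diagnostic setting for a data factory programmatically.
# The resource IDs, setting name, and log categories below are placeholder assumptions.
from azure.identity import DefaultAzureCredential
from azure.mgmt.monitor import MonitorManagementClient
from azure.mgmt.monitor.models import (
    DiagnosticSettingsResource,
    LogSettings,
    MetricSettings,
)

monitor_client = MonitorManagementClient(DefaultAzureCredential(), "<subscription-id>")

factory_id = (
    "/subscriptions/<subscription-id>/resourceGroups/<resource-group>"
    "/providers/Microsoft.DataFactory/factories/<factory-name>"
)
workspace_id = (
    "/subscriptions/<subscription-id>/resourceGroups/<resource-group>"
    "/providers/Microsoft.OperationalInsights/workspaces/<workspace-name>"
)

# Send run logs and all metrics to the Log Analytics workspace.
setting = DiagnosticSettingsResource(
    workspace_id=workspace_id,
    logs=[
        LogSettings(category="PipelineRuns", enabled=True),
        LogSettings(category="ActivityRuns", enabled=True),
        LogSettings(category="TriggerRuns", enabled=True),
    ],
    metrics=[MetricSettings(category="AllMetrics", enabled=True)],
)
monitor_client.diagnostic_settings.create_or_update(
    resource_uri=factory_id, name="adf-diagnostics", parameters=setting
)
```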
data-factory | Monitor Integration Runtime | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/monitor-integration-runtime.md | description: Learn how to monitor different types of integration runtime in Azur Previously updated : 10/27/2021 Last updated : 08/12/2022 The **NODE SIZE** informational tile shows the SKU (SSIS edition_VM tier_VM seri The **RUNNING / REQUESTED NODE(S)** informational tile compares the number of nodes currently running to the total number of nodes previously requested for your Azure-SSIS IR. -The **DUAL STANDBY PAIR / ROLE** informational tile shows the name of your dual standby Azure-SSIS IR pair that works in sync with Azure SQL Database managed instance failover group for business continuity and disaster recovery (BCDR) and the current primary/secondary role of your Azure-SSIS IR. When SSISDB failover occurs, your primary and secondary Azure-SSIS IRs will swap roles (see [Configuring your Azure-SSIS IR for BCDR](./configure-bcdr-azure-ssis-integration-runtime.md)). +The **DUAL STANDBY PAIR / ROLE** informational tile shows the name of your dual standby Azure-SSIS IR pair that works in sync with Azure SQL Managed Instance failover group for business continuity and disaster recovery (BCDR) and the current primary/secondary role of your Azure-SSIS IR. When SSISDB failover occurs, your primary and secondary Azure-SSIS IRs will swap roles (see [Configuring your Azure-SSIS IR for BCDR](./configure-bcdr-azure-ssis-integration-runtime.md)). The functional tiles are described in more details below. |
data-factory | Monitor Programmatically | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/monitor-programmatically.md | description: Learn how to monitor a pipeline in a data factory by using differen Previously updated : 01/26/2022 Last updated : 08/12/2022 |
data-factory | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/policy-reference.md | |
data-factory | Tutorial Deploy Ssis Virtual Network | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-deploy-ssis-virtual-network.md | After you've configured a virtual network, you can join your Azure-SSIS IR to th :::image type="content" source="media/join-azure-ssis-integration-runtime-virtual-network/stop-ir-button.png" alt-text="Stop the IR"::: -1. In the **Integration Runtimes** list, in the **Actions** column, select the **Edit** button for your Azure-SSIS IR. -- :::image type="content" source="media/join-azure-ssis-integration-runtime-virtual-network/integration-runtime-edit.png" alt-text="Edit the integration runtime"::: +1. In the **Integration Runtimes** list, in the **Actions** column, select your Azure-SSIS IR to edit it. 1. On the **Integration runtime setup** pane, advance through the **General settings** and **Deployment settings** pages by selecting the **Next** button. |
data-lake-analytics | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-analytics/policy-reference.md | Title: Built-in policy definitions for Azure Data Lake Analytics description: Lists Azure Policy built-in policy definitions for Azure Data Lake Analytics. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/08/2022 Last updated : 08/16/2022 |
data-lake-store | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-store/policy-reference.md | Title: Built-in policy definitions for Azure Data Lake Storage Gen1 description: Lists Azure Policy built-in policy definitions for Azure Data Lake Storage Gen1. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/08/2022 Last updated : 08/16/2022 |
databox-online | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/policy-reference.md | Title: Built-in policy definitions for Azure Stack Edge description: Lists Azure Policy built-in policy definitions for Azure Stack Edge. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/08/2022 Last updated : 08/16/2022 |
databox | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox/policy-reference.md | Title: Built-in policy definitions for Azure Data Box description: Lists Azure Policy built-in policy definitions for Azure Data Box. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/08/2022 Last updated : 08/16/2022 |
ddos-protection | Manage Permissions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/manage-permissions.md | To enable DDoS protection for a virtual network, your account must also be assig Creation of more than one plan is not required for most organizations. A plan cannot be moved between subscriptions. If you want to change the subscription a plan is in, you have to delete the existing plan and create a new one. -For customers who have various subscriptions, and who want to ensure a single plan is deployed across their tenant for cost control, you can use Azure Policy to [restrict creation of Azure DDoS Protection Standard plans](https://aka.ms/ddosrestrictplan). This policy will block the creation of any DDoS plans, unless the subscription has been previously marked as an exception. This policy will also show a list of all subscriptions that have a DDoS plan deployed but should not, marking them as out of compliance. +For customers who have various subscriptions, and who want to ensure a single plan is deployed across their tenant for cost control, you can use Azure Policy to [restrict creation of Azure DDoS Protection Standard plans](https://github.com/Azure/Azure-Network-Security/tree/master/Azure%20DDoS%20Protection/Azure%20Policy%20Definitions/Restrict%20creation%20of%20Azure%20DDoS%20Protection%20Standard%20Plans%20with%20Azure%20Policy). This policy will block the creation of any DDoS plans, unless the subscription has been previously marked as an exception. This policy will also show a list of all subscriptions that have a DDoS plan deployed but should not, marking them as out of compliance. ## Next steps |
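Before assigning such a policy, it can help to inventory which subscriptions already contain a DDoS protection plan. Below is a small sketch with the Azure SDK for Python, assuming your credential has read access to the subscriptions you want to audit.

```python
# Small sketch: list DDoS protection plans across every subscription the
# credential can read, to see where plans already exist before enforcing policy.
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient
from azure.mgmt.resource import SubscriptionClient

credential = DefaultAzureCredential()

for sub in SubscriptionClient(credential).subscriptions.list():
    network_client = NetworkManagementClient(credential, sub.subscription_id)
    for plan in network_client.ddos_protection_plans.list():
        print(f"{sub.display_name}: {plan.name} ({plan.id})")
```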
ddos-protection | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/policy-reference.md | |
defender-for-cloud | Auto Deploy Azure Monitoring Agent | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/auto-deploy-azure-monitoring-agent.md | The Azure Monitor Agent requires additional extensions. The ASA extension, which ### Additional security events collection -When you auto-provision the Log Analytics agent in Defender for Cloud, you can choose to collect additional security events to the workspace. When you auto-provision the Log Analytics agent in Defender for Cloud, the option to collect additional security events to the workspace isn't available. Defender for Cloud doesn't rely on these security events, but they can be helpful for investigations through Microsoft Sentinel. +When you auto-provision the Log Analytics agent in Defender for Cloud, you can choose to collect additional security events to the workspace. When you auto-provision the Azure Monitor agent in Defender for Cloud, the option to collect additional security events to the workspace isn't available. Defender for Cloud doesn't rely on these security events, but they can be helpful for investigations through Microsoft Sentinel. -If you want to collect security events when you auto-provision the Azure Monitor Agent, you can create a [Data Collection Rule](/azure-monitor/essentials/data-collection-rule-overview) to collect the required events. +If you want to collect security events when you auto-provision the Azure Monitor Agent, you can create a [Data Collection Rule](/azure/azure-monitor/essentials/data-collection-rule-overview) to collect the required events. Like for Log Analytics workspaces, Defender for Cloud users are eligible for [500-MB of free data](enhanced-security-features-overview.md#faqpricing-and-billing) daily on defined data types that include security events. |
defender-for-cloud | Defender For Sql Usage | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-sql-usage.md | Last updated 07/28/2022 # Enable Microsoft Defender for SQL servers on machines -This Microsoft Defender plan detects anomalous activities indicating unusual and potentially harmful attempts to access or exploit databases. +This Microsoft Defender plan detects anomalous activities indicating unusual and potentially harmful attempts to access or exploit databases on the SQL server. You'll see alerts when there are suspicious database activities, potential vulnerabilities, or SQL injection attacks, and anomalous database access and query patterns. Microsoft Defender for SQL servers on machines extends the protections for your - On-premises SQL servers: - - [Azure Arc-enabled SQL Server (preview)](/sql/sql-server/azure-arc/overview) + - [Azure Arc-enabled SQL Server](/sql/sql-server/azure-arc/overview) - [SQL Server running on Windows machines without Azure Arc](../azure-monitor/agents/agent-windows.md) |
defender-for-cloud | Enable Data Collection | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/enable-data-collection.md | We recommend enabling auto provisioning, but it's disabled by default. ## How does auto provisioning work? -Defender for Cloud's auto provisioning settings has a toggle for each type of supported extension. When you enable auto provisioning of an extension, you assign the appropriate **Deploy if not exists** policy. This policy type ensures the extension is provisioned on all existing and future resources of that type. +Defender for Cloud's auto provisioning settings page has a toggle for each type of supported extension. When you enable auto provisioning of an extension, you assign the appropriate **Deploy if not exists** policy. This policy type ensures the extension is provisioned on all existing and future resources of that type. > [!TIP]-> Learn more about Azure Policy effects including deploy if not exists in [Understand Azure Policy effects](../governance/policy/concepts/effects.md). +> Learn more about Azure Policy effects including **Deploy if not exists** in [Understand Azure Policy effects](../governance/policy/concepts/effects.md). <a name="auto-provision-mma"></a> |
defender-for-cloud | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/policy-reference.md | Title: Built-in policy definitions for Microsoft Defender for Cloud description: Lists Azure Policy built-in policy definitions for Microsoft Defender for Cloud. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/08/2022 Last updated : 08/16/2022 # Azure Policy built-in definitions for Microsoft Defender for Cloud |
defender-for-cloud | Quickstart Onboard Aws | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/quickstart-onboard-aws.md | You can check out the following blogs: ## Next steps -Connecting your AWS account is part of the multicloud experience available in Microsoft Defender for Cloud. For related information, see the following page: +Connecting your AWS account is part of the multicloud experience available in Microsoft Defender for Cloud. For related information, see the following pages: - [Security recommendations for AWS resources - a reference guide](recommendations-reference-aws.md). - [Connect your GCP projects to Microsoft Defender for Cloud](quickstart-onboard-gcp.md)+- [Troubleshoot your multicloud connectors](troubleshooting-guide.md#troubleshooting-the-native-multicloud-connector) |
defender-for-cloud | Quickstart Onboard Gcp | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/quickstart-onboard-gcp.md | Yes. To create, edit, or delete Defender for Cloud cloud connectors with a REST ## Next steps -Connecting your GCP project is part of the multicloud experience available in Microsoft Defender for Cloud. For related information, see the following page: +Connecting your GCP project is part of the multicloud experience available in Microsoft Defender for Cloud. For related information, see the following pages: - [Connect your AWS accounts to Microsoft Defender for Cloud](quickstart-onboard-aws.md)-- [Google Cloud resource hierarchy](https://cloud.google.com/resource-manager/docs/cloud-platform-resource-hierarchy)--Learn about the Google Cloud resource hierarchy in Google's online docs+- [Google Cloud resource hierarchy](https://cloud.google.com/resource-manager/docs/cloud-platform-resource-hierarchy) - Learn about the Google Cloud resource hierarchy in Google's online docs +- [Troubleshoot your multicloud connectors](troubleshooting-guide.md#troubleshooting-the-native-multicloud-connector) |
defender-for-iot | How To Manage Subscriptions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-manage-subscriptions.md | Delete all sensors that are associated with the subscription prior to removing t > [!NOTE]
> To remove Enterprise IoT only from your plan, cancel your plan from Microsoft Defender for Endpoint. For more information, see the [Defender for Endpoint documentation](/microsoft-365/security/defender-endpoint/enable-microsoft-defender-for-iot-integration#cancel-your-defender-for-iot-plan).
+> [!IMPORTANT]
+> If you are a Microsoft Defender for IoT customer and also have a subscription to Microsoft Defender for Endpoint, the data collected by Microsoft Defender for IoT will automatically populate in your Microsoft Defender for Endpoint instance as well. Customers who want to delete their data from Defender for IoT must also delete their data from Defender for Endpoint.
+
 ## Move existing sensors to a different subscription
Business considerations may require that you apply your existing IoT sensors to a different subscription than the one you're currently using. To do this, you'll need to onboard a new plan and register the sensors under the new subscription, and then remove them from the old subscription. This process may include some downtime, and historic data isn't migrated. |
defender-for-iot | References Work With Defender For Iot Apis | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/references-work-with-defender-for-iot-apis.md | This section describes on-premises management console APIs for: ### Version 3 -- [ServiceNow Integration API - ΓÇ£/external/v3/integration/ (Preview)](#servicenow-integration-apiexternalv3integration-preview)+- [Request devices - /external/v3/integration/devices/{timestamp}](#request-devicesexternalv3integrationdevicestimestamp) ++- [Request device connection events - /external/v3/integration/connections/{timestamp}](#request-device-connection-eventsexternalv3integrationconnectionstimestamp) ++- [Request device data by device ID - /external/v3/integration/device/{deviceId}](#request-device-data-by-device-idexternalv3integrationdevicedeviceid) ++- [Request deleted devices - /external/v3/integration/deleteddevices/{timestamp}](#request-deleted-devicesexternalv3integrationdeleteddevicestimestamp) ++- [Request sensor data - external/v3/integration/sensors](#request-sensor-dataexternalv3integrationsensors) ++- [Request all device CVEs - /external/v3/integration/devicecves/{timestamp}](#request-all-device-cvesexternalv3integrationdevicecvestimestamp) All parameters in Version 3 APIs are optional. Example: |-|-|-| |GET|`curl -k -H "Authorization: <AUTH_TOKEN>" 'https://<IP_ADDRESS>/external/v2/alerts/pcap/<ID>'`|`curl -k -H "Authorization: 1234b734a9244d54ab8d40aedddcabcd" 'https://10.1.0.1/external/v2/alerts/pcap/1'` -### ServiceNow Integration API - ΓÇ£/external/v3/integration/ (Preview) +### Request devices - /external/v3/integration/devices/{timestamp} -The below API's can be used with the ServiceNow integration via the ServiceNow's Service Graph Connector for Defender for IoT. +This API returns data about all devices that were updated after the given timestamp. -### devices +#### Method -This API returns data about all devices that were updated after the given timestamp. +- **GET** ++#### Path parameters -#### Request +- **timestamp** ΓÇô the time from which updates are required, only later updates will be returned. -- Path: ΓÇ£/devices/{timestamp}ΓÇ¥-- Method type: GET-- Path parameters:- - ΓÇ£**timestamp**ΓÇ¥ ΓÇô the time from which updates are required, only later updates will be returned. +#### Query parameters -- Query parameters:- - ΓÇ£**sensorId**ΓÇ¥ - use this parameter to get only devices seen by a specific sensor. The ID should be taken from the results of the Sensors API. - - ΓÇ£**notificationType**ΓÇ¥ - should be a number, from the following mapping: - - 0 ΓÇô both updated and new devices (default). - - 1 ΓÇô only new devices. - - 2 ΓÇô only updated devices. - - ΓÇ£**page**ΓÇ¥ - the page number, from the result set (first page is 0, default value is 0) - - ΓÇ£**size**ΓÇ¥ - the page size (default value is 50) +- **sensorId** - use this parameter to get only devices seen by a specific sensor. The ID should be taken from the results of the [sensor](#request-sensor-dataexternalv3integrationsensors) API. +- **notificationType** - should be a number, from the following mapping: + - **0** ΓÇô both updated and new devices (default). + - **1** ΓÇô only new devices. + - **2** ΓÇô only updated devices. +- **page** - the page number, from the result set (first page is 0, default value is 0). +- **size** - the page size (default value is 50). -#### Response +#### Response type -- Type: JSON-- Structure:- - ΓÇ£**u_count**ΓÇ¥ - amount of object in the full result sets, including all pages. 
- - ΓÇ£**u_devices**ΓÇ¥ - array of device objects. Each object is defined with the parameters listed in the [device](#device) API. +- **JSON** -### Connections +#### Response structure -This API returns data about all device connections that were updated after the given timestamp. +- **u_count** - amount of objects in the full result sets, including all pages. +- **u_devices** - array of device objects. Each object is defined with the parameters listed in the [device ID](#request-device-data-by-device-idexternalv3integrationdevicedeviceid) API. -#### Request +### Request device connection events - /external/v3/integration/connections/{timestamp} -- Path: ΓÇ£/connections/{timestamp}ΓÇ¥-- Method type: GET-- Path parameters:- - ΓÇ£**timestamp**ΓÇ¥ ΓÇô the time from which updates are required, only later updates will be returned. -- Query parameters:- - ΓÇ£**page**ΓÇ¥ - the page number, from the result set (default value is 1) - - ΓÇ£**size**ΓÇ¥ - the page size (default value is 50) +This API returns data about all device connection events that were updated after the given timestamp. -#### Response +#### Method ++- **GET** ++#### Path parameters ++- **timestamp** ΓÇô the time from which updates are required, only later updates will be returned. ++#### Query parameters ++- **page** - the page number, from the result set (default value is 1). +- **size** - the page size (default value is 50). -- Type: JSON-- Structure: - - ΓÇ£**u_count**ΓÇ¥ - amount of object in the full result sets, including all pages. - - ΓÇ£**u_connections**ΓÇ¥ - array of - - ΓÇ£**u_src_device_id**ΓÇ¥ - the ID of the source device. - - ΓÇ£**u_dest_device_id**ΓÇ¥ - the ID of the destination device. - - ΓÇ£**u_connection_type**ΓÇ¥ - one of the following: - - ΓÇ£**One Way**ΓÇ¥ - - ΓÇ£**Two Way**ΓÇ¥ - - ΓÇ£**Multicast**ΓÇ¥ +#### Response type ++- **JSON** ++#### Response structure ++- **u_count** - amount of object in the full result sets, including all pages. +- **u_connections** - array of: + - **u_src_device_id** - the ID of the source device. + - **u_dest_device_id** - the ID of the destination device. + - **u_connection_type** - one of the following: + - **One Way** + - **Two Way** + - **Multicast** -### device +### Request device data by device ID - /external/v3/integration/device/{deviceId} This API returns data about a specific device per a given device ID. -#### Request +#### Method -- Path: ΓÇ£/device/{deviceId}ΓÇ¥-- Method type: GET-- Path parameters:- - ΓÇ£**deviceId**ΓÇ¥ ΓÇô the ID of the requested device. +- **GET** ++#### Path parameters ++- **deviceId** ΓÇô the ID of the requested device. #### Response -- Type: JSON-- Structure:- - ΓÇ£**u_id**ΓÇ¥ - the internal ID of the device. - - ΓÇ£**u_vendor**ΓÇ¥ - the name of the vendor. - - ΓÇ£**u_mac_address_objects**ΓÇ¥ - array of - - ΓÇ£**u_mac_address**ΓÇ¥ - mac address of the device. - - ΓÇ£**u_ip_address_objects**ΓÇ¥ - array of - - ΓÇ£**u_ip_address**ΓÇ¥ - IP address of the device. - - ΓÇ£**u_guessed_mac_addresses**ΓÇ¥ - array of - - ΓÇ£**u_mac_address**ΓÇ¥ - guessed mac address. - - ΓÇ£**u_name**ΓÇ¥ - the name of the device. - - ΓÇ£**u_last_activity**ΓÇ¥ - timestamp of the last time the device was active. - - ΓÇ£**u_first_discovered**ΓÇ¥ - timestamp of the discovery time of the device. - - ΓÇ£**u_last_update**ΓÇ¥ - timestamp of the last update time of the device. - - ΓÇ£**u_vlans**ΓÇ¥ - array of - - ΓÇ£**u_vlan**ΓÇ¥ - vlan in which the device is in. 
- - ΓÇ£**u_device_type**ΓÇ¥ - - - ΓÇ£**u_name**ΓÇ¥ - the device type - - ΓÇ£**u_purdue_layer**ΓÇ¥ - the default purdue layer for this device type. - - ΓÇ£**u_category**ΓÇ¥ - will be one of the following: - - ΓÇ£**IT**ΓÇ¥ - - ΓÇ£**ICS**ΓÇ¥ - - ΓÇ£**IoT**ΓÇ¥ - - ΓÇ£**Network**ΓÇ¥ - - ΓÇ£**u_operating_system**ΓÇ¥ - the device operating system. - - ΓÇ£**u_protocol_objects**ΓÇ¥ - array of - - ΓÇ£**u_protocol**ΓÇ¥ - protocol the device uses. - - ΓÇ£**u_purdue_layer**ΓÇ¥ - the purdue layer that was manually set by the user. - - ΓÇ£**u_sensor_ids**ΓÇ¥ - array of - - ΓÇ£**u_sensor_id**ΓÇ¥ - the ID of the sensor that saw the device. - - ΓÇ£**u_device_urls**ΓÇ¥ - array of - - ΓÇ£**u_device_url**ΓÇ¥ the URL to view the device in the sensor. - - ΓÇ£**u_firmwares**ΓÇ¥ - array of - - ΓÇ£**u_address**ΓÇ¥ - - ΓÇ£**u_module_address**ΓÇ¥ - - ΓÇ£**u_serial**ΓÇ¥ - - ΓÇ£**u_model**ΓÇ¥ - - ΓÇ£**u_version**ΓÇ¥ - - ΓÇ£**u_additional_data**" --### Deleted devices --#### Request --- Path: ΓÇ£/deleteddevices/{timestamp}ΓÇ¥-- Method type: GET-- Path parameters:- - ΓÇ£**timestamp**ΓÇ¥ ΓÇô the time from which updates are required, only later updates will be returned. +- **JSON** ++#### Response Structure ++- **u_id** - the internal ID of the device. +- **u_vendor** - the name of the vendor. +- **u_mac_address_objects** - array of: + - **u_mac_address** - mac address of the device. +- **u_ip_address_objects** - array of: + - **u_ip_address** - IP address of the device. + - **u_guessed_mac_addresses** - array of: + - ΓÇ£**u_mac_address** - guessed mac address. +- **u_name** - the name of the device. +- **u_last_activity** - timestamp of the last time the device was active. +- **u_first_discovered** - timestamp of the discovery time of the device. +- **u_last_update** - timestamp of the last update time of the device. +- **u_vlans** - array of: + - **u_vlan** - vlan in which the device is in. +- **u_device_type** - array of: + - **u_name** - the device type. + - **u_purdue_layer** - the default purdue layer for this device type. + - **u_category** - will be one of the following: + - **IT** + - **ICS** + - **IoT** + - **Network** +- **u_operating_system** - the device operating system. +- **u_protocol_objects** - array of: + - **u_protocol** - protocol the device uses. +- **u_purdue_layer** - the purdue layer that was manually set by the user. +- **u_sensor_ids** - array of: + - **u_sensor_id** - the ID of the sensor that saw the device. +- **u_device_urls** - array of: + - **u_device_url** the URL to view the device in the sensor. +- **u_firmwares** - array of: + - **u_address** + - **u_module_address** + - **u_serial** + - **u_model** + - **u_version** + - **u_additional_data** ++### Request deleted devices - /external/v3/integration/deleteddevices/{timestamp} ++This API returns data about deleted devices after the given timestamp. ++#### Method ++- **GET** ++#### Path parameters ++- **timestamp** ΓÇô the time from which updates are required, only later updates will be returned. #### Response -- Type: JSON-- Structure:- - Array of - - ΓÇ£**u_id**ΓÇ¥ - the ID of the deleted device. +- **JSON** ++#### Response structure: + +Array of: +- **u_id** - the ID of the deleted device. ++### Request sensor data - external/v3/integration/sensors -### sensors +This API returns data about the sensor. -#### Request +#### Method -- Path: ΓÇ£/sensorsΓÇ¥-- Method type: GET+- **GET** #### Response -- Type: JSON-- Structure:- - Array of - - ΓÇ£**u_id**ΓÇ¥ - internal sensor ID, to be used in the devices API. 
- - ΓÇ£**u_name**ΓÇ¥ - the name of the appliance. - - ΓÇ£**u_connection_state**ΓÇ¥ - connectivity with the CM state. One of the following: - - ΓÇ£**SYNCED**ΓÇ¥ - Connection is successful. - - ΓÇ£**OUT_OF_SYNC**ΓÇ¥ - Management console cannot process data received from Sensor. - - ΓÇ£**TIME_DIFF_OFFSET**ΓÇ¥ - Time drift detected. management console has been disconnected from Sensor. - - ΓÇ£**DISCONNECTED**ΓÇ¥ - Sensor not communicating with management console. Check network connectivity. - - ΓÇ£**u_interface_address**ΓÇ¥ - the network address of the appliance. - - ΓÇ£**u_version**ΓÇ¥ - string representation of the sensorΓÇÖs version. - - ΓÇ£**u_alert_count**ΓÇ¥ - number of alerts found by the sensor. - - ΓÇ£**u_device_count**ΓÇ¥ - number of devices discovered by the sensor. - - ΓÇ£**u_unhandled_alert_count**ΓÇ¥ - number of unhandled alerts in the sensor. - - ΓÇ£**u_is_activated**ΓÇ¥ - is the alert activated. - - ΓÇ£**u_data_intelligence_version**ΓÇ¥ - string representation of the data intelligence installed in the sensor. - - ΓÇ£**u_remote_upgrade_stage**ΓÇ¥ - the state of the remote upgrade. One of the following: - - "**UPLOADING**" - - "**PREPARE_TO_INSTALL**" - - "**STOPPING_PROCESSES**" - - "**BACKING_UP_DATA**" - - "**TAKING_SNAPSHOT**" - - "**UPDATING_CONFIGURATION**" - - "**UPDATING_DEPENDENCIES**" - - "**UPDATING_LIBRARIES**" - - "**PATCHING_DATABASES**" - - "**STARTING_PROCESSES**" - - "**VALIDATING_SYSTEM_SANITY**" - - "**VALIDATION_SUCCEEDED_REBOOTING**" - - "**SUCCESS**" - - "**FAILURE**" - - "**UPGRADE_STARTED**" - - "**STARTING_INSTALLATION**" - - "**INSTALLING_OPERATING_SYSTEM**" - - ΓÇ£**u_uid**ΓÇ¥ - globally unique identifier of the sensor - - "**u_is_in_learning_mode**" - Boolean indication as to whether the sensor is in Learn mode or not --### devicecves --#### Request --- Path: ΓÇ£/devicecves/{timestamp}ΓÇ¥-- Method type: GET-- Path parameters:- - ΓÇ£**timestamp**ΓÇ¥ ΓÇô the time from which updates are required, only later updates will be returned. -- Query parameters:- - ΓÇ£**page**ΓÇ¥ - Defines the page number, from the result set (first page is 0, default value is 0) - - ΓÇ£**size**ΓÇ¥ - Defines the page size (default value is 50) - - ΓÇ£**sensorId**ΓÇ¥ - Shows results from a specific sensor, as defined by the given sensor ID. - - ΓÇ£**score**ΓÇ¥ - Determines a minimum CVE score to be retrieved. All results will have a CVE score equal to or higher than the given value. Default = **0**. - - ΓÇ£**deviceIds**ΓÇ¥ - A comma-separated list of device IDs from which you want to show results. For example: **1232,34,2,456** +- **JSON** ++#### Response structure ++Array of: ++- **u_id** - internal sensor ID, to be used in the devices API. +- **u_name** - the name of the appliance. +- **u_connection_state** - connectivity with the CM state. One of the following: + - **SYNCED** - connection is successful. + - **OUT_OF_SYNC** - management console cannot process data received from the sensor. + - **TIME_DIFF_OFFSET** - time drift detected. management console has been disconnected from the sensor. + - **DISCONNECTED** - sensor not communicating with management console. Check network connectivity. +- **u_interface_address** - the network address of the appliance. +- **u_version** - string representation of the sensorΓÇÖs version. +- **u_alert_count** - number of alerts found by the sensor. +- **u_device_count** - number of devices discovered by the sensor. +- **u_unhandled_alert_count** - number of unhandled alerts in the sensor. +- **u_is_activated** - is the alert activated. 
+- **u_data_intelligence_version** - string representation of the data intelligence installed in the sensor. +- **u_remote_upgrade_stage** - the state of the remote upgrade. Will be one of the following: + - **UPLOADING** + - **PREPARE_TO_INSTALL** + - **STOPPING_PROCESSES** + - **BACKING_UP_DATA** + - **TAKING_SNAPSHOT** + - **UPDATING_CONFIGURATION** + - **UPDATING_DEPENDENCIES** + - **UPDATING_LIBRARIES** + - **PATCHING_DATABASES** + - **STARTING_PROCESSES** + - **VALIDATING_SYSTEM_SANITY** + - **VALIDATION_SUCCEEDED_REBOOTING** + - **SUCCESS** + - **FAILURE** + - **UPGRADE_STARTED** + - **STARTING_INSTALLATION** + - **INSTALLING_OPERATING_SYSTEM** +- **u_uid** - globally unique identifier of the sensor. +- **u_is_in_learning_mode** - boolean indication as to whether the sensor is in Learn mode or not. ++### Request all device CVEs - /external/v3/integration/devicecves/{timestamp} ++This API returns data about device CVEs after the given timestamp. ++#### Method ++- **GET** ++#### Path parameters ++- **timestamp** ΓÇô the time from which updates are required, only later updates will be returned. ++#### Query parameters ++- **page** - defines the page number, from the result set (first page is 0, default value is 0). +- **size** - defines the page size (default value is 50). +- **sensorId** - shows results from a specific sensor, as defined by the given sensor ID. +- **score** - determines a minimum CVE score to be retrieved. All results will have a CVE score equal to or higher than the given value. (default value is 0). +- **deviceIds** - a comma-separated list of device IDs from which you want to show results. For example: **1232,34,2,456** #### Response -- Type: JSON-- Structure:- - ΓÇ£**u_count**ΓÇ¥ - amount of object in the full result sets, including all pages. - - ΓÇ£**u_id**ΓÇ¥ - the same as in the specific device API. - - ΓÇ£**u_name**ΓÇ¥ - the same as in the specific device API. - - ΓÇ£**u_ip_address_objects**ΓÇ¥ - the same as in the specific device API. - - ΓÇ£**u_mac_address_objects**ΓÇ¥ - the same as in the specific device API. - - ΓÇ£**u_last_activity**ΓÇ¥ - the same as in the specific device API. - - ΓÇ£**u_last_update**ΓÇ¥ - the same as in the specific device API. - - ΓÇ£**u_cves**ΓÇ¥ - an array of CVEs: - - ΓÇ£**u_ip_address**ΓÇ¥ - the IP address of the specific interface with the specific firmware on which the CVE was detected. - - ΓÇ£**u_cve_id**ΓÇ¥- the ID of the CVE - - ΓÇ£**u_score**ΓÇ¥- the risk score of the CVE - - ΓÇ£**u_attack_vector**ΓÇ¥ - one of the following: - - "**ADJACENT_NETWORK**" - - "**LOCAL**" - - "**NETWORK**" - - ΓÇ£**u_description**ΓÇ¥ - description about the CVE. +- **JSON** ++#### Response structure ++- **u_count** - amount of objects in the full result sets, including all pages. +- **u_id** - the same as in the specific device API. +- **u_name** - the same as in the specific device API. +- **u_ip_address_objects** - the same as in the specific device API. +- **u_mac_address_objects** - the same as in the specific device API. +- **u_last_activity** - the same as in the specific device API. +- **u_last_update** - the same as in the specific device API. +- **u_cves** - an array of CVEs: + - **u_ip_address** - the IP address of the specific interface with the specific firmware on which the CVE was detected. + - **u_cve_id**- the ID of the CVE. + - **u_score**- the risk score of the CVE. + - **u_attack_vector** - one of the following: + - **ADJACENT_NETWORK** + - **LOCAL** + - **NETWORK** + - **u_description** - description of the CVE. ## Next steps |
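As a quick illustration of the Version 3 integration endpoints documented above, a request to the devices endpoint might look like the following sketch. It reuses the curl pattern shown earlier for the Version 2 alert API; the token, appliance IP address, timestamp value, and sensor ID are placeholders rather than values from this reference, and the expected timestamp format depends on your on-premises management console version.

```bash
# Sketch only: <AUTH_TOKEN>, <IP_ADDRESS>, <timestamp>, and the sensor ID are placeholders.
# Requests devices updated after the given timestamp, limited to new devices (notificationType=1)
# seen by sensor 1, returning the first page of 50 results.
curl -k -H "Authorization: <AUTH_TOKEN>" \
  'https://<IP_ADDRESS>/external/v3/integration/devices/<timestamp>?sensorId=1&notificationType=1&page=0&size=50'
```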
defender-for-iot | Release Notes Archive | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/release-notes-archive.md | Noted features listed below are in PREVIEW. The [Azure Preview Supplemental Term The following feature enhancements are available with version 10.5.3 of Microsoft Defender for IoT. -- The on-premises management console, has a new [ServiceNow Integration API - "/external/v3/integration/ (Preview)](references-work-with-defender-for-iot-apis.md#servicenow-integration-apiexternalv3integration-preview).+- The on-premises management console has new [integration APIs](references-work-with-defender-for-iot-apis.md#version-3). - Enhancements have been made to the network traffic analysis of multiple OT and ICS protocol dissectors. |
defender-for-iot | Release Notes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/release-notes.md | Now you can add any of the following parameters to your query to fine tune your - "**score**" - Determines a minimum CVE score to be retrieved. All results will have a CVE score equal to or higher than the given value. Default = **0**. - "**deviceIds**" - A comma-separated list of device IDs from which you want to show results. For example: **1232,34,2,456** -For more information, see [ServiceNow Integration API - "/external/v3/integration/ (Preview)](references-work-with-defender-for-iot-apis.md#servicenow-integration-apiexternalv3integration-preview). ->>>>>>> 3e9c47c4758cdb6f63a6873219cab9498206cb2a +For more information, see [Management console APIs - Version 3](references-work-with-defender-for-iot-apis.md#version-3). ### OT appliance hardware profile updates |
defender-for-iot | Tutorial Getting Started Eiot Sensor | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/tutorial-getting-started-eiot-sensor.md | Defender for IoT supports the entire breadth of IoT devices in your environment, In this tutorial, you learn about: > [!div class="checklist"]-> * Integrating with Microsoft Defender for Endpoint +> * Integration with Microsoft Defender for Endpoint > * Prerequisites for Enterprise IoT network monitoring with Defender for IoT > * How to prepare a physical appliance or VM as a network sensor > * How to onboard an Enterprise IoT sensor and install software > * How to view detected Enterprise IoT devices in the Azure portal > * How to view devices, alerts, vulnerabilities, and recommendations in Defender for Endpoint -> [!IMPORTANT] -> The **Enterprise IoT network sensor** is currently in **PREVIEW**. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability. - ## Microsoft Defender for Endpoint integration -Integrate with [Microsoft Defender for Endpoint](/microsoft-365/security/defender-endpoint/) to extend your security analytics capabilities, providing complete coverage across your Enterprise IoT devices. Defender for Endpoint analytics features include alerts, vulnerabilities, and recommendations for your enterprise devices. +Defender for IoT integrates with [Microsoft Defender for Endpoint](/microsoft-365/security/defender-endpoint/) to extend your security analytics capabilities, providing complete coverage across your Enterprise IoT devices. Defender for Endpoint analytics features include alerts, vulnerabilities, and recommendations for your enterprise devices. -After you've onboarded a plan for Enterprise IoT and set up your Enterprise IoT network sensor, your device data integrates automatically with Microsoft Defender for Endpoint. +Microsoft 365 P2 customers can onboard a plan for Enterprise IoT through the Microsoft Defender for Endpoint portal. After you've onboarded a plan for Enterprise IoT, view discovered IoT devices and related alerts, vulnerabilities, and recommendations in Defender for Endpoint. -- Discovered devices appear in both the Defender for IoT and Defender for Endpoint portals.-- In Defender for Endpoint, view discovered IoT devices and related alerts, vulnerabilities, and recommendations.+Microsoft 365 P2 customers can also install the Enterprise IoT network sensor (currently in **Public Preview**) to gain more visibility into additional IoT segments of the corporate network that were not previously covered by Defender for Endpoint. Deploying a network sensor is not a prerequisite for onboarding Enterprise IoT. For more information, see [Onboard with Microsoft Defender for IoT in Defender for Endpoint](/microsoft-365/security/defender-endpoint/enable-microsoft-defender-for-iot-integration). +> [!IMPORTANT] +> The **Enterprise IoT network sensor** is currently in **PREVIEW**. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability. + ## Prerequisites Before starting this tutorial, make sure that you have the following prerequisites. 
Alternately, remove your sensor manually from the CLI. For more information, see For more information, see [Sensor management options from the Azure portal](how-to-manage-sensors-on-the-cloud.md#sensor-management-options-from-the-azure-portal). - ## Next steps Continue viewing device data in both the Azure portal and Defender for Endpoint, depending on your organization's needs. - - [Manage sensors with Defender for IoT in the Azure portal](how-to-manage-sensors-on-the-cloud.md) - [Threat intelligence research and packages](how-to-work-with-threat-intelligence-packages.md) - [Manage your IoT devices with the device inventory for organizations](how-to-manage-device-inventory-for-organizations.md) Continue viewing device data in both the Azure portal and Defender for Endpoint, In Defender for Endpoint, also view alerts data, recommendations and vulnerabilities related to your network traffic. -For more information in Defender for Endpoint documentation, see: +For more information in the Defender for Endpoint documentation, see: - [Onboard with Microsoft Defender for IoT in Defender for Endpoint](/microsoft-365/security/defender-endpoint/enable-microsoft-defender-for-iot-integration) - [Defender for Endpoint device inventory](/microsoft-365/security/defender-endpoint/machines-view-overview) |
digital-twins | Reference Query Functions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/reference-query-functions.md | The following query returns all digital twins whose IDs end in `-small`. The str ## IS_BOOL -A type checking and casting function for determining whether an expression has a Boolean value. +A type checking function for determining whether an property has a Boolean value. This function is often combined with other predicates if the program processing the query results requires a boolean value, and you want to filter out cases where the property is not a boolean. This function is often combined with other predicates if the program processing ### Arguments -`<expression>`, an expression to check whether it is a Boolean. +`<property>`, an property to check whether it is a Boolean. ### Returns -A Boolean value indicating if the type of the specified expression is a Boolean. +A Boolean value indicating if the type of the specified property is a Boolean. ### Example The following query builds on the above example to select the digital twins that ## IS_DEFINED -A type checking and casting function to check whether a property is defined. --This is only supported when the property value is a primitive type. Primitive types include string, Boolean, numeric, or `null`. `DateTime`, object types, and arrays are not supported. +A type checking function to determine whether a property is defined. ### Syntax This is only supported when the property value is a primitive type. Primitive ty ### Arguments -`<property>`, a property to determine whether it is defined. The property must be of a primitive type. +`<property>`, a property to determine whether it is defined. ### Returns The following query returns all digital twins who have a defined `Location` prop ## IS_NULL -A type checking and casting function for determining whether an expression's value is `null`. +A type checking function for determining whether an property's value is `null`. ### Syntax A type checking and casting function for determining whether an expression's val ### Arguments -`<expression>`, an expression to check whether it is null. +`<property>`, a property to check whether it is null. ### Returns -A Boolean value indicating if the type of the specified expression is `null`. +A Boolean value indicating if the type of the specified property is `null`. ### Example The following query returns twins who do not have a null value for Temperature. ## IS_NUMBER -A type checking and casting function for determining whether an expression has a number value. +A type checking function for determining whether a property has a number value. This function is often combined with other predicates if the program processing the query results requires a number value, and you want to filter out cases where the property is not a number. This function is often combined with other predicates if the program processing ### Arguments -`<expression>`, an expression to check whether it is a number. +`<property>`, a property to check whether it is a number. ### Returns -A Boolean value indicating if the type of the specified expression is a number. +A Boolean value indicating if the type of the specified property is a number. ### Example The following query selects the digital twins that have a numeric `Capacity` pro ## IS_OBJECT -A type checking and casting function for determining whether an expression's value is of a JSON object type. 
+A type checking function for determining whether a property's value is of a JSON object type. This function is often combined with other predicates if the program processing the query results requires a JSON object, and you want to filter out cases where the value is not a JSON object. This function is often combined with other predicates if the program processing ### Arguments -`<expression>`, an expression to check whether it is of an object type. +`<property>`, a property to check whether it is of an object type. ### Returns -A Boolean value indicating if the type of the specified expression is a JSON object. +A Boolean value indicating if the type of the specified property is a JSON object. ### Example The following query selects all of the digital twins where this is an object cal ## IS_OF_MODEL -A type checking and casting function to determine whether a twin is of a particular model type. Includes models that inherit from the specified model. +A type checking and function to determine whether a twin is of a particular model type. Includes models that inherit from the specified model. ### Syntax The following query returns twins from the DT collection that are exactly of the ## IS_PRIMITIVE -A type checking and casting function for determining whether an expression's value is of a primitive type (string, Boolean, numeric, or `null`). +A type checking function for determining whether a property's value is of a primitive type (string, Boolean, numeric, or `null`). This function is often combined with other predicates if the program processing the query results requires a primitive-typed value, and you want to filter out cases where the property is not primitive. This function is often combined with other predicates if the program processing ### Arguments -`<expression>`, an expression to check whether it is of a primitive type. +`<property>`, a property to check whether it is of a primitive type. ### Returns -A Boolean value indicating if the type of the specified expression is one of the primitive types (string, Boolean, numeric, or `null`). +A Boolean value indicating if the type of the specified property is one of the primitive types (string, Boolean, numeric, or `null`). ### Example The following query returns the `area` property of the Factory with the ID of 'A ## IS_STRING -A type checking and casting function for determining whether an expression has a string value. +A type checking function for determining whether a property has a string value. This function is often combined with other predicates if the program processing the query results requires a string value, and you want to filter out cases where the property is not a string. This function is often combined with other predicates if the program processing ### Arguments -`<expression>`, an expression to check whether it is a string. +`<property>`, a property to check whether it is a string. ### Returns |
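The type checking functions above are used inside the `WHERE` clause of an Azure Digital Twins query. As a hedged sketch of how such a query might be issued against the data-plane Query API with curl, assuming an instance host name, a valid bearer token, and an example model ID that are not taken from this article (the `2020-10-31` api-version is one of the published data-plane versions; adjust if your instance uses a newer one):

```bash
# Sketch only: <instance-host-name>, <ACCESS_TOKEN>, and the model ID are placeholders.
# Selects twins of an assumed Room model whose Capacity property is defined and numeric.
curl -X POST 'https://<instance-host-name>/query?api-version=2020-10-31' \
  -H 'Authorization: Bearer <ACCESS_TOKEN>' \
  -H 'Content-Type: application/json' \
  -d "{\"query\": \"SELECT * FROM DIGITALTWINS DT WHERE IS_OF_MODEL(DT, 'dtmi:example:Room;1') AND IS_DEFINED(DT.Capacity) AND IS_NUMBER(DT.Capacity)\"}"
```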
dns | Private Dns Privatednszone | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/private-dns-privatednszone.md | Title: What is an Azure DNS private zone description: Overview of a private DNS zone -+ Previously updated : 04/09/2021- Last updated : 08/15/2022+ # What is a private Azure DNS zone To understand how many private DNS zones you can create in a subscription and ho ## Restrictions -* Single-labeled private DNS zones aren't supported. Your private DNS zone must have two or more labels. For example contoso.com has two labels separated by a dot. A private DNS zone can have a maximum of 34 labels. +* Single-labeled private DNS zones aren't supported. Your private DNS zone must have two or more labels. For example, contoso.com has two labels separated by a dot. A private DNS zone can have a maximum of 34 labels. * You can't create zone delegations (NS records) in a private DNS zone. If you intend to use a child domain, you can directly create the domain as a private DNS zone. Then you can link it to the virtual network without setting up a nameserver delegation from the parent zone.+* Starting the week of August 28th, 2022, specific reserved zone names will be blocked from creation to prevent disruption of services. The following zone names are blocked: ++ | Public | Azure Government | Azure China | + |---|---|---| + |azure.com | azure.us | azure.cn + |microsoft.com | microsoft.us | microsoft.cn + |trafficmanager.net | usgovtrafficmanager.net | trafficmanager.cn + |cloudapp.net | usgovcloudapp.net | chinacloudapp.cn + |azclient.ms | azclient.us | azclient.cn + |windows.net| usgovcloudapi.net | chinacloudapi.cn + |msidentity.com | msidentity.us | msidentity.cn + |core.windows.net | core.usgovcloudapi.net | core.chinacloudapi.cn ## Next steps |
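To see the single-label restriction in practice, a private DNS zone can be created with the Azure CLI as in the sketch below; the resource group and zone name are placeholders. A two-label name such as `private.contoso.com` is accepted, while a single label (for example, `contoso`) or one of the reserved names listed above isn't allowed.

```bash
# Sketch: myResourceGroup and private.contoso.com are placeholder names.
# The zone name must contain at least two labels and must not be a reserved name such as azure.com.
az network private-dns zone create \
  --resource-group myResourceGroup \
  --name private.contoso.com
```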
event-grid | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/policy-reference.md | Title: Built-in policy definitions for Azure Event Grid description: Lists Azure Policy built-in policy definitions for Azure Event Grid. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/08/2022 Last updated : 08/16/2022 |
event-grid | Publish Iot Hub Events To Logic Apps | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/publish-iot-hub-events-to-logic-apps.md | Title: Tutorial - Use IoT Hub events to trigger Azure Logic Apps description: This tutorial shows how to use the event routing service of Azure Event Grid, create automated processes to perform Azure Logic Apps actions based on IoT Hub events. -+ Last updated 09/14/2020-+ Next, create a logic app and add an HTTP event grid trigger that processes reque 1. In the [Azure portal](https://portal.azure.com), select **Create a resource**, then type "logic app" in the search box and select return. Select **Logic App** from the results. -  + :::image type="content" source="./media/publish-iot-hub-events-to-logic-apps/select-logic-app.png" alt-text="Screenshot of how to select the logic app from a list of resources." lightbox="./media/publish-iot-hub-events-to-logic-apps/select-logic-app.png"::: 1. On the next screen, select **Create**. -1. Give your logic app a name that's unique in your subscription, then select the same subscription, resource group, and location as your IoT hub. +1. Give your logic app a unique name in your subscription, then select the same subscription, resource group, and location as your IoT hub. Choose the **Consumption** plan type. -  + :::image type="content" source="./media/publish-iot-hub-events-to-logic-apps/create-logic-app-fields.png" alt-text="Screenshot of how to configure your logic app." lightbox="./media/publish-iot-hub-events-to-logic-apps/create-logic-app-fields.png"::: 1. Select **Review + create**. Next, create a logic app and add an HTTP event grid trigger that processes reque 1. In the Logic Apps Designer, page down to see **Templates**. Choose **Blank Logic App** so that you can build your logic app from scratch. + :::image type="content" source="./media/publish-iot-hub-events-to-logic-apps/logic-app-designer-template.png" alt-text="Screenshot of the Logic App Designer templates." lightbox="./media/publish-iot-hub-events-to-logic-apps/logic-app-designer-template.png"::: + ### Select a trigger A trigger is a specific event that starts your logic app. For this tutorial, the trigger that sets off the workflow is receiving a request over HTTP. A trigger is a specific event that starts your logic app. For this tutorial, the  +1. Copy the `json` below and replace the placeholder values `<>` with your own. + 1. Paste the *Device connected event schema* JSON into the text box, then select **Done**: ```json [{ "id": "f6bbf8f4-d365-520d-a878-17bf7238abd8",- "topic": "/SUBSCRIPTIONS/<subscription ID>/RESOURCEGROUPS/<resource group name>/PROVIDERS/MICROSOFT.DEVICES/IOTHUBS/<hub name>", + "topic": "/SUBSCRIPTIONS/<azure subscription ID>/RESOURCEGROUPS/<resource group name>/PROVIDERS/MICROSOFT.DEVICES/IOTHUBS/<hub name>", "subject": "devices/LogicAppTestDevice", "eventType": "Microsoft.Devices.DeviceConnected", "eventTime": "2018-06-02T19:17:44.4383997Z", A trigger is a specific event that starts your logic app. 
For this tutorial, the "sequenceNumber": "000000000000000001D4132452F67CE200000002000000000000000000000001" },- "hubName": "egtesthub1", + "hubName": "<hub name>", "deviceId": "LogicAppTestDevice", "moduleId" : "DeviceModuleID" }, Actions are any steps that occur after the trigger starts the logic app workflow Your email template may look like this example: -  + :::image type="content" source="./media/publish-iot-hub-events-to-logic-apps/email-content.png" alt-text="Screenshot of how to create an event email in the template." lightbox="./media/publish-iot-hub-events-to-logic-apps/email-content.png"::: 1. Select **Save** in the Logic Apps Designer. In this section, you configure your IoT Hub to publish events as they occur. When you're done, the pane should look like the following example: -  + :::image type="content" source="./media/publish-iot-hub-events-to-logic-apps/subscription-form.png" alt-text="Screenshot of your 'Create Event Subscription' page in the Azure portal." lightbox="./media/publish-iot-hub-events-to-logic-apps/subscription-form.png"::: 1. Select **Create**. Test your logic app by quickly simulating a device connection using the Azure CL az iot hub device-identity create --device-id simDevice --hub-name {YourIoTHubName} ``` + This could take a minute. You'll see a `json` printout once it's created. + 1. Run the following command to simulate connecting your device to IoT Hub and sending telemetry: ```azurecli |
event-hubs | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/policy-reference.md | Title: Built-in policy definitions for Azure Event Hubs description: Lists Azure Policy built-in policy definitions for Azure Event Hubs. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/08/2022 Last updated : 08/16/2022 |
frontdoor | Create Front Door Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/create-front-door-portal.md | |
governance | Get Compliance Data | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/how-to/get-compliance-data.md | Title: Get policy compliance data description: Azure Policy evaluations and effects determine compliance. Learn how to get the compliance details of your Azure resources.-+ Last updated 08/05/2022 -+ # Get compliance data of Azure resources |
governance | Built In Initiatives | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/built-in-initiatives.md | Title: List of built-in policy initiatives description: List built-in policy initiatives for Azure Policy. Categories include Regulatory Compliance, Guest Configuration, and more. Previously updated : 08/08/2022 Last updated : 08/16/2022 |
governance | Built In Policies | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/built-in-policies.md | Title: List of built-in policy definitions description: List built-in policy definitions for Azure Policy. Categories include Tags, Regulatory Compliance, Key Vault, Kubernetes, Guest Configuration, and more. Previously updated : 08/08/2022 Last updated : 08/16/2022 |
hdinsight | Enterprise Security Package | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/enterprise-security-package.md | Title: Enterprise Security Package for Azure HDInsight description: Learn the Enterprise Security Package components and versions in Azure HDInsight. Previously updated : 05/08/2020 Last updated : 08/16/2022 # Enterprise Security Package for Azure HDInsight |
hdinsight | Apache Hadoop Use Hive Beeline | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hadoop/apache-hadoop-use-hive-beeline.md | This example is based on using the Beeline client from [an SSH connection](../hd ``` > [!NOTE] > Refer to "To HDInsight Enterprise Security Package (ESP) cluster using Kerberos" part in [Connect to HiveServer2 using Beeline or install Beeline locally to connect from your local](connect-install-beeline.md#to-hdinsight-enterprise-security-package-esp-cluster-using-kerberos) if you are using an Enterprise Security Package (ESP) enabled cluster- > - > Dropping an external table does **not** delete the data, only the table definition. -+ 3. Beeline commands begin with a `!` character, for example `!help` displays help. However the `!` can be omitted for some commands. For example, `help` also works. There's `!sql`, which is used to execute HiveQL statements. However, HiveQL is so commonly used that you can omit the preceding `!sql`. The following two statements are equivalent: |
hdinsight | Hdinsight Administer Use Command Line | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-administer-use-command-line.md | description: Learn how to use the Azure CLI to manage Azure HDInsight clusters. Previously updated : 02/26/2020 Last updated : 06/16/2022 # Manage Azure HDInsight clusters using Azure CLI |
hdinsight | Hdinsight Create Virtual Network | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-create-virtual-network.md | description: Learn how to create an Azure Virtual Network to connect HDInsight t Previously updated : 05/12/2021 Last updated : 08/16/2022 # Create virtual networks for Azure HDInsight clusters |
hdinsight | Hdinsight Custom Ambari Db | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-custom-ambari-db.md | description: Learn how to create HDInsight clusters with your own custom Apache Previously updated : 01/12/2021 Last updated : 08/16/2022 # Set up HDInsight clusters with a custom Ambari DB |
hdinsight | Hdinsight Multiple Clusters Data Lake Store | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-multiple-clusters-data-lake-store.md | description: Learn how to use more than one HDInsight cluster with a single Data Previously updated : 12/18/2019 Last updated : 08/16/2022 # Use multiple HDInsight clusters with an Azure Data Lake Storage account Set read-execute permissions for **others** through the hierarchy, for example, ## See also - [Quickstart: Set up clusters in HDInsight](./hdinsight-hadoop-provision-linux-clusters.md)-- [Use Azure Data Lake Storage Gen2 with Azure HDInsight clusters](hdinsight-hadoop-use-data-lake-storage-gen2.md)+- [Use Azure Data Lake Storage Gen2 with Azure HDInsight clusters](hdinsight-hadoop-use-data-lake-storage-gen2.md) |
hdinsight | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/policy-reference.md | Title: Built-in policy definitions for Azure HDInsight description: Lists Azure Policy built-in policy definitions for Azure HDInsight. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/08/2022 Last updated : 08/16/2022 |
healthcare-apis | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/policy-reference.md | Title: Built-in policy definitions for Azure API for FHIR description: Lists Azure Policy built-in policy definitions for Azure API for FHIR. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/08/2022 Last updated : 08/16/2022 |
healthcare-apis | Configure Export Data | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/configure-export-data.md | -FHIR service supports the $export command that allows you to export the data out of the FHIR service account to a storage account. +The FHIR service supports the `$export` operation [specified by HL7](https://hl7.org/fhir/uv/bulkdata/export/https://docsupdatetracker.net/index.html) for exporting FHIR data from a FHIR server. In the FHIR service implementation, calling the `$export` endpoint causes the FHIR service to export data into a pre-configured Azure storage account. -The three steps below are used in configuring export data in the FHIR service: +There are three steps in setting up the `$export` operation for the FHIR service: -- Enable managed identity for the FHIR service.-- Create an Azure storage account or use an existing storage account, and then grant permissions to the FHIR service to access them.-- Select the storage account in the FHIR service as the destination.+- Enable a managed identity for the FHIR service. +- Configure a new or existing Azure Data Lake Storage Gen2 (ADLS Gen2) account and give permission for the FHIR service to access the account. +- Set the ADLS Gen2 account as the export destination for the FHIR service. -## Enable managed identity on the FHIR service +## Enable managed identity for the FHIR service -The first step in configuring the FHIR service for export is to enable system wide managed identity on the service, which will be used to grant the service to access the storage account. For more information about managed identities in Azure, see [About managed identities for Azure resources](../../active-directory/managed-identities-azure-resources/overview.md). +The first step in configuring your environment for FHIR data export is to enable a system-wide managed identity for the FHIR service. This managed identity is used to authenticate the FHIR service to allow access to the ADLS Gen2 account during an `$export` operation. For more information about managed identities in Azure, see [About managed identities for Azure resources](../../active-directory/managed-identities-azure-resources/overview.md). -In this step, browse to your FHIR service in the Azure portal, and select the **Identity** blade. Select the **Status** option to **On** , and then select **Save**. **Yes** and **No** buttons will display. Select **Yes** to enable the managed identity for FHIR service. Once the system identity has been enabled, you'll see a system assigned GUID value. +In this step, browse to your FHIR service in the Azure portal and select the **Identity** blade. Set the **Status** option to **On**, and then click **Save**. When the **Yes** and **No** buttons display, select **Yes** to enable the managed identity for the FHIR service. Once the system identity has been enabled, you'll see an **Object (principal) ID** value for your FHIR service. [](media/export-data/fhir-mi-enabled.png#lightbox) -## Assign permissions to the FHIR service to access the storage account +## Give permission in the storage account for FHIR service access -1. Select **Access control (IAM)**. +1. Go to your ADLS Gen2 storage account in the Azure portal. -1. Select **Add > Add role assignment**. If the **Add role assignment** option is grayed out, ask your Azure administrator to assign you permission to perform this task. +2. Select **Access control (IAM)**. ++3. Select **Add > Add role assignment**. 
If the **Add role assignment** option is grayed out, ask your Azure administrator for help with this step. :::image type="content" source="../../../includes/role-based-access-control/media/add-role-assignment-menu-generic.png" alt-text="Screenshot that shows Access control (IAM) page with Add role assignment menu open."::: -1. On the **Role** tab, select the [Storage Blob Data Contributor](../../role-based-access-control/built-in-roles.md#storage-blob-data-contributor) role. +4. On the **Role** tab, select the [Storage Blob Data Contributor](../../role-based-access-control/built-in-roles.md#storage-blob-data-contributor) role. [](../../../includes/role-based-access-control/media/add-role-assignment-page.png#lightbox) -1. On the **Members** tab, select **Managed identity**, and then select **Select members**. +5. On the **Members** tab, select **Managed identity**, and then click **Select members**. -1. Select your Azure subscription. +6. Select your Azure subscription. -1. Select **System-assigned managed identity**, and then select the FHIR service. +7. Select **System-assigned managed identity**, and then select the managed identity that you enabled earlier for your FHIR service. -1. On the **Review + assign** tab, select **Review + assign** to assign the role. +8. On the **Review + assign** tab, click **Review + assign** to assign the **Storage Blob Data Contributor** role to your FHIR service. For more information about assigning roles in the Azure portal, see [Azure built-in roles](../../role-based-access-control/role-assignments-portal.md). -Now you're ready to select the storage account in the FHIR service as a default storage account for export. +Now you're ready to configure the FHIR service with the ADLS Gen2 account as the default storage account for export. -## Specify the export storage account for FHIR service +## Specify the storage account for FHIR service export -The final step is to assign the Azure storage account that the FHIR service will use to export the data to. +The final step is to specify the ADLS Gen2 account that the FHIR service will use when exporting data. > [!NOTE]-> If you haven't assigned storage access permissions to the FHIR service, the export operations ($export) will fail. +> In the storage account, if you haven't assigned the **Storage Blob Data Contributor** role to the FHIR service, the `$export` operation will fail. ++1. Go to your FHIR service settings. ++2. Select the **Export** blade. -To do this, select the **Export** blade in FHIR service and select the storage account. To search for the storage account, enter its name in the text field. You can also search for your storage account by using the available filters **Name**, **Resource group**, or **Region**. +3. Select the name of the storage account from the list. If you need to search for your storage account, use the **Name**, **Resource group**, or **Region** filters. [](media/export-data/fhir-export-storage.png#lightbox) -After you've completed this final step, you're ready to export the data using $export command. +After you've completed this final configuration step, you're ready to export data from the FHIR service. See [How to export FHIR data](./export-data.md) for details on performing `$export` operations with the FHIR service. > [!Note]-> Only storage accounts in the same subscription as that for FHIR service are allowed to be registered as the destination for $export operations. 
+> Only storage accounts in the same subscription as the FHIR service are allowed to be registered as the destination for `$export` operations. -## Use Azure storage accounts behind firewalls +## Securing the FHIR service `$export` operation -FHIR service supports a secure export operation. Choose one of the two options below: +For securely exporting from the FHIR service to an ADLS Gen2 account, there are two main options: -* Allowing FHIR service as a Microsoft Trusted Service to access the Azure storage account. +* Allowing the FHIR service to access the storage account as a Microsoft Trusted Service. -* Allowing specific IP addresses associated with FHIR service to access the Azure storage account. -This option provides two different configurations depending on whether the storage account is in the same location as, or is in a different location from that of the FHIR service. +* Allowing specific IP addresses associated with the FHIR service to access the storage account. +This option permits two different configurations depending on whether or not the storage account is in the same Azure region as the FHIR service. ### Allowing FHIR service as a Microsoft Trusted Service -Select a storage account from the Azure portal, and then select the **Networking** blade. Select **Selected networks** under the **Firewalls and virtual networks** tab. +Go to your ADLS Gen2 account in the Azure portal and select the **Networking** blade. Select **Enabled from selected virtual networks and IP addresses** under the **Firewalls and virtual networks** tab. :::image type="content" source="media/export-data/storage-networking-1.png" alt-text="Screenshot of Azure Storage Networking Settings." lightbox="media/export-data/storage-networking-1.png"::: -Select **Microsoft.HealthcareApis/workspaces** from the **Resource type** dropdown list and your workspace from the **Instance name** dropdown list. +Select **Microsoft.HealthcareApis/workspaces** from the **Resource type** dropdown list and then select your workspace from the **Instance name** dropdown list. -Under the **Exceptions** section, select the box **Allow trusted Microsoft services to access this storage account** and save the setting. +Under the **Exceptions** section, select the box **Allow Azure services on the trusted services list to access this storage account**. Make sure to click **Save** to retain the settings. :::image type="content" source="media/export-data/exceptions.png" alt-text="Allow trusted Microsoft services to access this storage account."::: -Next, specify the FHIR service instance in the selected workspace instance for the storage account using the PowerShell command. +Next, run the following PowerShell command to install the `Az.Storage` PowerShell module in your local environment. This will allow you to configure your Azure storage account(s) using PowerShell. -``` +```PowerShell +Install-Module Az.Storage -Repository PsGallery -AllowClobber -Force +``` ++Now, use the PowerShell command below to set the selected FHIR service instance as a trusted resource for the storage account. Make sure that all listed parameters are defined in your PowerShell environment. ++Note that you'll need to run the `Add-AzStorageAccountNetworkRule` command as an administrator in your local environment. For more information, see [Configure Azure Storage firewalls and virtual networks](../../storage/common/storage-network-security.md). 
++```PowerShell $subscription="xxx" $tenantId = "xxx" $resourceGroupName = "xxx" $storageaccountName = "xxx" $workspacename="xxx" $fhirname="xxx"-$resourceId = "/subscriptions/$subscription/resourceGroups/$resourcegroup/providers/Microsoft.HealthcareApis/workspaces/$workspacename/fhirservices/$fhirname" +$resourceId = "/subscriptions/$subscription/resourceGroups/$resourceGroupName/providers/Microsoft.HealthcareApis/workspaces/$workspacename/fhirservices/$fhirname" Add-AzStorageAccountNetworkRule -ResourceGroupName $resourceGroupName -Name $storageaccountName -TenantId $tenantId -ResourceId $resourceId ``` -You can see that the networking setting for the storage account shows **two selected** in the **Instance name** dropdown list. One is linked to the workspace instance and the second is linked to the FHIR service instance. +After running this command, in the **Firewall** section under **Resource instances** you will see **2 selected** in the **Instance name** dropdown list. These are the names of the workspace instance and FHIR service instance that you just registered as Microsoft Trusted Resources. :::image type="content" source="media/export-data/storage-networking-2.png" alt-text="Screenshot of Azure Storage Networking Settings with resource type and instance names." lightbox="media/export-data/storage-networking-2.png"::: -Note that you'll need to install "Add-AzStorageAccountNetworkRule" using an administrator account. For more information, see [Configure Azure Storage firewalls and virtual networks](../../storage/common/storage-network-security.md) --` -Install-Module Az.Storage -Repository PsGallery -AllowClobber -Force -` --You're now ready to export FHIR data to the storage account securely. Note that the storage account is on selected networks and isn't publicly accessible. To access the files, you can either enable and use private endpoints for the storage account, or enable all networks for the storage account to access the data there if possible. --> [!IMPORTANT] -> The user interface will be updated later to allow you to select the Resource type for FHIR service and a specific service instance. +You're now ready to securely export FHIR data to the storage account. Note that the storage account is on selected networks and isn't publicly accessible. To securely access the files, you can enable private endpoints for the storage account. -### Allowing specific IP addresses for the Azure storage account in a different region +### Allowing specific IP addresses from other Azure regions to access the Azure storage account -Select **Networking** of the Azure storage account from the -portal. +In the Azure portal, go to the ADLS Gen2 account and select the **Networking** blade. -Select **Selected networks**. Under the Firewall section, specify the IP address in the **Address range** box. Add IP ranges to -allow access from the internet or your on-premises networks. You can -find the IP address in the table below for the Azure region where the -FHIR service is provisioned. +Select **Enabled from selected virtual networks and IP addresses**. Under the Firewall section, specify the IP address in the **Address range** box. Add IP ranges to allow access from the internet or your on-premises networks. You can find the IP address in the table below for the Azure region where the FHIR service is provisioned. |**Azure Region** |**Public IP Address** | |:-|:-| FHIR service is provisioned. 
> [!NOTE] > The above steps are similar to the configuration steps described in the document **Converting your data to FHIR**. For more information, see [Configure ACR firewall](./convert-data.md#configure-acr-firewall). -### Allowing specific IP addresses for the Azure storage account in the same region +### Allowing specific IP addresses to access the Azure storage account in the same region -The configuration process is the same as above except a specific IP -address range in Classless Inter-Domain Routing (CIDR) format is used instead, 100.64.0.0/10. The reason why the IP address range, which includes 100.64.0.0 ΓÇô 100.127.255.255, must be specified is because the actual IP address used by the service varies, but will be within the range, for each $export request. +The configuration process for IP addresses in the same region is just like above except a specific IP address range in Classless Inter-Domain Routing (CIDR) format is used instead (i.e., 100.64.0.0/10). The reason why the IP address range (100.64.0.0 ΓÇô 100.127.255.255) must be specified is because an IP address for the FHIR service will be allocated each time an `$export` request is made. > [!Note] -> It is possible that a private IP address within the range of 10.0.2.0/24 may be used instead. In that case, the $export operation will not succeed. You can retry the $export request, but there is no guarantee that an IP address within the range of 100.64.0.0/10 will be used next time. That's the known networking behavior by design. The alternative is to configure the storage account in a different region. +> It is possible that a private IP address within the range of 10.0.2.0/24 may be used, but there is no guarantee that the `$export` operation will succeed in such a case. You can retry if the `$export` request fails, but until an IP address within the range of 100.64.0.0/10 is used, the request will not succeed. This network behavior for IP address ranges is by design. The alternative is to configure the storage account in a different region. ## Next steps -In this article, you learned about the three steps in configuring export settings that allow you to export data out of FHIR service account to a storage account. For more information about the Bulk Export feature that allows data to be exported from the FHIR service, see +In this article, you learned about the three steps in configuring your environment to allow export of data from your FHIR service to an Azure storage account. For more information about Bulk Export capabilities in the FHIR service, see >[!div class="nextstepaction"] >[How to export FHIR data](export-data.md) |
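With the storage account configured, a system-level export can then be requested directly against the FHIR service. The sketch below is illustrative only; the service URL, token, and container name are placeholders, and the `Accept` and `Prefer` headers follow the HL7 bulk data export convention.

```bash
# Sketch only: <FHIR-service-URL>, <ACCESS_TOKEN>, and the container name are placeholders.
# The request is asynchronous; the response includes a Content-Location URL to poll for status.
curl -X GET '<FHIR-service-URL>/$export?_container=fhirexport' \
  -H 'Authorization: Bearer <ACCESS_TOKEN>' \
  -H 'Accept: application/fhir+json' \
  -H 'Prefer: respond-async'
```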
healthcare-apis | Configure Import Data | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/configure-import-data.md | Copy the URL as request URL and do following changes of the JSON as body: - Set initialImportMode in importConfiguration to **true** - Drop off provisioningState. -[  ](media/bulk-import/importer-url-and-body.png#lightbox) +[  ](media/bulk-import/import-url-and-body.png#lightbox) After you've completed this final step, you're ready to import data using $import. +You can also use the **Deploy to Azure** button below to open custom Resource Manager template that updates the configuration for $import. ++ [](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fiotc-device-bridge%2Fmaster%2Fazuredeploy.json) + ## Next steps In this article, you've learned the FHIR service supports $import operation and how it allows you to import data into FHIR service account from a storage account. You also learned about the three steps used in configuring import settings in the FHIR service. For more information about converting data to FHIR, exporting settings to set up a storage account, and moving data to Azure Synapse, see In this article, you've learned the FHIR service supports $import operation and >[!div class="nextstepaction"] >[Copy data from FHIR service to Azure Synapse Analytics](copy-to-synapse.md) -FHIR® is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7. +FHIR® is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7. |
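The shape of the request body referenced above isn't shown in full in this excerpt. As a rough, hedged sketch only (the ARM resource path, api-version placeholder, and property names such as `integrationDataStore` are assumptions based on the FHIR service Resource Manager schema, not values from this article), the relevant fragment ends up looking something like the following, with `initialImportMode` set to `true` and the `provisioningState` property removed before the request is sent:

```bash
# Rough sketch only: every value below is a placeholder, and these settings should be merged into
# the full resource JSON copied from the request URL rather than sent as a minimal body on their own.
curl -X PUT 'https://management.azure.com/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.HealthcareApis/workspaces/<workspace>/fhirservices/<fhir-service>?api-version=<api-version>' \
  -H 'Authorization: Bearer <ARM_ACCESS_TOKEN>' \
  -H 'Content-Type: application/json' \
  -d '{
        "properties": {
          "importConfiguration": {
            "enabled": true,
            "initialImportMode": true,
            "integrationDataStore": "<storage-account-name>"
          }
        }
      }'
```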
healthcare-apis | Convert Data | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/convert-data.md | The `$convert-data` custom endpoint in the FHIR service enables converting healt ## Using the `$convert-data` endpoint -The `$convert-data` operation is integrated into the FHIR service as a RESTful API action. Calling the `$convert-data` endpoint causes the FHIR service to perform a conversion on health data sent in an API request: +The `$convert-data` operation is integrated into the FHIR service as a RESTful API action. You can call the `$convert-data` endpoint as follows: `POST {{fhirurl}}/$convert-data` -The health data is delivered to the FHIR service in the body of the `$convert-data` request. If the request is successful, the FHIR service will return a FHIR `Bundle` response with the data converted to FHIR. +The health data for conversion is delivered to the FHIR service in the body of the `$convert-data` request. If the request is successful, the FHIR service will return a FHIR `Bundle` response with the data converted to FHIR. ### Parameters Resource |
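The Parameters resource mentioned above is cut off in this excerpt, so the following is only an illustrative sketch. The parameter names (`inputData`, `inputDataType`, `templateCollectionReference`, `rootTemplate`) are the publicly documented `$convert-data` parameters rather than values taken from this article, and the HL7v2 message, template collection, and root template shown are placeholders.

```bash
# Sketch only: {{fhirurl}}, <ACCESS_TOKEN>, and all parameter values are placeholders.
curl -X POST '{{fhirurl}}/$convert-data' \
  -H 'Authorization: Bearer <ACCESS_TOKEN>' \
  -H 'Content-Type: application/json' \
  -d '{
        "resourceType": "Parameters",
        "parameter": [
          { "name": "inputData", "valueString": "<HL7v2 message, escaped as a single JSON string>" },
          { "name": "inputDataType", "valueString": "Hl7v2" },
          { "name": "templateCollectionReference", "valueString": "microsofthealth/fhirconverter:default" },
          { "name": "rootTemplate", "valueString": "ADT_A01" }
        ]
      }'
```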
healthcare-apis | De Identified Export | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/de-identified-export.md | Title: Exporting de-identified data for FHIR service + Title: Using the FHIR service to export de-identified data description: This article describes how to set up and use de-identified export Previously updated : 06/06/2022 Last updated : 08/15/2022 # Exporting de-identified data > [!Note] -> Results when using the de-identified export will vary based on factors such as data inputted, and functions selected by the customer. Microsoft is unable to evaluate the de-identified export outputs or determine the acceptability for customer's use cases and compliance needs. The de-identified export is not guaranteed to meet any specific legal, regulatory, or compliance requirements. +> Results when using the FHIR service's de-identified export will vary based on the nature of the data being exported and what de-id functions are in use. Microsoft is unable to evaluate de-identified export outputs or determine the acceptability for customers' use cases and compliance needs. The FHIR service's de-identified export is not guaranteed to meet any specific legal, regulatory, or compliance requirements. -The $export command can also be used to export de-identified data from the FHIR server. It uses the anonymization engine from [FHIR tools for anonymization](https://github.com/microsoft/FHIR-Tools-for-Anonymization), and takes anonymization config details in query parameters. You can create your own anonymization config file or use the [sample config file](https://github.com/microsoft/Tools-for-Health-Data-Anonymization/blob/master/docs/FHIR-anonymization.md#sample-configuration-file) for HIPAA Safe Harbor method as a starting point. + The FHIR service is able to de-identify data on export when running an `$export` operation. For de-identified export, the FHIR service uses the anonymization engine from the [FHIR tools for anonymization](https://github.com/microsoft/FHIR-Tools-for-Anonymization) (OSS) project on GitHub. There is a [sample config file](https://github.com/microsoft/Tools-for-Health-Data-Anonymization/blob/master/docs/FHIR-anonymization.md#sample-configuration-file) to help you get started redacting/transforming FHIR data fields that contain personally identifying information. ## Configuration file -The anonymization engine comes with a sample configuration file to help meet the requirements of HIPAA Safe Harbor Method. The configuration file is a JSON file with four sections: `fhirVersion`, `processingErrors`, `fhirPathRules`, `parameters`. +The anonymization engine comes with a sample configuration file to help you get started with [HIPAA Safe Harbor Method](https://www.hhs.gov/hipaa/for-professionals/privacy/special-topics/de-identification/https://docsupdatetracker.net/index.html#safeharborguidance) de-id requirements. The configuration file is a JSON file with four properties: `fhirVersion`, `processingErrors`, `fhirPathRules`, `parameters`. * `fhirVersion` specifies the FHIR version for the anonymization engine.-* `processingErrors` specifies what action to take for the processing errors that may arise during the anonymization. You can _raise_ or _keep_ the exceptions based on your needs. -* `fhirPathRules` specifies which anonymization method is to be used. The rules are executed in the order of appearance in the configuration file. -* `parameters` sets rules for the anonymization behaviors specified in _fhirPathRules_. 
+* `processingErrors` specifies what action to take for any processing errors that may arise during the anonymization. You can _raise_ or _keep_ the exceptions based on your needs. +* `fhirPathRules` specifies which anonymization method to use. The rules are executed in the order they appear in the configuration file. +* `parameters` sets additional controls for the anonymization behavior specified in _fhirPathRules_. -Here's a sample configuration file for R4: +Here's a sample configuration file for FHIR R4: ```json { Here's a sample configuration file for R4: } ``` -For more detailed information on each of these four sections of the configuration file, select [here](https://github.com/microsoft/Tools-for-Health-Data-Anonymization/blob/master/docs/FHIR-anonymization.md#configuration-file-format). -## Using $export command for the de-identified data - `https://<<FHIR service base URL>>/$export?_container=<<container_name>>&_anonymizationConfig=<<config file name>>&_anonymizationConfigEtag=<<ETag on storage>>` +For detailed information on the settings within the configuration file, visit [here](https://github.com/microsoft/Tools-for-Health-Data-Anonymization/blob/master/docs/FHIR-anonymization.md#configuration-file-format). ++## Using the `$export` endpoint for de-identifying data ++The API call below demonstrates how to form a request for de-id on export from the FHIR service. ++``` +GET https://<<FHIR service base URL>>/$export?_container=<<container_name>>&_anonymizationConfig=<<config file name>>&_anonymizationConfigEtag=<<ETag on storage>> +``` ++You will need to create a container for the de-identified export in your ADLS Gen2 account and specify the `<<container_name>>` in the API request as shown above. Additionally, you will need to place the JSON config file with the anonymization rules inside the container and specify the `<<config file name>>` in the API request (see above). ++> [!Note] +> It is common practice to name the container `anonymization`. The JSON file within the container is often named `anonymizationConfig.json`. > [!Note] -> Right now the FHIR service only supports de-identified export at the system level ($export). +> Right now the FHIR service only supports de-identified export at the system level (`$export`). |Query parameter | Example |Optionality| Description| |||--||-| _\_anonymizationConfig_ |DemoConfig.json|Required for de-identified export |Name of the configuration file. See the configuration file format [here](https://github.com/microsoft/FHIR-Tools-for-Anonymization#configuration-file-format). This file should be kept inside a container named **anonymization** within the same Azure storage account that is configured as the export location. | -| _\_anonymizationConfigEtag_|"0x8D8494A069489EC"|Optional for de-identified export|This is the Etag of the configuration file. You can get the Etag using Azure Storage Explorer from the blob property| +| `anonymizationConfig` |`anonymizationConfig.json`|Required for de-identified export |Name of the configuration file. See the configuration file format [here](https://github.com/microsoft/FHIR-Tools-for-Anonymization#configuration-file-format). This file should be kept inside a container named `anonymization` within the ADLS Gen2 account that is configured as the export location. | +| `anonymizationConfigEtag`|"0x8D8494A069489EC"|Optional for de-identified export|This is the Etag of the configuration file. 
You can get the Etag using Azure Storage Explorer from the blob property.| > [!IMPORTANT]-> Both raw export as well as de-identified export writes to the same Azure storage account specified as part of export configuration. It is recommended that you use different containers corresponding to different de-identified config and manage user access at the container level. +> Both the raw export and de-identified export operations write to the same Azure storage account specified in the export configuration for the FHIR service. If you have need for multiple de-identification configurations, it is recommended that you create a different container for each configuration and manage user access at the container level. ## Next steps -In this article, you've learned how to set up and use de-identified export. For more information about how to export FHIR data, see +In this article, you've learned how to set up and use the de-identified export feature in the FHIR service. For more information about how to export FHIR data, see >[!div class="nextstepaction"] >[Export data](export-data.md) -FHIR® is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7. +FHIR® is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7. |
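The body of the sample configuration file is elided in the diff above. As a rough sketch only, using the four properties named in the prose (exact key names, rule paths, and method values should be checked against the linked sample configuration file), a minimal config could look like:

```json
{
  "fhirVersion": "R4",
  "processingErrors": "raise",
  "fhirPathRules": [
    { "path": "Patient.name", "method": "redact" },
    { "path": "Resource.id", "method": "cryptoHash" }
  ],
  "parameters": {
    "cryptoHashKey": "<your-hash-key>",
    "enablePartialAgesForRedact": true
  }
}
```

As noted above, the rules are executed in the order they appear in the file.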
healthcare-apis | Export Data | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/export-data.md | -Before attempting to use `$export`, make sure that your FHIR service is configured to connect with an ADLS Gen2 storage account. For configuring export settings and creating an ADLS Gen2 storage account, refer to the [Configure settings for export](./configure-export-data.md) page. +Before attempting to use `$export`, make sure that your FHIR service is configured to connect with an Azure Data Lake Storage Gen2 (ADLS Gen2) account. For configuring export settings and creating an ADLS Gen2 account, refer to the [Configure settings for export](./configure-export-data.md) page. ## Calling the `$export` endpoint -After setting up the FHIR service to connect with an ADLS Gen2 storage account, you can call the `$export` endpoint and the FHIR service will export data into a blob storage container inside the storage account. The example request below exports all resources into a container specified by name (`{{containerName}}`). Note that the container in the ADLS Gen2 account must be created beforehand if you want to specify the `{{containerName}}` in the request. +After setting up the FHIR service to connect with an ADLS Gen2 account, you can call the `$export` endpoint and the FHIR service will export data into a blob storage container inside the storage account. The example request below exports all resources into a container specified by name (`{{containerName}}`) within the ADLS Gen2 account. Note that the container in the ADLS Gen2 account must be created beforehand if you want to specify the `{{containerName}}` in the request. ``` GET {{fhirurl}}/$export?_container={{containerName}} For general information about the FHIR `$export` API spec, please see the [HL7 F **Jobs stuck in a bad state** -In some situations, there's a potential for a job to be stuck in a bad state while attempting to `$export` data from the FHIR service. This can occur especially if the ADLS Gen2 storage account permissions haven't been set up correctly. One way to check the status of your `$export` operation is to go to your storage account's **Storage browser** and see if any `.ndjson` files are present in the export container. If the files aren't present and there are no other `$export` jobs running, then there's a possibility the current job is stuck in a bad state. In this case, you can cancel the `$export` job by calling the FHIR service API with a `DELETE` request. Later you can requeue the `$export` job and try again. Information about canceling an `$export` operation can be found in the [Bulk Data Delete Request](https://hl7.org/fhir/uv/bulkdata/export/index.html#bulk-data-delete-request) documentation from HL7. +In some situations, there's a potential for a job to be stuck in a bad state while attempting to `$export` data from the FHIR service. This can occur especially if the ADLS Gen2 account permissions haven't been set up correctly. One way to check the status of your `$export` operation is to go to your storage account's **Storage browser** and see if any `.ndjson` files are present in the export container. If the files aren't present and there are no other `$export` jobs running, then there's a possibility the current job is stuck in a bad state. In this case, you can cancel the `$export` job by calling the FHIR service API with a `DELETE` request. Later you can requeue the `$export` job and try again.
Information about canceling an `$export` operation can be found in the [Bulk Data Delete Request](https://hl7.org/fhir/uv/bulkdata/export/index.html#bulk-data-delete-request) documentation from HL7. > [!NOTE] > In the FHIR service, the default time for an `$export` operation to idle in a bad state is 10 minutes before the service will stop the operation and move to a new job. In addition to checking the presence of exported files in your storage account, ### Exporting FHIR data to ADLS Gen2 -Currently the FHIR service supports `$export` to ADLS Gen2 storage accounts, with the following limitations: +Currently the FHIR service supports `$export` to ADLS Gen2 accounts, with the following limitations: - ADLS Gen2 provides [hierarchical namespaces](../../storage/blobs/data-lake-storage-namespace.md), yet there isn't a way to target `$export` operations to a specific subdirectory within a container. The FHIR service is only able to specify the destination container for the export (where a new folder for each `$export` operation is created). - Once an `$export` operation is complete and all data has been written inside a folder, the FHIR service doesn't export anything to that folder again since subsequent exports to the same container will be inside a newly created folder. The FHIR service supports the following query parameters for filtering exported | `_outputFormat` | Yes | Currently supports three values to align to the FHIR Spec: `application/fhir+ndjson`, `application/ndjson`, or just `ndjson`. All export jobs will return `.ndjson` files and the passed value has no effect on code behavior. | | `_since` | Yes | Allows you to only export resources that have been modified since the time provided. | | `_type` | Yes | Allows you to specify which types of resources will be included. For example, `_type=Patient` would return only patient resources.|-| `_typeFilter` | Yes | To request finer-grained filtering, you can use `_typeFilter` along with the `_type` parameter. The value of the `_typeFilter` parameter is a comma-separated list of FHIR queries that further restrict the results. | +| `_typeFilter` | Yes | To request finer-grained filtering, you can use `_typeFilter` along with the `_type` parameter. The value of the `_typeFilter` parameter is a comma-separated list of FHIR queries that further limit the results. | | `_container` | No | Specifies the name of the container in the configured storage account where the data should be exported. If a container is specified, the data will be exported into a folder in that container. If the container isn't specified, the data will be exported to a new container with an auto-generated name. | > [!Note]-> Only storage accounts in the same subscription as that for the FHIR service are allowed to be registered as the destination for `$export` operations. +> Only storage accounts in the same subscription as the FHIR service are allowed to be registered as the destination for `$export` operations. ## Next steps |
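Combining the query parameters in the table above, a request that exports only `Patient` and `Observation` resources modified since the start of 2022 into a named container would look something like the following (placeholders follow the `{{fhirurl}}`/`{{containerName}}` convention used earlier in this row):

```
GET {{fhirurl}}/$export?_container={{containerName}}&_type=Patient,Observation&_since=2022-01-01T00:00:00Z
```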
healthcare-apis | Import Data | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/import-data.md | Last updated 06/06/2022 -# Bulk-import FHIR data (Preview) +# Bulk-import FHIR data The bulk-import feature enables importing Fast Healthcare Interoperability Resources (FHIR®) data to the FHIR server at high throughput using the $import operation. This feature is suitable for initial data load into the FHIR server. The bulk-import feature enables importing Fast Healthcare Interoperability Resou * Conditional references in resources aren't supported. * If multiple resources share the same resource ID, then only one of those resources will be imported at random and an error will be logged corresponding to the remaining resources sharing the ID. * The data to be imported must be in the same Tenant as that of the FHIR service.-* Maximum number of files to be imported per operation is 1,000. +* Maximum number of files to be imported per operation is 10,000. ## Using $import operation Below are some error codes you may encounter and the solutions to help you resol As illustrated in this article, $import is one way of doing bulk import. Another way is using an open-source solution, called [FHIR Bulk Loader](https://github.com/microsoft/fhir-loader). FHIR-Bulk Loader is an Azure Function App solution that provides the following capabilities for ingesting FHIR data: * Imports FHIR Bundles (compressed and non-compressed) and NDJSON files into a FHIR service-* High Speed Parallel Event Grid that triggers from storage accounts or other event grid resources +* High Speed Parallel Event Grid that triggers from storage accounts or other Event Grid resources * Complete Auditing, Error logging and Retry for throttled transactions ## Next steps In this article, you've learned about how the Bulk import feature enables import >[!div class="nextstepaction"] >[Copy data from Azure API for FHIR to Azure Synapse Analytics](copy-to-synapse.md) -FHIR® is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7. +FHIR® is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7. |
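The `$import` request body isn't shown in this tracked change. As a sketch only, assuming the commonly documented parameter layout (a `Parameters` resource with `inputFormat`, `mode`, and one `input` part per staged NDJSON file; the storage URL below is hypothetical), a bulk-import request might look like:

```json
{
  "resourceType": "Parameters",
  "parameter": [
    { "name": "inputFormat", "valueString": "application/fhir+ndjson" },
    { "name": "mode", "valueString": "InitialLoad" },
    {
      "name": "input",
      "part": [
        { "name": "type", "valueString": "Patient" },
        { "name": "url", "valueUri": "https://<storage-account>.blob.core.windows.net/fhirimport/Patient.ndjson" }
      ]
    }
  ]
}
```

Each file referenced by an `input` part must live in the same tenant as the FHIR service, per the limitations listed above.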
healthcare-apis | How To Use Custom Functions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/how-to-use-custom-functions.md | Title: How to use custom functions in the MedTech service - Azure Health Data Services + Title: How to use custom functions with the MedTech service device mapping - Azure Health Data Services description: This article describes how to use custom functions with MedTech service device mapping. Previously updated : 08/05/2022 Last updated : 08/16/2022 # How to use custom functions -Many functions are available when using **JmesPath** as the expression language. Besides the functions available as part of the JmesPath specification, many more custom functions may also be used. This article describes MedTech service-specific custom functions for use with the MedTech service device mapping during the device message normalization process. +Many functions are available when using **JmesPath** as the expression language. Besides the functions available as part of the JmesPath specification, many more custom functions may also be used. This article describes the MedTech service-specific custom functions for use with the MedTech service [device mapping](how-to-use-device-mappings.md) during the device message [normalization](iot-data-flow.md#normalize) process. > [!NOTE] > Many functions are available when using **JmesPath** as the expression language. >[!TIP] >-> Check out the [IoMT Connector Data Mapper](https://github.com/microsoft/iomt-fhir/tree/master/tools/data-mapper) tool for editing, testing, and troubleshooting the MedTech service Device and FHIR destination mappings. Export mappings for uploading to the MedTech service in the Azure portal or use with the [open-source version](https://github.com/microsoft/iomt-fhir) of the MedTech service. +> Check out the [IoMT Connector Data Mapper](https://github.com/microsoft/iomt-fhir/tree/master/tools/data-mapper) tool for editing, testing, and troubleshooting the MedTech service device and FHIR destination mappings. Export mappings for uploading to the MedTech service in the Azure portal or use with the [open-source version](https://github.com/microsoft/iomt-fhir) of the MedTech service. ## Function signature return_type function_name(type $argname) The signature indicates the valid types for the arguments. If an invalid type is passed in for an argument, an error will occur. > [!NOTE]+> > When math-related functions are done, the end result **must** be able to fit within a C# [long](/dotnet/csharp/language-reference/builtin-types/integral-numeric-types#characteristics-of-the-integral-types) value. If the end result is unable to fit within a C# long value, then a mathematical error will occur. ## Exception handling |
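To make the signature notation concrete, two signatures written in that form are sketched below. The function names (`multiply`, `fromUnixTimestamp`) are taken from the open-source iomt-fhir custom-function set and are an assumption here; the authoritative list and signatures are in the article itself.

```
number multiply(number $left, number $right)
string fromUnixTimestamp(number $unixTimestamp)
```

Per the note above, the result of any math-related function must fit within a C# long, or a mathematical error is raised during normalization.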
healthcare-apis | Iot Connector Machine Learning | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/iot-connector-machine-learning.md | In this article, we'll explore using the MedTech service and Azure Machine Learn ## MedTech service and Azure Machine Learning Service reference architecture -MedTech service enables IoT devices seamless integration with Fast Healthcare Interoperability Resources (FHIR®) services. This reference architecture is designed to accelerate adoption of Internet of Medical Things (IoMT) projects. This solution uses Azure Databricks for the Machine Learning (ML) compute. However, Azure ML Services with Kubernetes or a partner ML solution could fit into the Machine Learning Scoring Environment. +The MedTech service enables seamless integration of IoT devices with Fast Healthcare Interoperability Resources (FHIR®) services. This reference architecture is designed to accelerate adoption of Internet of Medical Things (IoMT) projects. This solution uses Azure Databricks for the Machine Learning (ML) compute. However, Azure ML Services with Kubernetes or a partner ML solution could fit into the Machine Learning Scoring Environment. The four line colors show the different parts of the data journey. The four line colors show the different parts of the data journey. 1. Data from IoT device or via device gateway sent to Azure IoT Hub/Azure IoT Edge. 2. Data from Azure IoT Edge sent to Azure IoT Hub. 3. Copy of raw IoT device data sent to a secure storage environment for device administration.-4. PHI IoMT payload moves from Azure IoT Hub to the MedTech service. Multiple Azure services are represented by 1 MedTech service icon. +4. PHI IoMT payload moves from Azure IoT Hub to the MedTech service. Multiple Azure services are represented by the MedTech service icon. 5. Three parts to number 5: - a. MedTech service request Patient resource from FHIR service. - b. FHIR service sends Patient resource back to the MedTech service. - c. IoT Patient Observation is record in FHIR service. + a. The MedTech service requests the Patient resource from the FHIR service. + b. The FHIR service sends the Patient resource back to the MedTech service. + c. IoT Patient Observation is recorded in the FHIR service. **Machine Learning and AI Data Route – Steps 6 through 11** -6. Normalized ungrouped data stream sent to Azure Function (ML Input). +6. Normalized ungrouped data stream sent to an Azure Function (ML Input). 7. Azure Function (ML Input) requests Patient resource to merge with IoMT payload. 8. IoMT payload with PHI is sent to an event hub for distribution to Machine Learning compute and storage. 9. PHI IoMT payload is sent to Azure Data Lake Storage Gen 2 for scoring observation over longer time windows. In this article, you've learned about the MedTech service and Machine Learning s >[!div class="nextstepaction"] >[MedTech service overview](iot-connector-overview.md) -(FHIR®) is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7. +FHIR® is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission. |
healthcare-apis | Iot Connector Power Bi | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/iot-connector-power-bi.md | In this article, we'll explore using the MedTech service and Microsoft Power Bus ## MedTech service and Power BI reference architecture -The reference architecture below shows the basic components of using Microsoft cloud services to enable Power BI on top of Internet of Medical Things (IoMT) and Fast Healthcare Interoperability Resources (FHIR®) data. +The reference architecture below shows the basic components of using the Microsoft cloud services to enable Power BI on top of Internet of Medical Things (IoMT) and Fast Healthcare Interoperability Resources (FHIR®) data. You can even embed Power BI dashboards inside the Microsoft Teams client to further enhance care team coordination. For more information on embedding Power BI in Teams, visit [here](/power-bi/collaborate-share/service-embed-report-microsoft-teams). :::image type="content" source="media/iot-concepts/iot-connector-power-bi.png" alt-text="Screenshot of the MedTech service and Power BI." lightbox="media/iot-concepts/iot-connector-power-bi.png"::: -MedTech service can ingest IoT data from most IoT devices or gateways whatever the location, data center, or cloud. +The MedTech service can ingest IoT data from most IoT devices or gateways whatever the location, data center, or cloud. We do encourage the use of Azure IoT services to assist with device/gateway connectivity. In this article, you've learned about the MedTech service and Power BI integrati >[!div class="nextstepaction"] >[MedTech service overview](iot-connector-overview.md) -(FHIR®) is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7. +FHIR® is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission. |
healthcare-apis | Iot Connector Teams | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/iot-connector-teams.md | In this article, we'll explore using the MedTech service and Microsoft Teams for ## MedTech service and Teams notifications reference architecture -When combining MedTech service, a Fast Healthcare Interoperability Resources (FHIR®) service, and Teams, you can enable multiple care solutions. +When combining the MedTech service, a Fast Healthcare Interoperability Resources (FHIR®) service, and Teams, you can enable multiple care solutions. -Below is the MedTech service to Teams notifications conceptual architecture for enabling the MedTech service, FHIR, and Teams Patient App. +Below is the MedTech service to Teams notifications conceptual architecture for enabling the MedTech service, the FHIR service, and the Teams Patient App. You can even embed Power BI Dashboards inside the Microsoft Teams client. For more information on embedding Power BI in Microsoft Teams, visit [here](/power-bi/collaborate-share/service-embed-report-microsoft-teams). In this article, you've learned about the MedTech service and Teams notification >[!div class="nextstepaction"] >[MedTech service overview](iot-connector-overview.md) -(FHIR®) is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7. +FHIR® is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission. |
healthcare-apis | Iot Data Flow | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/iot-data-flow.md | -Data from health-related devices or medical devices flows through a path in which the MedTech service transforms data into FHIR, and then data is stored on and accessed from the FHIR service. The health data path follows these steps in this order: ingest, normalize, group, transform, and persist. Health data is retrieved from the device in the first step of ingestion. After the data is received, it's processed, or normalized per a user-selected/user-created schema template called the device mapping. Normalized health data is simpler to process and can be grouped. In the next step, health data is grouped into three Operate parameters. After the health data is normalized and grouped, it can be processed or transformed through FHIR destination mappings, and then saved or persisted on the FHIR service. +Data from health-related devices or medical devices flows through a path in which the MedTech service transforms data into FHIR, and then data is stored on and accessed from the FHIR service. The health data path follows these steps in this order: ingest, normalize, group, transform, and persist. Health data is retrieved from the device in the first step of ingestion. After the data is received, it's processed, or normalized per a user-selected/user-created schema template called the device mapping. Normalized health data is simpler to process and can be grouped. In the next step, health data is grouped into three Operate parameters. After the health data is normalized and grouped, it can be processed or transformed through a FHIR destination mapping, and then saved or persisted on the FHIR service. This article goes into more depth about each step in the data flow. The next steps are [Deploy the MedTech service using the Azure portal](deploy-iot-connector-in-azure.md) by using a device mapping (the normalization step) and a FHIR destination mapping (the transformation step). This next section of the article describes the stages that IoMT (Internet of Med Ingest is the first stage where device data is received into the MedTech service. The ingestion endpoint for device data is hosted on an [Azure Event Hubs](../../event-hubs/index.yml). The Azure Event Hubs platform supports high scale and throughput with ability to receive and process millions of messages per second. It also enables the MedTech service to consume messages asynchronously, removing the need for devices to wait while device data gets processed. > [!NOTE]+> > JSON is the only supported format at this time for device data. ## Normalize Group is the next stage where the normalized messages available from the previou Device identity and measurement type grouping enable use of [SampledData](https://www.hl7.org/fhir/datatypes.html#SampledData) measurement type. This type provides a concise way to represent a time-based series of measurements from a device in FHIR. And time period controls the latency at which Observation resources generated by the MedTech service are written to FHIR service. > [!NOTE]+> > The time period value is defaulted to 15 minutes and cannot be configured for preview. 
## Transform In the Transform stage, grouped-normalized messages are processed through FHIR d At this point, [Device](https://www.hl7.org/fhir/device.html) resource, along with its associated [Patient](https://www.hl7.org/fhir/patient.html) resource, is also retrieved from the FHIR service using the device identifier present in the message. These resources are added as a reference to the Observation resource being created. > [!NOTE]+> > All identity lookups are cached once resolved to decrease load on the FHIR service. If you plan on reusing devices with multiple patients, it is advised that you create a virtual device resource that is specific to the patient and send the virtual device identifier in the message payload. The virtual device can be linked to the actual device resource as a parent. If no Device resource for a given device identifier exists in the FHIR service, the outcome depends upon the value of `Resolution Type` set at the time of creation. When set to `Lookup`, the specific message is ignored, and the pipeline will continue to process other incoming messages. If set to `Create`, the MedTech service will create bare-bones Device and Patient resources on the FHIR service. Once the Observation FHIR resource is generated in the Transform stage, the reso ## Next steps -To learn how to create Device and FHIR destination mappings, see +To learn how to create device and FHIR destination mappings, see > [!div class="nextstepaction"] > [Device mappings](how-to-use-device-mappings.md) |
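To ground the normalization step described in this row, here's a sketch of a device mapping using the `JsonPathContent` template type from the device mappings article. The field values (device message properties such as `heartRate`, `deviceId`, and `endDate`) are illustrative assumptions, not part of the tracked change:

```json
{
  "templateType": "CollectionContent",
  "template": [
    {
      "templateType": "JsonPathContent",
      "template": {
        "typeName": "heartrate",
        "typeMatchExpression": "$..[?(@heartRate)]",
        "deviceIdExpression": "$.deviceId",
        "timestampExpression": "$.endDate",
        "values": [
          { "valueName": "hr", "valueExpression": "$.heartRate", "required": "true" }
        ]
      }
    }
  ]
}
```

The normalized values produced by a template like this are what get grouped and then transformed through the FHIR destination mapping in the later stages.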
healthcare-apis | Release Notes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/release-notes.md | Title: Azure Health Data Services monthly releases description: This article provides details about the Azure Health Data Services monthly features and enhancements. -+ Last updated 08/09/2022-+ |
iot-central | Howto Configure File Uploads | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-configure-file-uploads.md | +To learn how to upload files by using the IoT Central REST API, see [How to use the IoT Central REST API to upload a file.](../core/howto-upload-file-rest-api.md) + ## Prerequisites You must be an administrator in your IoT Central application to configure file uploads. |
iot-central | Howto Control Devices With Rest Api | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-control-devices-with-rest-api.md | For the reference documentation for the IoT Central REST API, see [Azure IoT Cen [!INCLUDE [iot-central-postman-collection](../../../includes/iot-central-postman-collection.md)] +To learn how to control devices by using the IoT Central UI, see [Use properties in an Azure IoT Central solution](../core/howto-use-properties.md) and [How to use commands in an Azure IoT Central solution](../core/howto-use-commands.md). + ## Components and modules Components let you group and reuse device capabilities. To learn more about components and device models, see the [IoT Plug and Play modeling guide](../../iot-develop/concepts-modeling-guide.md). |
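As a companion to the REST cross-reference added above, here's a sketch of reading and writing a writable property through the devices API. The device ID, property name, and api-version value are assumptions for illustration; check the REST reference for the current api-version.

```
GET https://{your-app-subdomain}.azureiotcentral.com/api/devices/{deviceId}/properties?api-version=2022-05-31

PATCH https://{your-app-subdomain}.azureiotcentral.com/api/devices/{deviceId}/properties?api-version=2022-05-31

{
  "targetTemperature": 65.5
}
```

For capabilities defined inside a component, the same calls take a `/components/{componentName}` segment before `/properties`.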
iot-central | Howto Create Analytics | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-create-analytics.md | +To learn how to query devices by using the IoT Central REST API, see [How to use the IoT Central REST API to query devices.](../core/howto-query-with-rest-api.md) + ## Understand the data explorer UI The analytics user interface has three main components: |
iot-central | Howto Create And Manage Applications Csp | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-create-and-manage-applications-csp.md | To create an Azure IoT Central application, select **Build** in the left menu. C  -## Pricing plan --You can only create applications that use a standard pricing plan as a CSP. To showcase Azure IoT Central to your customer, you can create an application that uses the free pricing plan separately. Learn more about the free and standard pricing plans on the [Azure IoT Central pricing page](https://azure.microsoft.com/pricing/details/iot-central/). --You can only create applications that use a standard pricing plan as a CSP. To showcase Azure IoT Central to your customer, you can create an application that uses the free pricing plan separately. Learn more about the free and standard pricing plans on the [Azure IoT Central pricing page](https://azure.microsoft.com/pricing/details/iot-central/). - ## Application name The name of your application is displayed on the **Application Manager** page and within each Azure IoT Central application. You can choose any name for your Azure IoT Central application. Choose a name that makes sense to you and to others in your organization. |
iot-central | Howto Create Iot Central Application | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-create-iot-central-application.md | Whichever approach you choose, the configuration options are the same, and the p [!INCLUDE [Warning About Access Required](../../../includes/iot-central-warning-contribitorrequireaccess.md)] +To learn how to manage IoT Central application by using the IoT Central REST API, see [Use the REST API to create and manage IoT Central applications.](../core/howto-manage-iot-central-with-rest-api.md) + ## Options This section describes the available options when you create an IoT Central application. Depending on the method you choose, you might need to supply the options on a form or as command-line parameters: ### Pricing plans -The *free* plan lets you create an IoT Central application to try for seven days. The free plan: --- Doesn't require an Azure subscription.-- Can only be created and managed on the [Azure IoT Central](https://aka.ms/iotcentral) site.-- Lets you connect up to five devices.-- Can be upgraded to a standard plan if you want to keep your application.- The *standard* plans: -- Do require an Azure subscription. You should have at least **Contributor** access in your Azure subscription. If you created the subscription yourself, you're automatically an administrator with sufficient access. To learn more, see [What is Azure role-based access control?](../../role-based-access-control/overview.md).+- You should have at least **Contributor** access in your Azure subscription. If you created the subscription yourself, you're automatically an administrator with sufficient access. To learn more, see [What is Azure role-based access control?](../../role-based-access-control/overview.md). - Let you create and manage IoT Central applications using any of the available methods. - Let you connect as many devices as you need. You're billed by device. To learn more, see [Azure IoT Central pricing](https://azure.microsoft.com/pricing/details/iot-central/).-- Cannot be downgraded to a free plan, but can be upgraded or downgraded to other standard plans.+- Can be upgraded or downgraded to other standard plans. The following table summarizes the differences between the three standard plans: The **My apps** page lists all the IoT Central applications you have access to. ## Copy an application -You can create a copy of any application, minus any device instances, device data history, and user data. The copy uses a standard pricing plan that you'll be billed for. You can't create an application that uses the free pricing plan by copying an application. +You can create a copy of any application, minus any device instances, device data history, and user data. The copy uses a standard pricing plan that you'll be billed for. Select **Copy**. In the dialog box, enter the details for the new application. Then select **Copy** to confirm that you want to continue. To learn more about the fields in the form, see [Create an application](howto-create-iot-central-application.md). |
iot-central | Howto Create Organizations | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-create-organizations.md | The following screenshot shows an organization hierarchy definition in IoT Centr :::image type="content" source="media/howto-create-organization/organizations-definition.png" alt-text="Screenshot of organizations hierarchy definition." lightbox="media/howto-create-organization/organizations-definition.png"::: +To learn how to manage organizations by using the IoT Central REST API, see [How to use the IoT Central REST API to manage organizations.](../core/howto-manage-organizations-with-rest-api.md) + ## Create a hierarchy To start using organizations, you need to define your organization hierarchy. Each organization in the hierarchy acts as a logical container where you place devices, save dashboards and device groups, and invite users. To create your organizations, go to the **Permissions** section in your IoT Central application, select the **Organizations** tab, and select either **+ New** or use the context menu for an existing organization. To create one or many organizations at a time, select **+ Add another organization**: |
iot-central | Howto Edit Device Template | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-edit-device-template.md | To help you avoid any unintended consequences from editing a device template, th To learn more about device templates and how to create one, see [What are device templates?](concepts-device-templates.md) and [Set up a device template](howto-set-up-template.md). +To learn how to manage device templates by using the IoT Central REST API, see [How to use the IoT Central REST API to manage device templates.](../core/howto-manage-device-templates-with-rest-api.md) + ## Modify a device template Additive changes, such as adding a capability or interface to a model are non-breaking changes. You can make additive changes to a model at any stage of the development life cycle. |
iot-central | Howto Export Data Legacy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-export-data-legacy.md | Now that you have a destination to export data to, follow these steps to set up 4. Enter a name for the export. In the drop-down list box, select your **namespace**, or **Enter a connection string**. - - You only see storage accounts, Event Hubs namespaces, and Service Bus namespaces in the same subscription as your IoT Central application. If you want to export to a destination outside of this subscription, choose **Enter a connection string** and see step 6. - - For apps created using the free pricing plan, the only way to configure data export is through a connection string. Apps on the free pricing plan don't have an associated Azure subscription. + > [!Tip] + > You only see storage accounts, Event Hubs namespaces, and Service Bus namespaces in the same subscription as your IoT Central application. If you want to export to a destination outside of this subscription, choose **Enter a connection string** and see step 6.  |
iot-central | Howto Export To Blob Storage | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-export-to-blob-storage.md | This article describes how to configure data export to send data to the Blob St [!INCLUDE [iot-central-data-export](../../../includes/iot-central-data-export.md)] +To learn how to manage data export by using the IoT Central REST API, see [How to use the IoT Central REST API to manage data exports.](../core/howto-manage-data-export-with-rest-api.md) + ## Set up a Blob Storage export destination |
iot-central | Howto Manage Data Export With Rest Api | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-manage-data-export-with-rest-api.md | For the reference documentation for the IoT Central REST API, see [Azure IoT Cen [!INCLUDE [iot-central-postman-collection](../../../includes/iot-central-postman-collection.md)] +To learn how to manage data export by using the IoT Central UI, see [Export IoT data to Blob Storage.](../core/howto-export-to-blob-storage.md) + ## Data export You can use the IoT Central data export feature to stream telemetry, property changes, device connectivity events, device lifecycle events, and device template lifecycle events to destinations such as Azure Event Hubs, Azure Service Bus, Azure Blob Storage, and webhook endpoints. |
iot-central | Howto Manage Device Templates With Rest Api | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-manage-device-templates-with-rest-api.md | For the reference documentation for the IoT Central REST API, see [Azure IoT Cen [!INCLUDE [iot-central-postman-collection](../../../includes/iot-central-postman-collection.md)] +To learn how to manage device templates by using the IoT Central UI, see [How to set up device templates](../core/howto-set-up-template.md) and [How to edit device templates](../core/howto-edit-device-template.md) + ## Device templates A device template contains a device model, cloud property definitions, and view definitions. The REST API lets you manage the device model and cloud property definitions. Use the UI to create and manage views. |
iot-central | Howto Manage Devices In Bulk | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-manage-devices-in-bulk.md | +To learn how to manage jobs by using the IoT Central REST API, see [How to use the IoT Central REST API to manage jobs.](../core/howto-manage-jobs-with-rest-api.md) + ## Create and run a job The following example shows you how to create and run a job to set the light threshold for a group of devices. You use the job wizard to create and run jobs. You can save a job to run later. |
iot-central | Howto Manage Devices Individually | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-manage-devices-individually.md | This article describes how you manage devices in your Azure IoT Central applicat To learn how to manage custom groups of devices, see [Tutorial: Use device groups to analyze device telemetry](tutorial-use-device-groups.md). +To learn how to manage devices by using the IoT Central REST API, see [How to use the IoT Central REST API to manage devices.](../core/howto-manage-devices-with-rest-api.md) + ## View your devices To view an individual device: |
iot-central | Howto Manage Devices With Rest Api | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-manage-devices-with-rest-api.md | For the reference documentation for the IoT Central REST API, see [Azure IoT Cen [!INCLUDE [iot-central-postman-collection](../../../includes/iot-central-postman-collection.md)] +To learn how to manage devices by using the IoT Central UI, see [Manage individual devices in your Azure IoT Central application.](../core/howto-manage-devices-individually.md) + ## Devices REST API The IoT Central REST API lets you: |
iot-central | Howto Manage Iot Central With Rest Api | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-manage-iot-central-with-rest-api.md | To use this API, you need a bearer token for the `management.azure.com` resource az account get-access-token --resource https://management.azure.com ``` +To learn how to manage IoT Central application by using the IoT Central UI, see [Create an IoT Central application.](../core/howto-create-iot-central-application.md) + ## List your applications To get a list of the IoT Central applications in a subscription: |
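For the control-plane call referenced above ("List your applications"), a sketch of the request is shown below. `Microsoft.IoTCentral/iotApps` is the resource provider path for IoT Central applications; the api-version shown is an assumption, so substitute the latest supported version.

```
GET https://management.azure.com/subscriptions/{subscriptionId}/providers/Microsoft.IoTCentral/iotApps?api-version=2021-06-01
Authorization: Bearer <token returned by the az account get-access-token command above>
```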
iot-central | Howto Manage Jobs With Rest Api | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-manage-jobs-with-rest-api.md | To learn how to create and manage jobs in the UI, see [Manage devices in bulk in [!INCLUDE [iot-central-postman-collection](../../../includes/iot-central-postman-collection.md)] +To learn how to manage jobs by using the IoT Central UI, see [Manage devices in bulk in your Azure IoT Central application.](../core/howto-manage-devices-in-bulk.md) + ## Job payloads Many of the APIs described in this article include a definition that looks like the following JSON snippet: |
iot-central | Howto Manage Organizations With Rest Api | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-manage-organizations-with-rest-api.md | To learn more about organizations in IoT Central Application, see [Manage IoT Ce [!INCLUDE [iot-central-postman-collection](../../../includes/iot-central-postman-collection.md)] +To learn how to manage organizations by using the IoT Central UI, see [Manage IoT Central organizations.](../core/howto-create-organizations.md) + ## Organizations REST API The IoT Central REST API lets you: |
iot-central | Howto Manage Users Roles With Rest Api | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-manage-users-roles-with-rest-api.md | For the reference documentation for the IoT Central REST API, see [Azure IoT Cen [!INCLUDE [iot-central-postman-collection](../../../includes/iot-central-postman-collection.md)] +To learn how to manage users and roles by using the IoT Central UI, see [Manage users and roles in your IoT Central application.](../core/howto-manage-users-roles.md) + ## Manage roles The REST API lets you list the roles defined in your IoT Central application. Use the following request to retrieve a list of role IDs from your application: |
iot-central | Howto Manage Users Roles | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-manage-users-roles.md | This article describes how you can add, edit, and delete users in your Azure IoT To access and use the **Permissions** section, you must be in the **App Administrator** role for an Azure IoT Central application or in a custom role that includes administration permissions. If you create an Azure IoT Central application, you're automatically added to the **App Administrator** role for that application. +To learn how to manage users and roles by using the IoT Central REST API, see [How to use the IoT Central REST API to manage users and roles.](../core/howto-manage-users-roles-with-rest-api.md) + ## Add users Every user must have a user account before they can sign in and access an application. IoT Central currently supports Microsoft user accounts, Azure Active Directory accounts, and Azure Active Directory service principals. IoT Central doesn't currently support Azure Active Directory groups. To learn more, see [Microsoft account help](https://support.microsoft.com/products/microsoft-account?category=manage-account) and [Quickstart: Add new users to Azure Active Directory](../../active-directory/fundamentals/add-users-azure-active-directory.md). |
iot-central | Howto Query With Rest Api | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-query-with-rest-api.md | For the reference documentation for the IoT Central REST API, see [Azure IoT Cen [!INCLUDE [iot-central-postman-collection](../../../includes/iot-central-postman-collection.md)] +To learn how to query devices by using the IoT Central UI, see [How to use data explorer to analyze device data.](../core/howto-create-analytics.md) + ## Run a query Use the following request to run a query: |
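The query request itself is truncated in this row. As a rough sketch only (both the api-version and the device-template reference syntax are assumptions to be confirmed against the query API reference), the call is a POST with an IoT Central query string in the body:

```
POST https://{your-app-subdomain}.azureiotcentral.com/api/query?api-version=2022-10-31-preview

{
  "query": "SELECT $id, $ts, temperature FROM dt:{device-template-id} WHERE temperature > 25"
}
```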
iot-central | Howto Set Up Template | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-set-up-template.md | The device template has the following sections: To learn more, see [What are device templates?](concepts-device-templates.md). +To learn how to manage device templates by using the IoT Central REST API, see [How to use the IoT Central REST API to manage device templates.](../core/howto-manage-device-templates-with-rest-api.md) + ## Create a device template You have several options to create device templates: |
iot-central | Howto Upload File Rest Api | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-upload-file-rest-api.md | For the reference documentation for the IoT Central REST API, see [Azure IoT Cen [!INCLUDE [iot-central-postman-collection](../../../includes/iot-central-postman-collection.md)] +To learn how to upload files by using the IoT Central UI, see [How to configure file uploads.](../core/howto-configure-file-uploads.md) + ## Prerequisites To test the file upload, install the following prerequisites in your local development environment: |
iot-central | Howto Use Commands | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-use-commands.md | A device can: By default, commands expect a device to be connected and fail if the device can't be reached. If you select the **Queue if offline** option in the device template UI a command can be queued until a device comes online. These *offline commands* are described in a separate section later in this article. +To learn how to manage commands by using the IoT Central REST API, see [How to use the IoT Central REST API to control devices.](../core/howto-control-devices-with-rest-api.md) + ## Define your commands Standard commands are sent to a device to instruct the device to do something. A command can include parameters with additional information. For example, a command to open a valve on a device could have a parameter that specifies how much to open the valve. Commands can also receive a return value when the device completes the command. For example, a command that asks a device to run some diagnostics could receive a diagnostics report as a return value. |
iot-central | Howto Use Properties | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-use-properties.md | Properties represent point-in-time values. For example, a device can use a prope You can also define cloud properties in an Azure IoT Central application. Cloud property values are never exchanged with a device and are out of scope for this article. +To learn how to manage properties by using the IoT Central REST API, see [How to use the IoT Central REST API to control devices.](../core/howto-control-devices-with-rest-api.md) + ## Define your properties Properties are data fields that represent the state of your device. Use properties to represent the durable state of the device, such as the on/off state of a device. Properties can also represent basic device properties, such as the software version of the device. You declare properties as read-only or writable. |
iot-central | Overview Iot Central Admin | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/overview-iot-central-admin.md | An administrator can configure the behavior and appearance of an IoT Central app - [Change application name and URL](howto-administer.md#change-application-name-and-url) - [Customize application UI](howto-customize-ui.md)-- [Move an application to a different pricing plans](howto-faq.yml#how-do-i-move-from-a-free-to-a-standard-pricing-plan-) ## Configure device file upload |
iot-central | Overview Iot Central | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/overview-iot-central.md | IoT Central applications are fully hosted by Microsoft, which reduces the admini ## Pricing -You can create IoT Central application using a 7-day free trial, or use a standard pricing plan. --- Applications you create using the *free* plan are free for seven days and support up to five devices. You can convert them to use a standard pricing plan at any time before they expire.-- Applications you create using the *standard* plan are billed on a per device basis, you can choose either **Standard 0**, **Standard 1**, or **Standard 2** pricing plan with the first two devices being free. Learn more about [IoT Central pricing](https://aka.ms/iotcentral-pricing).+Applications you create using the *standard* plan are billed on a per device basis, you can choose either **Standard 0**, **Standard 1**, or **Standard 2** pricing plan with the first two devices being free. Learn more about [IoT Central pricing](https://aka.ms/iotcentral-pricing). ## User roles |
iot-central | Tutorial Smart Meter App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/energy/tutorial-smart-meter-app.md | The IoT Central platform provides two extensibility options: Continuous Data Exp In this tutorial, you learn how to: -- Create the Smart Meter App for free+- Create the smart meter app - Application walk-through - Clean up resources ## Prerequisites -* There are no specific prerequisites required to deploy this app. -* You can use the free pricing plan or use an Azure subscription. +An active Azure subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. ## Create a smart meter monitoring application |
iot-central | Tutorial Solar Panel App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/energy/tutorial-solar-panel-app.md | The IoT Central platform provides two extensibility options: Continuous Data Exp In this tutorial, you learn how to: > [!div class="checklist"]-> * Create a solar panel app for free +> * Create a solar panel app > * Walk through the application > * Clean up resources ## Prerequisites -* There are no specific prerequisites required to deploy this app. -* You can use the free pricing plan or use an Azure subscription. +An active Azure subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. ## Create a solar panel monitoring application |
iot-central | Tutorial Connected Waste Management | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/government/tutorial-connected-waste-management.md | In this tutorial, you learn how to: ## Prerequisites -* There are no specific prerequisites required to deploy this app. -* You can use the free pricing plan or use an Azure subscription. +An active Azure subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. ## Create connected waste management application |
iot-central | Tutorial Water Consumption Monitoring | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/government/tutorial-water-consumption-monitoring.md | In this tutorial, you learn how to: ## Prerequisites -* There are no specific prerequisites required to deploy this app. -* You can use the free pricing plan or use an Azure subscription. +An active Azure subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. ## Create water consumption monitoring application |
iot-central | Tutorial Water Quality Monitoring | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/government/tutorial-water-quality-monitoring.md | In this tutorial, you learn to: ## Prerequisites -* There are no specific prerequisites required to deploy this app. -* You can use the free pricing plan or use an Azure subscription. +An active Azure subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. ## Create water quality monitoring application |
iot-central | Tutorial Continuous Patient Monitoring | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/healthcare/tutorial-continuous-patient-monitoring.md | In this tutorial, you learn how to: ## Prerequisites -- There are no specific prerequisites required to deploy this app.-- You can use the free pricing plan or use an Azure subscription.+An active Azure subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. ## Create application |
iot-central | Tutorial In Store Analytics Create App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/retail/tutorial-in-store-analytics-create-app.md | In this tutorial, you learn how to: ## Prerequisites -- There are no specific prerequisites required to deploy this app.-- You can use the free pricing plan or use an Azure subscription.+An active Azure subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. ## Create in-store analytics application |
iot-central | Tutorial Iot Central Connected Logistics | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/retail/tutorial-iot-central-connected-logistics.md | In this tutorial, you learn how to: ## Prerequisites -* There are no specific prerequisites required to deploy this app. -* You can use the free pricing plan or use an Azure subscription. +An active Azure subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. ## Create connected logistics application Create the application using following steps: * **Application name**: you can use default suggested name or enter your friendly application name.- * **URL**: you can use suggested default URL or enter your friendly unique memorable URL. Next, the default setting is recommended if you already have an Azure Subscription. You can start with 7-day free trial pricing plan and choose to convert to a standard pricing plan at any time before the free trail expires. + * **URL**: you can use suggested default URL or enter your friendly unique memorable URL. * **Billing Info**: The directory, Azure subscription, and region details are required to provision the resources. * **Create**: Select create at the bottom of the page to deploy your application. |
iot-central | Tutorial Iot Central Digital Distribution Center | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/retail/tutorial-iot-central-digital-distribution-center.md | In this tutorial, you learn how to, ## Prerequisites -* No specific pre-requisites required to deploy this app -* Recommended to have Azure subscription, but you can even try without it +An active Azure subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. ## Create digital distribution center application template |
iot-central | Tutorial Iot Central Smart Inventory Management | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/retail/tutorial-iot-central-smart-inventory-management.md | In this tutorial, you learn how to, ## Prerequisites -* No specific pre-requisites required to deploy this app. -* Recommended to have Azure subscription, but you can even try without it. +An active Azure subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. ## Create smart inventory management application |
iot-central | Tutorial Micro Fulfillment Center | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/retail/tutorial-micro-fulfillment-center.md | In this tutorial, you learn: ## Prerequisites -* There are no specific prerequisites required to deploy this app. -* You can use the free pricing plan or use an Azure subscription. +An active Azure subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. ## Create micro-fulfillment application |
iot-develop | Quickstart Devkit Espressif Esp32 Freertos | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/quickstart-devkit-espressif-esp32-freertos.md | Hardware: - ESPRESSIF [ESP32-Azure IoT Kit](https://www.espressif.com/products/devkits/esp32-azure-kit/overview) - USB 2.0 A male to Micro USB male cable - Wi-Fi 2.4 GHz+- An active Azure subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. ## Prepare the development environment |
iot-develop | Quickstart Devkit Microchip Atsame54 Xpro | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/quickstart-devkit-microchip-atsame54-xpro.md | You'll complete the following tasks: * Ethernet cable * Optional: [Weather Click](https://www.mikroe.com/weather-click) sensor. You can add this sensor to the device to monitor weather conditions. If you don't have this sensor, you can still complete this quickstart. * Optional: [mikroBUS Xplained Pro](https://www.microchip.com/Developmenttools/ProductDetails/ATMBUSADAPTER-XPRO) adapter. Use this adapter to attach the Weather Click sensor to the Microchip E54. If you don't have the sensor and this adapter, you can still complete this quickstart.+* An active Azure subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. ## Prepare the development environment |
iot-develop | Quickstart Devkit Mxchip Az3166 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/quickstart-devkit-mxchip-az3166.md | You'll complete the following tasks: * The [MXCHIP AZ3166 IoT DevKit](https://www.seeedstudio.com/AZ3166-IOT-Developer-Kit.html) (MXCHIP DevKit) * Wi-Fi 2.4 GHz * USB 2.0 A male to Micro USB male cable+* An active Azure subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. ## Prepare the development environment |
iot-develop | Quickstart Devkit Nxp Mimxrt1060 Evk | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/quickstart-devkit-nxp-mimxrt1060-evk.md | You'll complete the following tasks: * USB 2.0 A male to Micro USB male cable * Wired Ethernet access * Ethernet cable+* An active Azure subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. ## Prepare the development environment |
iot-develop | Quickstart Devkit Renesas Rx65n 2Mb | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/quickstart-devkit-renesas-rx65n-2mb.md | You will complete the following tasks: * The included 5V power supply * Ethernet cable * Wired Ethernet access+* An active Azure subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. ## Prepare the development environment |
iot-develop | Quickstart Devkit Renesas Rx65n Cloud Kit | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/quickstart-devkit-renesas-rx65n-cloud-kit.md | You'll complete the following tasks: * The [Renesas RX65N Cloud Kit](https://www.renesas.com/products/microcontrollers-microprocessors/rx-32-bit-performance-efficiency-mcus/rx65n-cloud-kit-renesas-rx65n-cloud-kit) (Renesas RX65N) * two USB 2.0 A male to Mini USB male cables * WiFi 2.4 GHz+* An active Azure subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. ## Prepare the development environment |
iot-develop | Quickstart Devkit Stm B L475e Freertos | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/quickstart-devkit-stm-b-l475e-freertos.md | Hardware: - STM [B-L475E-IOT01A](https://www.st.com/en/evaluation-tools/b-l475e-iot01a.html) devkit - USB 2.0 A male to Micro USB male cable - Wi-Fi 2.4 GHz+- An active Azure subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. ## Prepare the development environment |
iot-develop | Quickstart Devkit Stm B L475e | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/quickstart-devkit-stm-b-l475e.md | You will complete the following tasks: * The [B-L475E-IOT01A](https://www.st.com/en/evaluation-tools/b-l475e-iot01a.html) (STM DevKit) * Wi-Fi 2.4 GHz * USB 2.0 A male to Micro USB male cable+* An active Azure subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. ## Prepare the development environment |
iot-develop | Quickstart Devkit Stm B L4s5i | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/quickstart-devkit-stm-b-l4s5i.md | You'll complete the following tasks: * Use Azure IoT Central to create cloud components, view properties, view device telemetry, and call direct commands :::zone pivot="iot-toolset-cmake"+ ## Prerequisites * A PC running Windows 10 You'll complete the following tasks: * The [B-L4S5I-IOT01A](https://www.st.com/en/evaluation-tools/b-l4s5i-iot01a.html) (STM DevKit) * Wi-Fi 2.4 GHz * USB 2.0 A male to Micro USB male cable+* An active Azure subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. ## Prepare the development environment |
iot-hub-device-update | Device Update Configure Repo | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub-device-update/device-update-configure-repo.md | + + Title: 'Configure package repository for package updates | Microsoft Docs' +description: Follow an example to configure package repository for package updates. ++ Last updated : 8/8/2022++++# Introduction to configuring package repository ++This article describes how to configure or modify the source package repository used with [Package updates](device-update-ubuntu-agent.md). ++For example: +- You need to deliver over-the-air updates to your devices from a private package repository with approved versions of libraries and components +- You need devices to get packages from a specific vendor's repository ++By following this document, you learn how to configure a package repository using [OSConfig for IoT](https://docs.microsoft.com/azure/osconfig/overview-osconfig-for-iot) and deploy package-based updates from that repository to your device fleet using [Device Update for IoT Hub](understand-device-update.md). Package-based updates are targeted updates that alter only a specific component or application on the device. They lead to lower consumption of bandwidth and help reduce the time to download and install the update. Package-based updates also typically allow for less downtime of devices when you apply an update and avoid the overhead of creating images. ++## Prerequisites ++You need an Azure account with an [IoT Hub](../iot-hub/iot-concepts-and-iot-hub.md) and Microsoft Azure Portal or Azure CLI to interact with devices via your IoT Hub. Follow the next steps to get started: +- Create a Device Update account and instance in your IoT Hub. See [how to create it](create-device-update-account.md). +- Install the [IoT Hub Identity Service](https://azure.github.io/iot-identity-service/installation.html) (or skip if [IoT Edge 1.2](https://docs.microsoft.com/azure/iot-edge/how-to-provision-single-device-linux-symmetric?view=iotedge-2020-11&preserve-view=true&tabs=azure-portal%2Cubuntu#install-iot-edge) or higher is already installed on the device). +- Install the Device Update agent on the device. See [how to](device-update-ubuntu-agent.md#manually-prepare-a-device). +- Install the OSConfig agent on the device. See [how to](https://docs.microsoft.com/azure/osconfig/howto-install?tabs=package#step-11-connect-a-device-to-packagesmicrosoftcom). +- Now that both the agent and IoT Hub Identity Service are present on the device, the next step is to configure the device with an identity so it can connect to Azure. See example [here](https://docs.microsoft.com/azure/osconfig/howto-install?tabs=package#job-2--connect-to-azure). ++## How to configure package repository for package updates +Follow the steps below to update Azure IoT Edge on Ubuntu Server 18.04 x64 by configuring a source repository. The tools and concepts in this tutorial still apply even if you plan to use a different OS platform configuration. ++1. Configure the package repository of your choice with OSConfig's configure package repo module. See [how to](https://docs.microsoft.com/azure/osconfig/howto-pmc?tabs=portal%2Csingle#example-1--specify-desired-package-sources). This repository should be the location where you wish to store packages to be downloaded to the device. +2. Upload your packages to the repository configured above. +3. 
Create an [APT manifest](device-update-apt-manifest.md) to provide the Device Update agent with the information it needs to download and install the packages (and their dependencies) from the repository. +4. Follow the steps [here](device-update-ubuntu-agent.md#prerequisites) to do a package update with Device Update. Device Update is used to deploy package updates to a large number of devices at scale. +5. Monitor the results of the package update by following these [steps](device-update-ubuntu-agent.md#monitor-the-update-deployment). |
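The article summarized above configures the source repository through OSConfig rather than by hand. As a sanity check after the desired configuration is applied, you can confirm on the device that APT sees the private repository; this is only an illustrative sketch, and the list-file paths and package name are assumptions, not values from the article.

```bash
# Illustrative check on an Ubuntu 18.04 device after OSConfig applies the package source.
# The paths and <your-package-name> below are placeholders.
cat /etc/apt/sources.list /etc/apt/sources.list.d/*.list   # confirm the private repository entry is present
sudo apt-get update                                        # refresh package indexes from the configured sources
apt-cache policy <your-package-name>                       # verify the expected version is served by that repository
```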
iot-hub | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/policy-reference.md | Title: Built-in policy definitions for Azure IoT Hub description: Lists Azure Policy built-in policy definitions for Azure IoT Hub. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/08/2022 Last updated : 08/16/2022 |
key-vault | Overview Vnet Service Endpoints | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/overview-vnet-service-endpoints.md | Here's a list of trusted services that are allowed to access a key vault if the |Trusted service|Supported usage scenarios| | | |-|Azure Virtual Machines deployment service|[Deploy certificates to VMs from customer-managed Key Vault](/archive/blogs/kv/updated-deploy-certificates-to-vms-from-customer-managed-key-vault).| -|Azure Resource Manager template deployment service|[Pass secure values during deployment](../../azure-resource-manager/templates/key-vault-parameter.md).| -|Azure Disk Encryption volume encryption service|Allow access to BitLocker Key (Windows VM) or DM Passphrase (Linux VM), and Key Encryption Key, during virtual machine deployment. This enables [Azure Disk Encryption](../../security/fundamentals/encryption-overview.md).| -|Azure Backup|Allow backup and restore of relevant keys and secrets during Azure Virtual Machines backup, by using [Azure Backup](../../backup/backup-overview.md).| -|Exchange Online & SharePoint Online|Allow access to customer key for Azure Storage Service Encryption with [Customer Key](/microsoft-365/compliance/customer-key-overview).| -|Azure Information Protection|Allow access to tenant key for [Azure Information Protection.](/azure/information-protection/what-is-information-protection)| -|Azure App Service|App Service is trusted only for [Deploying Azure Web App Certificate through Key Vault](https://azure.github.io/AppService/2016/05/24/Deploying-Azure-Web-App-Certificate-through-Key-Vault.html), for individual app itself, the outbound IPs can be added in Key Vault's IP-based rules| -|Azure SQL Database|[Transparent Data Encryption with Bring Your Own Key support for Azure SQL Database and Azure Synapse Analytics](/azure/azure-sql/database/transparent-data-encryption-byok-overview).| +| Azure API Management|[Deploy certificates for Custom Domain from Key Vault using MSI](../../api-management/api-management-howto-use-managed-service-identity.md#use-ssl-tls-certificate-from-azure-key-vault)| +| Azure App Service|App Service is trusted only for [Deploying Azure Web App Certificate through Key Vault](https://azure.github.io/AppService/2016/05/24/Deploying-Azure-Web-App-Certificate-through-Key-Vault.html), for individual app itself, the outbound IPs can be added in Key Vault's IP-based rules| +| Azure Application Gateway |[Using Key Vault certificates for HTTPS-enabled listeners](../../application-gateway/key-vault-certs.md) +| Azure Backup|Allow backup and restore of relevant keys and secrets during Azure Virtual Machines backup, by using [Azure Backup](../../backup/backup-overview.md).| +| Azure CDN | [Configure HTTPS on an Azure CDN custom domain: Grant Azure CDN access to your key vault](../../cdn/cdn-custom-ssl.md?tabs=option-2-enable-https-with-your-own-certificate#grant-azure-cdn-access-to-your-key-vault)| +| Azure Container Registry|[Registry encryption using customer-managed keys](../../container-registry/container-registry-customer-managed-keys.md) +| Azure Data Factory|[Fetch data store credentials in Key Vault from Data Factory](https://go.microsoft.com/fwlink/?linkid=2109491)| +| Azure Data Lake Store|[Encryption of data in Azure Data Lake Store](../../data-lake-store/data-lake-store-encryption.md) with a customer-managed key.| | Azure Database for MySQL | [Data encryption for Azure Database for MySQL](../../mysql/howto-data-encryption-cli.md) | | Azure Database for 
PostgreSQL Single server | [Data encryption for Azure Database for PostgreSQL Single server](../../postgresql/howto-data-encryption-cli.md) |-|Azure Storage|[Storage Service Encryption using customer-managed keys in Azure Key Vault](../../storage/common/customer-managed-keys-configure-key-vault.md).| -|Azure Data Lake Store|[Encryption of data in Azure Data Lake Store](../../data-lake-store/data-lake-store-encryption.md) with a customer-managed key.| -|Azure Synapse Analytics|[Encryption of data using customer-managed keys in Azure Key Vault](../../synapse-analytics/security/workspaces-encryption.md)| -|Azure Databricks|[Fast, easy, and collaborative Apache Spark–based analytics service](/azure/databricks/scenarios/what-is-azure-databricks)| -|Azure API Management|[Deploy certificates for Custom Domain from Key Vault using MSI](../../api-management/api-management-howto-use-managed-service-identity.md#use-ssl-tls-certificate-from-azure-key-vault)| -|Azure Data Factory|[Fetch data store credentials in Key Vault from Data Factory](https://go.microsoft.com/fwlink/?linkid=2109491)| -|Azure Event Hubs|[Allow access to a key vault for customer-managed keys scenario](../../event-hubs/configure-customer-managed-key.md)| -|Azure Service Bus|[Allow access to a key vault for customer-managed keys scenario](../../service-bus-messaging/configure-customer-managed-key.md)| -|Azure Import/Export| [Use customer-managed keys in Azure Key Vault for Import/Export service](../../import-export/storage-import-export-encryption-key-portal.md) -|Azure Container Registry|[Registry encryption using customer-managed keys](../../container-registry/container-registry-customer-managed-keys.md) -|Azure Application Gateway |[Using Key Vault certificates for HTTPS-enabled listeners](../../application-gateway/key-vault-certs.md) -|Azure Front Door Standard/Premium|[Using Key Vault certificates for HTTPS](../../frontdoor/standard-premium/how-to-configure-https-custom-domain.md#prepare-your-key-vault-and-certificate) -|Azure Front Door Classic|[Using Key Vault certificates for HTTPS](../../frontdoor/front-door-custom-domain-https.md#prepare-your-key-vault-and-certificate) -|Microsoft Purview|[Using credentials for source authentication in Microsoft Purview](../../purview/manage-credentials.md) -|Azure Machine Learning|[Secure Azure Machine Learning in a virtual network](../../machine-learning/how-to-secure-workspace-vnet.md)| +| Azure Databricks|[Fast, easy, and collaborative Apache Spark–based analytics service](/azure/databricks/scenarios/what-is-azure-databricks)| +| Azure Disk Encryption volume encryption service|Allow access to BitLocker Key (Windows VM) or DM Passphrase (Linux VM), and Key Encryption Key, during virtual machine deployment. 
This enables [Azure Disk Encryption](../../security/fundamentals/encryption-overview.md).| +| Azure Event Hubs|[Allow access to a key vault for customer-managed keys scenario](../../event-hubs/configure-customer-managed-key.md)| +| Azure Front Door Classic|[Using Key Vault certificates for HTTPS](../../frontdoor/front-door-custom-domain-https.md#prepare-your-key-vault-and-certificate) +| Azure Front Door Standard/Premium|[Using Key Vault certificates for HTTPS](../../frontdoor/standard-premium/how-to-configure-https-custom-domain.md#prepare-your-key-vault-and-certificate) +| Azure Import/Export| [Use customer-managed keys in Azure Key Vault for Import/Export service](../../import-export/storage-import-export-encryption-key-portal.md) +| Azure Information Protection|Allow access to tenant key for [Azure Information Protection.](/azure/information-protection/what-is-information-protection)| +| Azure Machine Learning|[Secure Azure Machine Learning in a virtual network](../../machine-learning/how-to-secure-workspace-vnet.md)| +| Azure Resource Manager template deployment service|[Pass secure values during deployment](../../azure-resource-manager/templates/key-vault-parameter.md).| +| Azure Service Bus|[Allow access to a key vault for customer-managed keys scenario](../../service-bus-messaging/configure-customer-managed-key.md)| +| Azure SQL Database|[Transparent Data Encryption with Bring Your Own Key support for Azure SQL Database and Azure Synapse Analytics](/azure/azure-sql/database/transparent-data-encryption-byok-overview).| +| Azure Storage|[Storage Service Encryption using customer-managed keys in Azure Key Vault](../../storage/common/customer-managed-keys-configure-key-vault.md).| +| Azure Synapse Analytics|[Encryption of data using customer-managed keys in Azure Key Vault](../../synapse-analytics/security/workspaces-encryption.md)| +| Azure Virtual Machines deployment service|[Deploy certificates to VMs from customer-managed Key Vault](/archive/blogs/kv/updated-deploy-certificates-to-vms-from-customer-managed-key-vault).| +| Exchange Online & SharePoint Online|Allow access to customer key for Azure Storage Service Encryption with [Customer Key](/microsoft-365/compliance/customer-key-overview).| +| Microsoft Purview|[Using credentials for source authentication in Microsoft Purview](../../purview/manage-credentials.md) > [!NOTE] > You must set up the relevant Key Vault access policies to allow the corresponding services to get access to Key Vault. |
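The trusted-services list above only takes effect when the key vault firewall is set to allow trusted Microsoft services, and the closing note reminds you that access policies are still required. A minimal Azure CLI sketch of both steps follows; the vault name, resource group, object ID, and the specific permissions are placeholders that depend on the consuming service.

```azurecli
# Deny public access by default, but let trusted Azure services through the Key Vault firewall.
az keyvault update --name <key-vault-name> --resource-group <resource-group-name> \
  --default-action Deny --bypass AzureServices

# Grant the consuming service's identity the permissions it needs
# (example: key wrap/unwrap for a customer-managed key scenario).
az keyvault set-policy --name <key-vault-name> \
  --object-id <service-identity-object-id> \
  --key-permissions get wrapKey unwrapKey
```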
key-vault | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/policy-reference.md | Title: Built-in policy definitions for Key Vault description: Lists Azure Policy built-in policy definitions for Key Vault. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/08/2022 Last updated : 08/16/2022 |
lighthouse | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lighthouse/samples/policy-reference.md | Title: Built-in policy definitions for Azure Lighthouse description: Lists Azure Policy built-in policy definitions for Azure Lighthouse. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/08/2022 Last updated : 08/16/2022 |
load-balancer | Load Balancer Standard Virtual Machine Scale Sets | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/load-balancer-standard-virtual-machine-scale-sets.md | When you use the virtual machine scale set in the back-end pool of the load bala ## Virtual Machine Scale Set Instance-level IPs -When virtual machine scale sets with [public IPs per instance](../virtual-machine-scale-sets/virtual-machine-scale-sets-networking.md) are created with a load balancer in front, the SKU of the instance IPs is determined by the SKU of the Load Balancer (i.e. Basic or Standard). Note that when using a Standard Load Balancer, the individual instance IPs are all of type Standard "no-zone" (though the Load Balancer frontend could be zonal or zone-redundant). +When virtual machine scale sets with [public IPs per instance](../virtual-machine-scale-sets/virtual-machine-scale-sets-networking.md) are created with a load balancer in front, the SKU of the instance IPs is determined by the SKU of the Load Balancer (i.e. Basic or Standard). ## Outbound rules |
logic-apps | Create Single Tenant Workflows Azure Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/create-single-tenant-workflows-azure-portal.md | As you progress, you'll complete these high-level tasks: * To deploy your **Logic App (Standard)** resource to an [App Service Environment v3 (ASEv3)](../app-service/environment/overview.md), you have to create this environment resource first. You can then select this environment as the deployment location when you create your logic app resource. For more information, review [Resources types and environments](single-tenant-overview-compare.md#resource-environment-differences) and [Create an App Service Environment](../app-service/environment/creation.md). +## Best practices and recommendations ++For optimal designer responsiveness and performance, review and follow these guidelines: ++- Use no more than 50 actions per workflow. Exceeding this number of actions increases the likelihood of slower designer performance. ++- Consider splitting business logic into multiple workflows where necessary. ++- Have no more than 10-15 workflows per logic app resource. + <a name="create-logic-app-resource"></a> ## Create a Standard logic app resource |
logic-apps | Logic Apps Using Sap Connector | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-using-sap-connector.md | An ISE provides access to resources that are protected by an Azure virtual netwo The following list describes the prerequisites for the SAP client library that you're using with the connector: -* Make sure that you install the latest version, [SAP Connector (NCo 3.0) for Microsoft .NET 3.0.24.0 compiled with .NET Framework 4.0 - Windows 64-bit (x64)](https://support.sap.com/en/product/connectors/msnet.html). Earlier versions of SAP NCo might experience the following issues: +* Make sure that you install the latest version, [SAP Connector (NCo 3.0) for Microsoft .NET 3.0.25.0 compiled with .NET Framework 4.0 - Windows 64-bit (x64)](https://support.sap.com/en/product/connectors/msnet.html). Earlier versions of SAP NCo might experience the following issues: * When more than one IDoc message is sent at the same time, this condition blocks all later messages that are sent to the SAP destination, causing messages to time out. The following list describes the prerequisites for the SAP client library that y * The on-premises data gateway (June 2021 release) depends on the `SAP.Middleware.Connector.RfcConfigParameters.Dispose()` method in SAP NCo to free up resources. + * After you upgrade the SAP server environment, you get the following exception message: 'The only destination <some-GUID> available failed when retrieving metadata from <SAP-system-ID> -- see log for details'. + * You must have the 64-bit version of the SAP client library installed, because the data gateway only runs on 64-bit systems. Installing the unsupported 32-bit version results in a "bad image" error. * From the client library's default installation folder, copy the assembly (.dll) files to another location, based on your scenario as follows: The following example is an RFC call with a table parameter. This example call a <STFC_WRITE_TO_TCPIC xmlns="http://Microsoft.LobServices.Sap/2007/03/Rfc/"> <RESTART_QNAME>exampleQName</RESTART_QNAME> <TCPICDAT>- <ABAPTEXT xmlns="http://Microsoft.LobServices.Sap/2007/03/Rfc/"> + <ABAPTEXT xmlns="http://Microsoft.LobServices.Sap/2007/03/Types/Rfc/"> <LINE>exampleFieldInput1</LINE> </ABAPTEXT>- <ABAPTEXT xmlns="http://Microsoft.LobServices.Sap/2007/03/Rfc/"> + <ABAPTEXT xmlns="http://Microsoft.LobServices.Sap/2007/03/Types/Rfc/"> <LINE>exampleFieldInput2</LINE> </ABAPTEXT>- <ABAPTEXT xmlns="http://Microsoft.LobServices.Sap/2007/03/Rfc/"> + <ABAPTEXT xmlns="http://Microsoft.LobServices.Sap/2007/03/Types/Rfc/"> <LINE>exampleFieldInput3</LINE> </ABAPTEXT> </TCPICDAT> |
logic-apps | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/policy-reference.md | Title: Built-in policy definitions for Azure Logic Apps description: Lists Azure Policy built-in policy definitions for Azure Logic Apps. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/08/2022 Last updated : 08/16/2022 ms.suite: integration |
logic-apps | Update Consumption Workflow Schema | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/update-consumption-workflow-schema.md | + + Title: Update Consumption workflows to latest workflow schema +description: Update Consumption logic app workflows to the latest Workflow Definition Language schema in Azure Logic Apps. ++ms.suite: integration ++ Last updated : 08/15/2022+++# Update Consumption logic app workflows to latest Workflow Definition Language schema version in Azure Logic Apps ++If you have a Consumption logic app workflow that uses an older Workflow Definition Language schema, you can update your workflow to use the newest schema. This capability applies only to Consumption logic app workflows. ++## Best practices ++The following list includes some best practices for updating your logic app workflows to the latest schema: ++* Don't overwrite your original workflow until after you finish your testing and confirm that your updated workflow works as expected. ++* Copy the updated script to a new logic app workflow. ++* Test your workflow *before* you deploy to production. ++* After you finish and confirm a successful migration, update your logic app workflows to use the latest [managed connectors for Azure Logic Apps](/connectors/connector-reference/connector-reference-logicapps-connectors) where possible. For example, replace older versions of the Dropbox connector with the latest version. ++## Update workflow schema ++When you select the option to update the schema, Azure Logic Apps automatically runs the migration steps and provides the code output for you. You can use this output to update your workflow definition. However, before you update your workflow definition using this output, make sure that you review and follow the best practices as described in the [Best practices](#best-practices) section. ++1. In the [Azure portal](https://portal.azure.com), open your logic app resource. ++1. On your logic app's navigation menu, select **Overview**. On the toolbar, select **Update Schema**. ++ > [!NOTE] + > + > If the **Update Schema** command is unavailable, your workflow already uses the current schema. ++  ++ The **Update Schema** pane opens to show a link to a document that describes the improvements in the new schema. ++## Next steps ++* [Review Workflow Definition Language schema updates - June 1, 2016](../logic-apps/logic-apps-schema-2016-04-01.md) |
machine-learning | Concept Compute Target | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-compute-target.md | When performing inference, Azure Machine Learning creates a Docker container tha [!INCLUDE [aml-deploy-target](../../includes/aml-compute-target-deploy.md)] -Learn [where and how to deploy your model to a compute target](how-to-deploy-and-where.md). +Learn [where and how to deploy your model to a compute target](how-to-deploy-managed-online-endpoints.md). <a name="amlcompute"></a> ## Azure Machine Learning compute (managed) For more information, see [set up compute targets for model training and deploym Learn how to: * [Use a compute target to train your model](how-to-set-up-training-targets.md)-* [Deploy your model to a compute target](how-to-deploy-and-where.md) +* [Deploy your model to a compute target](how-to-deploy-managed-online-endpoints.md) |
machine-learning | Concept Endpoints | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-endpoints.md | You can use the following options for input data when invoking a batch endpoint: > [!NOTE] > - If you are using existing V1 FileDataset for batch endpoint, we recommend migrating them to V2 data assets and refer to them directly when invoking batch endpoints. Currently only data assets of type `uri_folder` or `uri_file` are supported. Batch endpoints created with GA CLIv2 (2.4.0 and newer) or GA REST API (2022-05-01 and newer) will not support V1 Dataset. > - You can also extract the URI or path on datastore extracted from V1 FileDataset by using `az ml dataset show` command with `--query` parameter and use that information for invoke.-> - While Batch endpoints created with earlier APIs will continue to support V1 FileDataset, we will be adding further V2 data assets support with the latest API versions for even more usability and flexibility. For more information on V2 data assets, see [Work with data using SDK v2 (preview)](how-to-use-data.md). For more information on the new V2 experience, see [What is v2](concept-v2.md). +> - While Batch endpoints created with earlier APIs will continue to support V1 FileDataset, we will be adding further V2 data assets support with the latest API versions for even more usability and flexibility. For more information on V2 data assets, see [Work with data using SDK v2 (preview)](how-to-read-write-data-v2.md). For more information on the new V2 experience, see [What is v2](concept-v2.md). For more information on supported input options, see [Batch scoring with batch endpoint](how-to-use-batch-endpoint.md#invoke-the-batch-endpoint-with-different-input-options). |
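The note above recommends referencing V2 data assets (or a datastore path) directly when invoking a batch endpoint. A hedged CLI v2 sketch is shown below; the endpoint name, data asset name and version, and datastore path are placeholders, it assumes workspace defaults are already configured, and the exact `--input` syntax should be confirmed against the batch endpoint how-to linked in the row.

```azurecli
# Invoke a batch endpoint with a registered V2 data asset (uri_file or uri_folder)...
az ml batch-endpoint invoke --name <endpoint-name> \
  --input azureml:<data-asset-name>:<version>

# ...or point directly at a path on a registered datastore.
az ml batch-endpoint invoke --name <endpoint-name> \
  --input azureml://datastores/<datastore-name>/paths/<path-on-datastore>
```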
machine-learning | Concept Enterprise Security | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-enterprise-security.md | You can also configure managed identities for use with Azure Machine Learning co > [!TIP] > There are some exceptions to the use of Azure AD and Azure RBAC within Azure Machine Learning: > * You can optionally enable __SSH__ access to compute resources such as Azure Machine Learning compute instance and compute cluster. SSH access is based on public/private key pairs, not Azure AD. SSH access is not governed by Azure RBAC.-> * You can authenticate to models deployed as web services (inference endpoints) using __key__ or __token__-based authentication. Keys are static strings, while tokens are retrieved using an Azure AD security object. For more information, see [Configure authentication for models deployed as a web service](how-to-authenticate-web-service.md). +> * You can authenticate to models deployed as online endpoints using __key__ or __token__-based authentication. Keys are static strings, while tokens are retrieved using an Azure AD security object. For more information, see [How to authenticate online endpoints](how-to-authenticate-online-endpoint.md). For more information, see the following articles: * [Authentication for Azure Machine Learning workspace](how-to-setup-authentication.md) When deploying models as web services, you can enable transport-layer security ( * [Azure Machine Learning best practices for enterprise security](/azure/cloud-adoption-framework/ready/azure-best-practices/ai-machine-learning-enterprise-security) * [Secure Azure Machine Learning web services with TLS](./v1/how-to-secure-web-service.md)-* [Consume a Machine Learning model deployed as a web service](how-to-consume-web-service.md) * [Use Azure Machine Learning with Azure Firewall](how-to-access-azureml-behind-firewall.md) * [Use Azure Machine Learning with Azure Virtual Network](how-to-network-security-overview.md) * [Data encryption at rest and in transit](concept-data-encryption.md) |
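For the key- and token-based authentication called out above for online endpoints, a short CLI v2 sketch follows; the endpoint name and request file are placeholders, and it assumes the `ml` extension with workspace defaults already configured.

```azurecli
# Retrieve the endpoint's keys (or a token, depending on its auth mode).
az ml online-endpoint get-credentials --name <endpoint-name>

# Score against the endpoint; sample-request.json is a placeholder for your input payload.
az ml online-endpoint invoke --name <endpoint-name> --request-file sample-request.json
```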
machine-learning | Concept Model Management And Deployment | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-model-management-and-deployment.md | Before you deploy a model into production, it's packaged into a Docker image. In If you run into problems with the deployment, you can deploy on your local development environment for troubleshooting and debugging. -For more information, see [Deploy models](how-to-deploy-and-where.md#registermodel) and [Troubleshooting deployments](how-to-troubleshoot-deployment.md). +For more information, see [How to troubleshoot online endpoints](how-to-troubleshoot-online-endpoints.md). ### Convert and optimize models To deploy the model to an endpoint, you must provide the following items: * Dependencies required to use the model. Examples are a script that accepts requests and invokes the model and conda dependencies. * Deployment configuration that describes how and where to deploy the model. -For more information, see [Deploy models](how-to-deploy-and-where.md). +For more information, see [Deploy online endpoints](how-to-deploy-managed-online-endpoints.md). #### Controlled rollout Monitoring enables you to understand what data is being sent to your model, and This information helps you understand how your model is being used. The collected input data might also be useful in training future versions of the model. -For more information, see [Enable model data collection](how-to-enable-data-collection.md). +For more information, see [Enable model data collection](v1/how-to-enable-data-collection.md). ## Retrain your model on new data You can also use Azure Data Factory to create a data ingestion pipeline that pre Learn more by reading and exploring the following resources: + [Learning path: End-to-end MLOps with Azure Machine Learning](/learn/paths/build-first-machine-operations-workflow/)-+ [How and where to deploy models](how-to-deploy-and-where.md) with Machine Learning ++ [How to deploy a model to an online endpoint](how-to-deploy-managed-online-endpoints.md) with Machine Learning + [Tutorial: Train and deploy a model](tutorial-train-deploy-notebook.md) + [End-to-end MLOps examples repo](https://github.com/microsoft/MLOps) + [CI/CD of machine learning models with Azure Pipelines](/azure/devops/pipelines/targets/azure-machine-learning)-+ Create clients that [consume a deployed model](how-to-consume-web-service.md) + [Machine learning at scale](/azure/architecture/data-guide/big-data/machine-learning-at-scale) + [Azure AI reference architectures and best practices repo](https://github.com/microsoft/AI) |
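The deployment items listed above (the model, its dependencies such as a scoring script and environment, and a deployment configuration) come together in the endpoint/deployment pair shown in this minimal CLI v2 sketch; the YAML file names and endpoint name are placeholders, and the YAML contents are defined in the linked how-to.

```azurecli
# Create the endpoint, then a deployment that bundles model + environment + scoring script,
# and route all traffic to it. File and resource names are placeholders.
az ml online-endpoint create --name <endpoint-name> -f endpoint.yml
az ml online-deployment create --name blue --endpoint-name <endpoint-name> \
  -f blue-deployment.yml --all-traffic
```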
machine-learning | Release Notes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/data-science-virtual-machine/release-notes.md | Azure portal users will always find the latest image available for provisioning See the [list of known issues](reference-known-issues.md) to learn about known bugs and workarounds. +## August 16, 2022 +[Data Science VM – Ubuntu 20.04](https://azuremarketplace.microsoft.com/marketplace/apps/microsoft-dsvm.ubuntu-2004?tab=Overview) ++Version `22.08.11` ++Main changes: ++- Jupyterlab upgraded to version `3.4.5` +- `matplotlib`, `azureml-mlflow` added to `sdkv2` environment. +- Jupyterhub spawner reconfigured to root environment. + ## July 28, 2022 [Data Science VM – Ubuntu 20.04](https://azuremarketplace.microsoft.com/marketplace/apps/microsoft-dsvm.ubuntu-2004?tab=Overview) |
machine-learning | How To Add Users | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-add-users.md | To add a custom role, you must have `Microsoft.Authorization/roleAssignments/wri 1. Open your workspace in [Azure Machine Learning studio](https://ml.azure.com) 1. Open the menu on the top right and select **View all properties in Azure Portal**. You'll use Azure portal for all the rest of the steps in this article.-1. Select the **Subscription** link in the middle of the page. +1. Select the **Resource group** link in the middle of the page. 1. On the left, select **Access control (IAM)**. 1. At the top, select **+ Add > Add custom role**.-1. For the **Custom role name**, type **Labeler**. -1. In the **Description** box, add **Labeler access for data labeling projects**. +1. For the **Custom role name**, type the name you want to use. For example, **Labeler**. +1. In the **Description** box, add a description. For example, **Labeler access for data labeling projects**. 1. Select **Start from JSON**. 1. At the bottom of the page, select **Next**. 1. Don't do anything for the **Permissions** tab, you'll add permissions in a later step. Select **Next**. To add a custom role, you must have `Microsoft.Authorization/roleAssignments/wri :::image type="content" source="media/how-to-add-users/replace-lines.png" alt-text="Create custom role: select lines to replace them in the editor."::: -1. Replace these two lines with: - - ```json - "actions": [ - "Microsoft.MachineLearningServices/workspaces/read", - "Microsoft.MachineLearningServices/workspaces/labeling/projects/read", - "Microsoft.MachineLearningServices/workspaces/labeling/projects/summary/read", - "Microsoft.MachineLearningServices/workspaces/labeling/labels/read", - "Microsoft.MachineLearningServices/workspaces/labeling/labels/write" - ], - "notActions": [ - ], - ``` +1. Replace these two lines with the `Actions` and `NotActions` from the appropriate role listed at [Manage access to an Azure Machine Learning workspace](how-to-assign-roles.md#data-labeling). Make sure to copy from `Actions` through the closing bracket, `],` 1. Select **Save** at the top of the edit box to save your changes. To add a custom role, you must have `Microsoft.Authorization/roleAssignments/wri 1. Select **Create** to create the custom role. 1. Select **OK**. -### Labeling team lead --You may want to create a second role for a labeling team lead. A labeling team lead can reject the labeled dataset and view labeling insights. In addition, this role also allows you to perform the role of a labeler. --To add this custom role, repeat the above steps. Use the name **Labeling Team Lead** and replace the two lines with: --```json - "actions": [ - "Microsoft.MachineLearningServices/workspaces/read", - "Microsoft.MachineLearningServices/workspaces/labeling/labels/read", - "Microsoft.MachineLearningServices/workspaces/labeling/labels/write", - "Microsoft.MachineLearningServices/workspaces/labeling/labels/reject/action", - "Microsoft.MachineLearningServices/workspaces/labeling/projects/read", - "Microsoft.MachineLearningServices/workspaces/labeling/projects/summary/read" - ], - "notActions": [ - "Microsoft.MachineLearningServices/workspaces/labeling/projects/write", - "Microsoft.MachineLearningServices/workspaces/labeling/projects/delete", - "Microsoft.MachineLearningServices/workspaces/labeling/export/action" - ], -``` ## Add guest user |
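The portal steps above can also be scripted: after copying the `Actions`/`NotActions` for the labeler (or labeling team lead) role from the linked article into a JSON definition file, you can create and assign the role with the Azure CLI. The file name, role name, scope, and assignee below are placeholders.

```azurecli
# Create the custom role from a JSON definition file. labeler-role.json must contain the
# Name, AssignableScopes, Actions, and NotActions copied from the linked article.
az role definition create --role-definition ./labeler-role.json

# Assign the new role at the resource-group scope.
az role assignment create --assignee <user-or-group-object-id> --role "Labeler" \
  --scope /subscriptions/<subscription-id>/resourceGroups/<resource-group-name>
```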
machine-learning | How To Create Attach Compute Studio | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-create-attach-compute-studio.md | In this article, learn how to create and manage compute targets in Azure Machine ## What's a compute target? -With Azure Machine Learning, you can train your model on a variety of resources or environments, collectively referred to as [__compute targets__](v1/concept-azure-machine-learning-architecture.md#compute-targets). A compute target can be a local machine or a cloud resource, such as an Azure Machine Learning Compute, Azure HDInsight, or a remote virtual machine. You can also create compute targets for model deployment as described in ["Where and how to deploy your models"](how-to-deploy-and-where.md). +With Azure Machine Learning, you can train your model on a variety of resources or environments, collectively referred to as [__compute targets__](v1/concept-azure-machine-learning-architecture.md#compute-targets). A compute target can be a local machine or a cloud resource, such as an Azure Machine Learning Compute, Azure HDInsight, or a remote virtual machine. You can also create compute targets for model deployment as described in ["Where and how to deploy your models"](how-to-deploy-managed-online-endpoints.md). ## <a id="portal-view"></a>View compute targets myvm = ComputeTarget(workspace=ws, name='my-vm-name') * Use the compute resource to [submit a training run](how-to-set-up-training-targets.md). * Learn how to [efficiently tune hyperparameters](how-to-tune-hyperparameters.md) to build better models.-* Once you have a trained model, learn [how and where to deploy models](how-to-deploy-and-where.md). +* Once you have a trained model, learn [how and where to deploy models](how-to-deploy-managed-online-endpoints.md). * [Use Azure Machine Learning with Azure Virtual Networks](./how-to-network-security-overview.md) |
machine-learning | How To Debug Visual Studio Code | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-debug-visual-studio-code.md | Use the Azure Machine Learning extension to validate, run, and debug your machin 1. Expand your workspace node. 1. Right-click the **Experiments** node and select **Create experiment**. When the prompt appears, provide a name for your experiment. 1. Expand the **Experiments** node, right-click the experiment you want to run and select **Run Experiment**.-1. From the list of options to run your experiment, select **Locally**. -1. **First time use on Windows only**. When prompted to allow File Share, select **Yes**. When you enable file share it allows Docker to mount the directory containing your script to the container. Additionally, it also allows Docker to store the logs and outputs from your run in a temporary directory on your system. +1. From the list of options, select **Locally**. +1. **First time use on Windows only**. When prompted to allow File Share, select **Yes**. When you enable file share, it allows Docker to mount the directory containing your script to the container. Additionally, it also allows Docker to store the logs and outputs from your run in a temporary directory on your system. 1. Select **Yes** to debug your experiment. Otherwise, select **No**. Selecting no will run your experiment locally without attaching to the debugger. 1. Select **Create new Run Configuration** to create your run configuration. The run configuration defines the script you want to run, dependencies, and datasets used. Alternatively, if you already have one, select it from the dropdown. 1. Choose your environment. You can choose from any of the [Azure Machine Learning curated](resource-curated-environments.md) or create your own. For more information on using an Azure Virtual Network with Azure Machine Learni Your ML pipeline steps run Python scripts. These scripts are modified to perform the following actions: -1. Log the IP address of the host that they are running on. You use the IP address to connect the debugger to the script. +1. Log the IP address of the host that they're running on. You use the IP address to connect the debugger to the script. 2. Start the debugpy debug component, and wait for a debugger to connect. if not (args.output_train is None): ### Configure ML pipeline To provide the Python packages needed to start debugpy and get the run context, create an environment-and set `pip_packages=['debugpy', 'azureml-sdk==<SDK-VERSION>']`. Change the SDK version to match the one you are using. The following code snippet demonstrates how to create an environment: +and set `pip_packages=['debugpy', 'azureml-sdk==<SDK-VERSION>']`. Change the SDK version to match the one you're using. The following code snippet demonstrates how to create an environment: ```python # Use a RunConfiguration to specify some additional requirements for this step. Timeout for debug connection: 300 ip_address: 10.3.0.5 ``` -Save the `ip_address` value. It is used in the next section. +Save the `ip_address` value. It's used in the next section. > [!TIP] > You can also find the IP address from the run logs for the child run for this pipeline step. For more information on viewing this information, see [Monitor Azure ML experiment runs and metrics](how-to-log-view-metrics.md). Save the `ip_address` value. It is used in the next section. 
## Debug and troubleshoot deployments -In some cases, you may need to interactively debug the Python code contained in your model deployment. For example, if the entry script is failing and the reason cannot be determined by additional logging. By using VS Code and the debugpy, you can attach to the code running inside the Docker container. +In some cases, you may need to interactively debug the Python code contained in your model deployment. For example, if the entry script is failing and the reason can't be determined by extra logging. By using VS Code and the debugpy, you can attach to the code running inside the Docker container. > [!TIP] > Save time and catch bugs early by debugging managed online endpoints and deployments locally. For more information, see [Debug managed online endpoints locally in Visual Studio Code (preview)](how-to-debug-managed-online-endpoints-visual-studio-code.md). In some cases, you may need to interactively debug the Python code contained in > [!IMPORTANT] > This method of debugging does not work when using `Model.deploy()` and `LocalWebservice.deploy_configuration` to deploy a model locally. Instead, you must create an image using the [Model.package()](/python/api/azureml-core/azureml.core.model.model#package-workspace--models--inference-config-none--generate-dockerfile-false-) method. -Local web service deployments require a working Docker installation on your local system. For more information on using Docker, see the [Docker Documentation](https://docs.docker.com/). Note that when working with compute instances, Docker is already installed. +Local web service deployments require a working Docker installation on your local system. For more information on using Docker, see the [Docker Documentation](https://docs.docker.com/). When working with compute instances, Docker is already installed. ### Configure development environment Local web service deployments require a working Docker installation on your loca package.pull() ``` - Once the image has been created and downloaded (this process may take more than 10 minutes, so please wait patiently), the image path (includes repository, name, and tag, which in this case is also its digest) is finally displayed in a message similar to the following: + Once the image has been created and downloaded (this process may take more than 10 minutes), the image path (includes repository, name, and tag, which in this case is also its digest) is finally displayed in a message similar to the following: ```text Status: Downloaded newer image for myregistry.azurecr.io/package@sha256:<image-digest> Local web service deployments require a working Docker installation on your loca docker run -it --name debug -p 8000:5001 -p 5678:5678 -v <my_local_path_to_score.py>:/var/azureml-app/score.py debug:1 /bin/bash ``` - This attaches your `score.py` locally to the one in the container. Therefore, any changes made in the editor are automatically reflected in the container + This command attaches your `score.py` locally to the one in the container. Therefore, any changes made in the editor are automatically reflected in the container -2. For a better experience, you can go into the container with a new VS code interface. Select the `Docker` extention from the VS Code side bar, find your local container created, in this documentation it's `debug:1`. Right-click this container and select `"Attach Visual Studio Code"`, then a new VS Code interface will be opened automatically, and this interface shows the inside of your created container. +2. 
For a better experience, you can go into the container with a new VS Code interface. Select the `Docker` extension from the VS Code side bar, find the local container you created (in this documentation, it's `debug:1`). Right-click this container and select `"Attach Visual Studio Code"`, then a new VS Code interface will be opened automatically, and this interface shows the inside of your created container.  Local web service deployments require a working Docker installation on your loca  -4. To attach VS Code to debugpy inside the container, open VS Code and use the F5 key or select __Debug__. When prompted, select the __Azure Machine Learning Deployment: Docker Debug__ configuration. You can also select the __Run__ extention icon from the side bar, the __Azure Machine Learning Deployment: Docker Debug__ entry from the Debug dropdown menu, and then use the green arrow to attach the debugger. +4. To attach VS Code to debugpy inside the container, open VS Code, and use the F5 key or select __Debug__. When prompted, select the __Azure Machine Learning Deployment: Docker Debug__ configuration. You can also select the __Run__ extension icon from the side bar, the __Azure Machine Learning Deployment: Docker Debug__ entry from the Debug dropdown menu, and then use the green arrow to attach the debugger.  - After clicking the green arrow and attaching the debugger, in the container VS Code interface you can see some new information: + After you select the green arrow and attach the debugger, in the container VS Code interface you can see some new information:  Now that you've set up VS Code Remote, you can use a compute instance as remote Learn more about troubleshooting: -* [Local model deployment](how-to-troubleshoot-deployment-local.md) -* [Remote model deployment](how-to-troubleshoot-deployment.md) +* [Local model deployment](./v1/how-to-troubleshoot-deployment-local.md) +* [Remote model deployment](./v1/how-to-troubleshoot-deployment.md) * [Machine learning pipelines](how-to-debug-pipelines.md) * [ParallelRunStep](how-to-debug-parallel-run-step.md) |
machine-learning | How To Deploy Batch With Rest | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-batch-with-rest.md | Below are some examples using different types of input data. > - If you want to use local data, you can upload it to Azure Machine Learning registered datastore and use REST API for Cloud data. > - If you are using existing V1 FileDataset for batch endpoint, we recommend migrating them to V2 data assets and refer to them directly when invoking batch endpoints. Currently only data assets of type `uri_folder` or `uri_file` are supported. Batch endpoints created with GA CLIv2 (2.4.0 and newer) or GA REST API (2022-05-01 and newer) will not support V1 Dataset. > - You can also extract the URI or path on datastore extracted from V1 FileDataset by using `az ml dataset show` command with `--query` parameter and use that information for invoke.-> - While Batch endpoints created with earlier APIs will continue to support V1 FileDataset, we will be adding further V2 data assets support with the latest API versions for even more usability and flexibility. For more information on V2 data assets, see [Work with data using SDK v2 (preview)](how-to-use-data.md). For more information on the new V2 experience, see [What is v2](concept-v2.md). +> - While Batch endpoints created with earlier APIs will continue to support V1 FileDataset, we will be adding further V2 data assets support with the latest API versions for even more usability and flexibility. For more information on V2 data assets, see [Work with data using SDK v2 (preview)](how-to-read-write-data-v2.md). For more information on the new V2 experience, see [What is v2](concept-v2.md). #### Configure the output location and overwrite settings |
machine-learning | How To Generate Automl Training Code | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-generate-automl-training-code.md | With the generated model's training code you can, * **Track/version/audit** trained models. Store versioned code to track what specific training code is used with the model that's to be deployed to production. * **Customize** the training code by changing hyperparameters or applying your ML and algorithms skills/experience, and retrain a new model with your customized code. -You can generate the code for automated ML experiments with task types classification, regression, and time-series forecasting. --> [!WARNING] -> Computer vision models and natural language processing based models in AutoML do not currently support model's training code generation. --The following diagram illustrates that you can enable code generation for any AutoML created model from the Azure Machine Learning studio UI or with the Azure Machine Learning SDK. First select a model. The model you selected will be highlighted, then Azure Machine Learning copies the code files used to create the model, and displays them into your notebooks shared folder. From here, you can view and customize the code as needed. +The following diagram illustrates that you can generate the code for automated ML experiments with all task types. First select a model. The model you selected will be highlighted, then Azure Machine Learning copies the code files used to create the model, and displays them into your notebooks shared folder. From here, you can view and customize the code as needed. :::image type="content" source="media/how-to-generate-automl-training-code/code-generation-demonstration.png" alt-text="Screenshot showing models tab, as well as having a model selected, as explained in the above text."::: |
machine-learning | How To Manage Workspace Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-manage-workspace-cli.md | -In this article, you learn how to create and manage Azure Machine Learning workspaces using the Azure CLI. The Azure CLI provides commands for managing Azure resources and is designed to get you working quickly with Azure, with an emphasis on automation. The machine learning extension to the CLI provides commands for working with Azure Machine Learning resources. -> [!NOTE] -> Examples in this article refer to both CLI v1 and CLI v2 versions. If no version is specified for a command, it will work with either the v1 or CLI v2. +> [!div class="op_single_selector" title1="Select the version of Azure Machine Learning SDK or CLI extension you are using:"] +> * [v1](v1/how-to-manage-workspace-cli.md) +> * [v2 (current version)](how-to-manage-workspace-cli.md) ++In this article, you learn how to create and manage Azure Machine Learning workspaces using the Azure CLI. The Azure CLI provides commands for managing Azure resources and is designed to get you working quickly with Azure, with an emphasis on automation. The machine learning extension to the CLI provides commands for working with Azure Machine Learning resources. ## Prerequisites -* An **Azure subscription**. If you do not have one, try the [free or paid version of Azure Machine Learning](https://azure.microsoft.com/free/). +* An **Azure subscription**. If you don't have one, try the [free or paid version of Azure Machine Learning](https://azure.microsoft.com/free/). * To use the CLI commands in this document from your **local environment**, you need the [Azure CLI](/cli/azure/install-azure-cli). In this article, you learn how to create and manage Azure Machine Learning works Some of the Azure CLI commands communicate with Azure Resource Manager over the internet. This communication is secured using HTTPS/TLS 1.2. -# [CLI v1](#tab/vnetpleconfigurationsv1cli) --With the Azure Machine Learning CLI extension v1 (`azure-cli-ml`), only some of the commands communicate with the Azure Resource Manager. Specifically, commands that create, update, delete, list, or show Azure resources. Operations such as submitting a training job communicate directly with the Azure Machine Learning workspace. **If your workspace is [secured with a private endpoint](how-to-configure-private-link.md), that is enough to secure commands provided by the `azure-cli-ml` extension**. +With the Azure Machine Learning CLI extension v2 ('ml'), all of the commands communicate with the Azure Resource Manager. This includes operational data such as YAML parameters and metadata. If your Azure Machine Learning workspace is public (that is, not behind a virtual network), then there's no extra configuration required. Communications are secured using HTTPS/TLS 1.2. -# [CLI v2](#tab/vnetpleconfigurationsv2cli) +If your Azure Machine Learning workspace uses a private endpoint and virtual network and you're using CLI v2, choose one of the following configurations to use: -With the Azure Machine Learning CLI extension v2 ('ml'), all of the commands communicate with the Azure Resource Manager. This includes operational data such as YAML parameters and metadata. If your Azure Machine Learning workspace is public (that is, not behind a virtual network), then there is no additional configuration required. Communications are secured using HTTPS/TLS 1.2. 
--If your Azure Machine Learning workspace uses a private endpoint and virtual network and you are using CLI v2, choose one of the following configurations to use: --* If you are __OK__ with the CLI v2 communication over the public internet, use the following `--public-network-access` parameter for the `az ml workspace update` command to enable public network access. For example, the following command updates a workspace for public network access: +* If you're __OK__ with the CLI v2 communication over the public internet, use the following `--public-network-access` parameter for the `az ml workspace update` command to enable public network access. For example, the following command updates a workspace for public network access: ```azurecli az ml workspace update --name myworkspace --public-network-access enabled If your Azure Machine Learning workspace uses a private endpoint and virtual net For more information on CLI v2 communication, see [Install and set up the CLI](how-to-configure-cli.md#secure-communications). -- ## Connect the CLI to your Azure subscription > [!IMPORTANT] To create a new workspace where the __services are automatically created__, use az ml workspace create -w <workspace-name> -g <resource-group-name> ``` -# [Bring existing resources (CLI v1)](#tab/bringexistingresources1) ---To create a workspace that uses existing resources, you must provide the resource ID for each resource. You can get this ID either via the 'properties' tab on each resource via the Azure portal, or by running the following commands using the Azure CLI. +# [Bring existing resources](#tab/bringexistingresources) - * **Azure Storage Account**: - `az storage account show --name <storage-account-name> --query "id"` - * **Azure Application Insights**: - `az monitor app-insights component show --app <application-insight-name> -g <resource-group-name> --query "id"` - * **Azure Key Vault**: - `az keyvault show --name <key-vault-name> --query "ID"` - * **Azure Container Registry**: - `az acr show --name <acr-name> -g <resource-group-name> --query "id"` -- The returned resource ID has the following format: `"/subscriptions/<service-GUID>/resourceGroups/<resource-group-name>/providers/<provider>/<subresource>/<resource-name>"`. --Once you have the IDs for the resource(s) that you want to use with the workspace, use the base `az workspace create -w <workspace-name> -g <resource-group-name>` command and add the parameter(s) and ID(s) for the existing resources. For example, the following command creates a workspace that uses an existing container registry: --```azurecli-interactive -az ml workspace create -w <workspace-name> - -g <resource-group-name> - --container-registry "/subscriptions/<service-GUID>/resourceGroups/<resource-group-name>/providers/Microsoft.ContainerRegistry/registries/<acr-name>" -``` --# [Bring existing resources (CLI v2)](#tab/bringexistingresources2) ---To create a new workspace while bringing existing associated resources using the CLI, you will first have to define how your workspace should be configured in a configuration file. +To create a new workspace while bringing existing associated resources using the CLI, you'll first have to define how your workspace should be configured in a configuration file. :::code language="YAML" source="~/azureml-examples-main/cli/resources/workspace/with-existing-resources.yml"::: If attaching existing resources, you must provide the ID for the resources. 
You * **Azure Container Registry**: `az acr show --name <acr-name> -g <resource-group-name> --query "id"` -The Resource ID value looks similar to the following: `"/subscriptions/<service-GUID>/resourceGroups/<resource-group-name>/providers/<provider>/<subresource>/<resource-name>"`. +The Resource ID value looks similar to the following text: `"/subscriptions/<service-GUID>/resourceGroups/<resource-group-name>/providers/<provider>/<subresource>/<resource-name>"`. The output of the workspace creation command is similar to the following JSON. Y Dependent on your use case and organizational requirements, you can choose to configure Azure Machine Learning using private network connectivity. You can use the Azure CLI to deploy a workspace and a Private link endpoint for the workspace resource. For more information on using a private endpoint and virtual network (VNet) with your workspace, see [Virtual network isolation and privacy overview](how-to-network-security-overview.md). For complex resource configurations, also refer to template based deployment options including [Azure Resource Manager](how-to-create-workspace-template.md). -# [CLI v1](#tab/vnetpleconfigurationsv1cli) ---If you want to restrict access to your workspace to a virtual network, you can use the following parameters as part of the `az ml workspace create` command or use the `az ml workspace private-endpoint` commands. --```azurecli-interactive -az ml workspace create -w <workspace-name> - -g <resource-group-name> - --pe-name "<pe name>" - --pe-auto-approval "<pe-autoapproval>" - --pe-resource-group "<pe name>" - --pe-vnet-name "<pe name>" - --pe-subnet-name "<pe name>" -``` --* `--pe-name`: The name of the private endpoint that is created. -* `--pe-auto-approval`: Whether private endpoint connections to the workspace should be automatically approved. -* `--pe-resource-group`: The resource group to create the private endpoint in. Must be the same group that contains the virtual network. -* `--pe-vnet-name`: The existing virtual network to create the private endpoint in. -* `--pe-subnet-name`: The name of the subnet to create the private endpoint in. The default value is `default`. --For more details on how to use these commands, see the [CLI reference pages](/cli/azure/ml(v1)/workspace). --# [CLI v2](#tab/vnetpleconfigurationsv2cli) ---When using private link, your workspace cannot use Azure Container Registry to build docker images. Hence, you must set the image_build_compute property to a CPU compute cluster name to use for Docker image environment building. You can also specify whether the private link workspace should be accessible over the internet using the public_network_access property. +When using private link, your workspace can't use Azure Container Registry to build docker images. Hence, you must set the image_build_compute property to a CPU compute cluster name to use for Docker image environment building. You can also specify whether the private link workspace should be accessible over the internet using the public_network_access property. :::code language="YAML" source="~/azureml-examples-main/cli/resources/workspace/privatelink.yml"::: az network private-endpoint dns-zone-group add \ --zone-name 'privatelink.notebooks.azure.net' ``` -- ### Customer-managed key and high business impact workspace -By default, metadata for the workspace is stored in an Azure Cosmos DB instance that Microsoft maintains. This data is encrypted using Microsoft-managed keys. 
Instead of using the Microsoft-managed key, you can also provide your own key. Doing so creates an additional set of resources in your Azure subscription to store your data. +By default, metadata for the workspace is stored in an Azure Cosmos DB instance that Microsoft maintains. This data is encrypted using Microsoft-managed keys. Instead of using the Microsoft-managed key, you can also provide your own key. Doing so creates an extra set of resources in your Azure subscription to store your data. To learn more about the resources that are created when you bring your own key for encryption, see [Data encryption with Azure Machine Learning](./concept-data-encryption.md#azure-cosmos-db). -Below CLI commands provide examples for creating a workspace that uses customer-managed keys for encryption using the CLI v1 and CLI v2 versions. --# [CLI v1](#tab/vnetpleconfigurationsv1cli) ---Use the `--cmk-keyvault` parameter to specify the Azure Key Vault that contains the key, and `--resource-cmk-uri` to specify the resource ID and uri of the key within the vault. --To [limit the data that Microsoft collects](./concept-data-encryption.md#encryption-at-rest) on your workspace, you can additionally specify the `--hbi-workspace` parameter. --```azurecli-interactive -az ml workspace create -w <workspace-name> - -g <resource-group-name> - --cmk-keyvault "<cmk keyvault name>" - --resource-cmk-uri "<resource cmk uri>" - --hbi-workspace -``` --# [CLI v2](#tab/vnetpleconfigurationsv2cli) -- Use the `customer_managed_key` parameter and containing `key_vault` and `key_uri` parameters, to specify the resource ID and uri of the key within the vault. To [limit the data that Microsoft collects](./concept-data-encryption.md#encryption-at-rest) on your workspace, you can additionally specify the `hbi_workspace` property. Then, you can reference this configuration file as part of the workspace creatio ```azurecli-interactive az ml workspace create -g <resource-group-name> --file cmk.yml ```- > [!NOTE] > Authorize the __Machine Learning App__ (in Identity and Access Management) with contributor permissions on your subscription to manage the data encryption additional resources. For more information on customer-managed keys and high business impact workspace To get information about a workspace, use the following command: -# [CLI v1](#tab/workspaceupdatev1) ---```azurecli-interactive -az ml workspace show -w <workspace-name> -g <resource-group-name> -``` --# [CLI v2](#tab/workspaceupdatev2) -- ```azurecli-interactive az ml workspace show -n <workspace-name> -g <resource-group-name> ``` -- For more information, see the [az ml workspace show](/cli/azure/ml/workspace#az-ml-workspace-show) documentation. ### Update a workspace To update a workspace, use the following command: -# [CLI v1](#tab/workspaceupdatev1) ---```azurecli-interactive -az ml workspace update -w <workspace-name> -g <resource-group-name> -``` --# [CLI v2](#tab/workspaceupdatev2) -- ```azurecli-interactive az ml workspace update -n <workspace-name> -g <resource-group-name> ``` --- For more information, see the [az ml workspace update](/cli/azure/ml/workspace#az-ml-workspace-update) documentation. ### Sync keys for dependent resources If you change access keys for one of the resources used by your workspace, it takes around an hour for the workspace to synchronize to the new key. 
To force the workspace to sync the new keys immediately, use the following command: -# [CLI v1](#tab/workspacesynckeysv1) ---```azurecli-interactive -az ml workspace sync-keys -w <workspace-name> -g <resource-group-name> -``` --# [CLI v2](#tab/workspacesynckeysv2) -- ```azurecli-interactive az ml workspace sync-keys -n <workspace-name> -g <resource-group-name> ``` -- For more information on changing keys, see [Regenerate storage access keys](how-to-change-storage-access-key.md). For more information on the sync-keys command, see [az ml workspace sync-keys](/cli/azure/ml/workspace#az-ml-workspace-sync-keys). For more information on the sync-keys command, see [az ml workspace sync-keys](/ [!INCLUDE [machine-learning-delete-workspace](../../includes/machine-learning-delete-workspace.md)] -To delete a workspace after it is no longer needed, use the following command: --# [CLI v1](#tab/workspacedeletev1) ----```azurecli-interactive -az ml workspace delete -w <workspace-name> -g <resource-group-name> -``` --# [CLI v2](#tab/workspacedeletev2) -+To delete a workspace after it's no longer needed, use the following command: ```azurecli-interactive az ml workspace delete -n <workspace-name> -g <resource-group-name> ``` --- > [!IMPORTANT] > Deleting a workspace does not delete the application insight, storage account, key vault, or container registry used by the workspace. az group delete -g <resource-group-name> For more information, see the [az ml workspace delete](/cli/azure/ml/workspace#az-ml-workspace-delete) documentation. -If you accidentally deleted your workspace, are still able to retrieve your notebooks. Please refer to [this documentation](./how-to-high-availability-machine-learning.md#workspace-deletion). +If you accidentally deleted your workspace, you are still able to retrieve your notebooks. For more information, see the [workspace deletion](./how-to-high-availability-machine-learning.md#workspace-deletion) section of the disaster recovery article. ## Troubleshooting |
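For comparison with the CLI v2 commands covered in the row above, here is a minimal, hedged sketch of the same workspace lifecycle using the Python SDK v2 (`azure-ai-ml`). The subscription, resource group, workspace names, and the container registry resource ID are placeholders, and exact parameter support may vary by SDK version.

```python
# Minimal sketch (assumes azure-ai-ml and azure-identity are installed and you
# are signed in). Names and the ACR resource ID below are placeholders.
from azure.ai.ml import MLClient
from azure.ai.ml.entities import Workspace
from azure.identity import DefaultAzureCredential

ml_client = MLClient(
    DefaultAzureCredential(),
    subscription_id="<subscription-id>",
    resource_group_name="<resource-group-name>",
)

# Create a workspace, optionally attaching an existing container registry by ID.
ws = Workspace(
    name="<workspace-name>",
    location="eastus",  # placeholder region
    container_registry="/subscriptions/<service-GUID>/resourceGroups/<resource-group-name>/providers/Microsoft.ContainerRegistry/registries/<acr-name>",
)
ws = ml_client.workspaces.begin_create(ws).result()

# Roughly mirrors `az ml workspace show` and `az ml workspace delete`.
print(ml_client.workspaces.get("<workspace-name>").location)
ml_client.workspaces.begin_delete("<workspace-name>", delete_dependent_resources=False)
```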
machine-learning | How To Monitor Datasets | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-monitor-datasets.md | Learn how to monitor data drift and set alerts when drift is high. With Azure Machine Learning dataset monitors (preview), you can: * **Analyze drift in your data** to understand how it changes over time.-* **Monitor model data** for differences between training and serving datasets. Start by [collecting model data from deployed models](how-to-enable-data-collection.md). +* **Monitor model data** for differences between training and serving datasets. Start by [collecting model data from deployed models](v1/how-to-enable-data-collection.md). * **Monitor new data** for differences between any baseline and target dataset. * **Profile features in data** to track how statistical properties change over time. * **Set up alerts on data drift** for early warnings to potential issues. Limitations and known issues for data drift monitors: ## Next steps * Head to the [Azure Machine Learning studio](https://ml.azure.com) or the [Python notebook](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/work-with-data/datadrift-tutorial/datadrift-tutorial.ipynb) to set up a dataset monitor.-* See how to set up data drift on [models deployed to Azure Kubernetes Service](./how-to-enable-data-collection.md). -* Set up dataset drift monitors with [event grid](how-to-use-event-grid.md). +* See how to set up data drift on [models deployed to Azure Kubernetes Service](v1/how-to-enable-data-collection.md). +* Set up dataset drift monitors with [Azure Event Grid](how-to-use-event-grid.md). |
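To make the dataset monitor capabilities listed above concrete, here is a rough sketch using the v1 `azureml-datadrift` package to create a monitor from a baseline and target dataset. The workspace config, dataset names, and compute target are placeholders, and the exact keyword arguments may differ by SDK version; treat this as an illustration of the idea rather than the documented walkthrough.

```python
# Hedged sketch with the v1 SDK (azureml-core and azureml-datadrift installed).
# Workspace config, dataset names, and the compute target are placeholders.
from azureml.core import Workspace, Dataset
from azureml.datadrift import DataDriftDetector

ws = Workspace.from_config()
baseline = Dataset.get_by_name(ws, "baseline-dataset")  # e.g. training data
target = Dataset.get_by_name(ws, "target-dataset")      # e.g. serving data

monitor = DataDriftDetector.create_from_datasets(
    ws, "my-drift-monitor", baseline, target,
    compute_target="cpu-cluster",  # compute used to run the analysis
    frequency="Week",              # how often to analyze the target data
    drift_threshold=0.3,           # alert when drift magnitude exceeds this
)
monitor.enable_schedule()          # start the recurring analysis
```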
machine-learning | How To Prebuilt Docker Images Inference Python Extensibility | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-prebuilt-docker-images-inference-python-extensibility.md | Here are some things that may cause this problem: ## Best Practices -* Refer to the [Load registered model](how-to-deploy-advanced-entry-script.md#load-registered-models) docs. When you register a model directory, don't include your scoring script, your mounted dependencies directory, or `requirements.txt` within that directory. +* Refer to the [Load registered model](./v1/how-to-deploy-advanced-entry-script.md#load-registered-models) docs. When you register a model directory, don't include your scoring script, your mounted dependencies directory, or `requirements.txt` within that directory. * For more information on how to load a registered or local model, see [Where and how to deploy](how-to-deploy-and-where.md?tabs=azcli#define-a-dummy-entry-script). Here are some things that may cause this problem: ### 2021-07-26 * `AZUREML_EXTRA_REQUIREMENTS_TXT` and `AZUREML_EXTRA_PYTHON_LIB_PATH` are now always relative to the directory of the score script.-For example, if the both the requirements.txt and score script is in **my_folder**, then `AZUREML_EXTRA_REQUIREMENTS_TXT` will need to be set to requirements.txt. No longer will `AZUREML_EXTRA_REQUIREMENTS_TXT` be set to **my_folder/requirements.txt**. +For example, if both the requirements.txt and score script is in **my_folder**, then `AZUREML_EXTRA_REQUIREMENTS_TXT` will need to be set to requirements.txt. No longer will `AZUREML_EXTRA_REQUIREMENTS_TXT` be set to **my_folder/requirements.txt**. ## Next steps -To learn more about deploying a model, see [How to deploy a model](how-to-deploy-and-where.md). +To learn more about deploying a model, see [How to deploy a model](./v1/how-to-deploy-and-where.md). To learn how to troubleshoot prebuilt docker image deployments, see [how to troubleshoot prebuilt Docker image deployments](how-to-troubleshoot-prebuilt-docker-image-inference.md). |
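As a hedged illustration of the relative-path behavior described above, this sketch uses the v1 SDK to point `AZUREML_EXTRA_REQUIREMENTS_TXT` at a `requirements.txt` that sits next to the score script. The prebuilt image tag shown is only an example; substitute the prebuilt inference image you actually use.

```python
# Sketch only: extend a prebuilt inference image with extra Python packages.
# The base image tag is an example; pick the prebuilt image that fits your model.
from azureml.core import Environment

env = Environment(name="prebuilt-inference-extended")
env.docker.base_image = (
    "mcr.microsoft.com/azureml/minimal-ubuntu18.04-py37-cpu-inference:latest"
)
env.python.user_managed_dependencies = True  # keep the image's Python as-is

# requirements.txt lives in the same folder as score.py, so the value is the
# bare file name (relative to the score script directory, per the note above).
env.environment_variables = {"AZUREML_EXTRA_REQUIREMENTS_TXT": "requirements.txt"}
```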
machine-learning | How To Schedule Pipeline Job | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-schedule-pipeline-job.md | + + Title: Schedule Azure Machine Learning pipeline jobs (preview) ++description: Learn how to schedule pipeline jobs that allow you to automate routine, time-consuming tasks such as data processing, training, and monitoring. +++++ Last updated : 08/15/2022++++++# Schedule machine learning pipeline jobs (preview) +++> [!IMPORTANT] +> SDK v2 is currently in public preview. +> The preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities. +> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). ++In this article, you'll learn how to programmatically schedule a pipeline to run on Azure. You can create a schedule based on elapsed time. Time-based schedules can be used to take care of routine tasks, such as retraining models or running batch predictions regularly to keep them up-to-date. After learning how to create schedules, you'll learn how to retrieve, update, and deactivate them via CLI and SDK. ++## Prerequisites ++- You must have an Azure subscription to use Azure Machine Learning. If you don't have an Azure subscription, create a free account before you begin. Try the [free or paid version of Azure Machine Learning](https://azure.microsoft.com/free/) today. ++# [Azure CLI](#tab/cliv2) ++- Install the Azure CLI and the `ml` extension. Follow the installation steps in [Install, set up, and use the CLI (v2)](how-to-configure-cli.md). ++- Create an Azure Machine Learning workspace if you don't have one. For workspace creation, see [Install, set up, and use the CLI (v2)](how-to-configure-cli.md). ++# [Python](#tab/python) ++- Create an Azure Machine Learning workspace if you don't have one. +- The [Azure Machine Learning SDK v2 for Python](/python/api/overview/azure/ml/installv2). ++++## Schedule a pipeline job ++To run a pipeline job on a recurring basis, you'll need to create a schedule. A `Schedule` associates a job with a trigger. The trigger can either be `cron`, which uses a cron expression to describe the wait between runs, or `recurrence`, which specifies the frequency at which to trigger the job. In each case, you need to define a pipeline job first; it can be an existing pipeline job or a pipeline job defined inline. For more information, see [Create a pipeline job in CLI](how-to-create-component-pipelines-cli.md) and [Create a pipeline job in SDK](how-to-create-component-pipeline-python.md). ++You can schedule either a local pipeline job YAML or an existing pipeline job in the workspace. ++## Create a schedule ++### Create a time-based schedule with recurrence pattern ++# [Azure CLI](#tab/cliv2) ++++`trigger` contains the following properties: ++- **(Required)** `type` specifies the schedule type is `recurrence`. It can also be `cron`; see details in the next section. ++# [Python](#tab/python) +++[!notebook-python[] (~/azureml-examples-main/sdk/schedules/job-schedule.ipynb?name=create_schedule_recurrence)] ++`RecurrenceTrigger` contains the following properties: ++- **(Required)** To provide a better coding experience, we use `RecurrenceTrigger` for the recurrence schedule. ++++- **(Required)** `frequency` specifies the unit of time that describes how often the schedule fires. Can be `minute`, `hour`, `day`, `week`, or `month`. 
+ +- **(Required)** `interval` specifies how often the schedule fires based on the frequency, which is the number of time units to wait until the schedule fires again. + +- (Optional) `schedule` defines the recurrence pattern, containing `hours`, `minutes`, and `weekdays`. + - When `frequency` is `day`, the pattern can specify `hours` and `minutes`. + - When `frequency` is `week` or `month`, the pattern can specify `hours`, `minutes`, and `weekdays`. + - `hours` should be an integer or a list, from 0 to 23. + - `minutes` should be an integer or a list, from 0 to 59. + - `weekdays` can be a string or list from `monday` to `sunday`. + - If `schedule` is omitted, the job(s) will be triggered according to the logic of `start_time`, `frequency`, and `interval`. ++- (Optional) `start_time` describes the start date and time with timezone. If `start_time` is omitted, start_time defaults to the job creation time. If the start time is in the past, the first job will run at the next calculated run time. ++- (Optional) `end_time` describes the end date and time with timezone. If `end_time` is omitted, the schedule will continue to trigger jobs until the schedule is manually disabled. ++- (Optional) `time_zone` specifies the time zone of the recurrence. If omitted, the default is UTC. To learn more about timezone values, see [appendix for timezone values](reference-yaml-schedule.md#appendix). ++### Create a time-based schedule with cron expression ++# [Azure CLI](#tab/cliv2) ++++The `trigger` section defines the schedule details and contains the following properties: ++- **(Required)** `type` specifies the schedule type is `cron`. ++# [Python](#tab/python) +++[!notebook-python[] (~/azureml-examples-main/sdk/schedules/job-schedule.ipynb?name=create_schedule_cron)] ++The `CronTrigger` section defines the schedule details and contains the following properties: ++- **(Required)** To provide a better coding experience, we use `CronTrigger` for the cron schedule. ++++- **(Required)** `expression` uses a standard crontab expression to express a recurring schedule. A single expression is composed of five space-delimited fields: ++ `MINUTES HOURS DAYS MONTHS DAYS-OF-WEEK` ++ - A single wildcard (`*`) covers all values for the field. So a `*` in days means all days of a month (which varies with month and year). + - The `expression: "15 16 * * 1"` in the sample above means 16:15 on every Monday. + - The table below lists the valid values for each field: + + | Field | Range | Comment | + |-|-|--| + | `MINUTES` | 0-59 | - | + | `HOURS` | 0-23 | - | + | `DAYS` | - | Not supported. The value will be ignored and treated as `*`. | + | `MONTHS` | - | Not supported. The value will be ignored and treated as `*`. | + | `DAYS-OF-WEEK` | 0-6 | Zero (0) means Sunday. Names of days are also accepted. | ++ - To learn more about how to use crontab expressions, see [Crontab Expression wiki on GitHub](https://github.com/atifaziz/NCrontab/wiki/Crontab-Expression). ++ > [!IMPORTANT] + > `DAYS` and `MONTHS` are not supported. If you pass a value, it will be ignored and treated as `*`. ++- (Optional) `start_time` specifies the start date and time with timezone of the schedule. `start_time: "2022-05-10T10:15:00-04:00"` means the schedule starts from 10:15:00 AM on 2022-05-10 in the UTC-4 timezone. If `start_time` is omitted, the `start_time` will be equal to the schedule creation time. If the start time is in the past, the first job will run at the next calculated run time. ++- (Optional) `end_time` describes the end date and time with timezone. 
If `end_time` is omitted, the schedule will continue to trigger jobs until the schedule is manually disabled. ++- (Optional) `time_zone` specifies the time zone of the expression. If omitted, the default is UTC. See [appendix for timezone values](reference-yaml-schedule.md#appendix). ++### Change runtime settings when defining schedule ++When defining a schedule using an existing job, you can change the runtime settings of the job. Using this approach, you can define multiple schedules using the same job with different inputs. ++# [Azure CLI](#tab/cliv2) ++++# [Python](#tab/python) +++[!notebook-python[] (~/azureml-examples-main/sdk/schedules/job-schedule.ipynb?name=change_run_settings)] ++++The following properties can be changed when defining a schedule: ++| Property | Description | +| | | +|settings| A dictionary of settings to be used when running the pipeline job. | +|inputs| A dictionary of inputs to be used when running the pipeline job. | +|outputs| A dictionary of outputs to be used when running the pipeline job. | +|experiment_name|Experiment name of the triggered job.| ++### Expressions supported in schedule ++When defining a schedule, the following expressions are supported and will be resolved to real values during job runtime. ++| Expression | Description |Supported properties| +|-|-|-| +|`${{create_context.trigger_time}}`|The time when the schedule is triggered.|String type inputs of pipeline job| +|`${{name}}`|The name of the job.|outputs.path of pipeline job| ++## Manage schedule ++### Create schedule ++# [Azure CLI](#tab/cliv2) +++After you create the schedule YAML, you can use the following command to create a schedule via CLI. +++# [Python](#tab/python) +++[!notebook-python[] (~/azureml-examples-main/sdk/schedules/job-schedule.ipynb?name=create_schedule)] ++++### Check schedule detail ++# [Azure CLI](#tab/cliv2) ++++# [Python](#tab/python) +++[!notebook-python[] (~/azureml-examples-main/sdk/schedules/job-schedule.ipynb?name=show_schedule)] ++++### List schedules in a workspace ++# [Azure CLI](#tab/cliv2) ++++# [Python](#tab/python) +++[!notebook-python[] (~/azureml-examples-main/sdk/schedules/job-schedule.ipynb?name=list_schedule)] ++++### Update a schedule ++# [Azure CLI](#tab/cliv2) ++++# [Python](#tab/python) +++[!notebook-python[] (~/azureml-examples-main/sdk/schedules/job-schedule.ipynb?name=create_schedule)] ++++### Disable a schedule ++# [Azure CLI](#tab/cliv2) ++++# [Python](#tab/python) ++[!notebook-python[] (~/azureml-examples-main/sdk/schedules/job-schedule.ipynb?name=disable_schedule)] ++++### Enable a schedule ++# [Azure CLI](#tab/cliv2) ++++# [Python](#tab/python) +++[!notebook-python[] (~/azureml-examples-main/sdk/schedules/job-schedule.ipynb?name=enable_schedule)] ++++## Query triggered jobs from a schedule ++All jobs triggered by a schedule will have a display name of the form <schedule_name>-YYYYMMDDThhmmssZ. For example, if a schedule with a name of named-schedule is created with a scheduled run every 12 hours starting at 6 AM on Jan 1 2021, then the display names of the jobs created will be as follows: ++- named-schedule-20210101T060000Z +- named-schedule-20210101T180000Z +- named-schedule-20210102T060000Z +- named-schedule-20210102T180000Z, and so on +++You can also apply an [Azure CLI JMESPath query](/cli/azure/query-azure-cli) to query the jobs triggered by a schedule name. +++++## Delete a schedule ++> [!IMPORTANT] +> A schedule must be disabled to be deleted. 
++# [Azure CLI](#tab/cliv2) ++++# [Python](#tab/python) +++[!notebook-python[] (~/azureml-examples-main/sdk/schedules/job-schedule.ipynb?name=delete_schedule)] ++++## Next steps ++* Learn more about the [CLI (v2) schedule YAML schema](./reference-yaml-schedule.md). +* Learn how to [create pipeline job in CLI v2](how-to-create-component-pipelines-cli.md). +* Learn how to [create pipeline job in SDK v2](how-to-create-component-pipeline-python.md). +* Learn more about [CLI (v2) core YAML syntax](reference-yaml-core-syntax.md). +* Learn more about [Pipelines](concept-ml-pipelines.md). +* Learn more about [Component](concept-component.md). |
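Because the SDK snippets in the row above are pulled in from notebook includes, here is a self-contained, hedged sketch of what creating a recurrence-based schedule with the Python SDK v2 can look like. The workspace coordinates, the existing job name, and the schedule name are placeholders, and class names may shift while the feature is in preview.

```python
# Hedged sketch (azure-ai-ml SDK v2 preview): schedule an existing pipeline job
# to run every day at 10:15 in Pacific time. All names below are placeholders.
from azure.ai.ml import MLClient
from azure.ai.ml.entities import JobSchedule, RecurrencePattern, RecurrenceTrigger
from azure.identity import DefaultAzureCredential

ml_client = MLClient(
    DefaultAzureCredential(),
    subscription_id="<subscription-id>",
    resource_group_name="<resource-group-name>",
    workspace_name="<workspace-name>",
)

# Reuse an existing pipeline job from the workspace as the job to schedule.
pipeline_job = ml_client.jobs.get("<existing-pipeline-job-name>")

trigger = RecurrenceTrigger(
    frequency="day",                    # minute, hour, day, week, month
    interval=1,                         # fire every 1 <frequency>
    schedule=RecurrencePattern(hours=10, minutes=15),
    time_zone="Pacific Standard Time",  # defaults to UTC if omitted
)

job_schedule = JobSchedule(
    name="simple-recurrence-schedule", trigger=trigger, create_job=pipeline_job
)
ml_client.schedules.begin_create_or_update(job_schedule).result()
```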
machine-learning | How To Set Up Training Targets | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-set-up-training-targets.md | method, or from the Experiment tab view in Azure Machine Learning studio client * [Tutorial: Train and deploy a model](tutorial-train-deploy-notebook.md) uses a managed compute target to train a model. * See how to train models with specific ML frameworks, such as [Scikit-learn](how-to-train-scikit-learn.md), [TensorFlow](how-to-train-tensorflow.md), and [PyTorch](how-to-train-pytorch.md). * Learn how to [efficiently tune hyperparameters](how-to-tune-hyperparameters.md) to build better models.-* Once you have a trained model, learn [how and where to deploy models](how-to-deploy-and-where.md). +* Once you have a trained model, learn [how and where to deploy models](how-to-deploy-managed-online-endpoints.md). * View the [ScriptRunConfig class](/python/api/azureml-core/azureml.core.scriptrunconfig) SDK reference. * [Use Azure Machine Learning with Azure Virtual Networks](./how-to-network-security-overview.md) |
machine-learning | How To Setup Authentication | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-setup-authentication.md | Learn how to set up authentication to your Azure Machine Learning workspace from * __Service principal__: You create a service principal account in Azure Active Directory, and use it to authenticate or get a token. A service principal is used when you need an _automated process to authenticate_ to the service without requiring user interaction. For example, a continuous integration and deployment script that trains and tests a model every time the training code changes. -* __Azure CLI session__: You use an active Azure CLI session to authenticate. The Azure CLI extension for Machine Learning (the `ml` extension or CLI v2) is a command line tool for working with Azure Machine Learning. You can log in to Azure via the Azure CLI on your local workstation, without storing credentials in Python code or prompting the user to authenticate. Similarly, you can reuse the same scripts as part of continuous integration and deployment pipelines, while authenticating the Azure CLI with a service principal identity. +* __Azure CLI session__: You use an active Azure CLI session to authenticate. The Azure CLI extension for Machine Learning (the `ml` extension or CLI v2) is a command line tool for working with Azure Machine Learning. You can sign in to Azure via the Azure CLI on your local workstation, without storing credentials in Python code or prompting the user to authenticate. Similarly, you can reuse the same scripts as part of continuous integration and deployment pipelines, while authenticating the Azure CLI with a service principal identity. * __Managed identity__: When using the Azure Machine Learning SDK v2 _on a compute instance_ or _on an Azure Virtual Machine_, you can use a managed identity for Azure. This workflow allows the VM to connect to the workspace using the managed identity, without storing credentials in Python code or prompting the user to authenticate. Azure Machine Learning compute clusters can also be configured to use a managed identity to access the workspace when _training models_. The service principal can also be used to authenticate to the Azure Machine Lear For information and samples on authenticating with MSAL, see the following articles: * JavaScript - [How to migrate a JavaScript app from ADAL.js to MSAL.js](../active-directory/develop/msal-compare-msal-js-and-adal-js.md).-* Node.js - [How to migrate a Node.js app from ADAL to MSAL](../active-directory/develop/msal-node-migration.md). -* Python - [ADAL to MSAL migration guide for Python](../active-directory/develop/migrate-python-adal-msal.md). +* Node.js - [How to migrate a Node.js app from Microsoft Authentication Library to MSAL](../active-directory/develop/msal-node-migration.md). +* Python - [Microsoft Authentication Library to MSAL migration guide for Python](../active-directory/develop/migrate-python-adal-msal.md). ## Use managed identity authentication can require two-factor authentication, or allow sign in only from managed device ## Next steps * [How to use secrets in training](how-to-use-secrets-in-runs.md).-* [How to configure authentication for models deployed as a web service](how-to-authenticate-web-service.md). -* [Consume an Azure Machine Learning model deployed as a web service](how-to-consume-web-service.md). +* [How to authenticate to online endpoints](how-to-authenticate-online-endpoint.md). |
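As a hedged sketch of the service principal and Azure CLI/interactive options described above, the snippet below shows how a client might obtain a credential with `azure-identity` and connect to a workspace with SDK v2. The tenant and client IDs and the workspace coordinates are placeholders, and any real secret should come from a secure store rather than source code.

```python
# Sketch only: two common ways to get a credential for Azure Machine Learning.
# IDs and names are placeholders; never hard-code real secrets.
from azure.ai.ml import MLClient
from azure.identity import ClientSecretCredential, DefaultAzureCredential

# 1) Service principal - suits unattended automation such as CI/CD pipelines.
sp_credential = ClientSecretCredential(
    tenant_id="<tenant-id>",
    client_id="<service-principal-app-id>",
    client_secret="<service-principal-secret>",
)

# 2) DefaultAzureCredential - tries environment variables, managed identity,
#    and an active Azure CLI session, among other sources.
credential = DefaultAzureCredential()

ml_client = MLClient(
    credential,
    subscription_id="<subscription-id>",
    resource_group_name="<resource-group-name>",
    workspace_name="<workspace-name>",
)
print(ml_client.workspaces.get("<workspace-name>").name)
```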
machine-learning | How To Train Mlflow Projects | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-train-mlflow-projects.md | The [MLflow with Azure ML notebooks](https://github.com/Azure/MachineLearningNot ## Next steps * [Deploy models with MLflow](how-to-deploy-mlflow-models.md).-* Monitor your production models for [data drift](./how-to-enable-data-collection.md). +* Monitor your production models for [data drift](v1/how-to-enable-data-collection.md). * [Track Azure Databricks runs with MLflow](how-to-use-mlflow-azure-databricks.md). * [Manage your models](concept-model-management-and-deployment.md). |
machine-learning | How To Troubleshoot Prebuilt Docker Image Inference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-troubleshoot-prebuilt-docker-image-inference.md | Learn how to troubleshoot problems you may see when using prebuilt docker images ## Model deployment failed -If model deployment fails, you won't see logs in [Azure Machine Learning Studio](https://ml.azure.com/) and `service.get_logs()` will return None. +If model deployment fails, you won't see logs in [Azure Machine Learning studio](https://ml.azure.com/) and `service.get_logs()` will return None. If there is a problem in the init() function of score.py, `service.get_logs()` will return logs for the same. So you'll need to run the container locally using one of the commands shown below and replace `<MCR-path>` with an image path. For a list of the images and paths, see [Prebuilt Docker images for inference](concept-prebuilt-docker-images-inference.md). The local inference server allows you to quickly debug your entry script (`score ## For common model deployment issues -For problems when deploying a model from Azure Machine Learning to Azure Container Instances (ACI) or Azure Kubernetes Service (AKS), see [Troubleshoot model deployment](how-to-troubleshoot-deployment.md). +For problems when deploying a model from Azure Machine Learning to Azure Container Instances (ACI) or Azure Kubernetes Service (AKS), see [Troubleshoot model deployment](./v1/how-to-troubleshoot-deployment.md). ## init() or run() failing to write a file GPU base images can't be used for local deployment, unless the local deployment /var/azureml-app ``` -* If the `ENTRYPOINT` has been changed in the new built image, then the HTTP server and related components needs to be loaded by `runsvdir /var/runit` +* If the `ENTRYPOINT` has been changed in the new built image, then the HTTP server and related components need to be loaded by `runsvdir /var/runit` ## Next steps |
machine-learning | How To Use Automated Ml For Ml Models | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-automated-ml-for-ml-models.md | Now you have an operational web service to generate predictions! You can test th ## Next steps -* [Learn how to consume a web service](how-to-consume-web-service.md). +* [Learn how to consume a web service](v1/how-to-consume-web-service.md). * [Understand automated machine learning results](how-to-understand-automated-ml.md). * [Learn more about automated machine learning](concept-automated-ml.md) and Azure Machine Learning. |
machine-learning | How To Use Batch Endpoint | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-batch-endpoint.md | There are several options to specify the data inputs in CLI `invoke`. > [!NOTE] > - If you are using existing V1 FileDataset for batch endpoint, we recommend migrating them to V2 data assets and refer to them directly when invoking batch endpoints. Currently only data assets of type `uri_folder` or `uri_file` are supported. Batch endpoints created with GA CLIv2 (2.4.0 and newer) or GA REST API (2022-05-01 and newer) will not support V1 Dataset. > - You can also extract the URI or path on datastore extracted from V1 FileDataset by using `az ml dataset show` command with `--query` parameter and use that information for invoke.-> - While Batch endpoints created with earlier APIs will continue to support V1 FileDataset, we will be adding further V2 data assets support with the latest API versions for even more usability and flexibility. For more information on V2 data assets, see [Work with data using SDK v2 (preview)](how-to-use-data.md). For more information on the new V2 experience, see [What is v2](concept-v2.md). +> - While Batch endpoints created with earlier APIs will continue to support V1 FileDataset, we will be adding further V2 data assets support with the latest API versions for even more usability and flexibility. For more information on V2 data assets, see [Work with data using SDK v2 (preview)](how-to-read-write-data-v2.md). For more information on the new V2 experience, see [What is v2](concept-v2.md). #### Configure the output location and overwrite settings |
machine-learning | How To Use Data | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-data.md | - Title: Work with data using SDK v2 (preview)- -description: 'Learn to how work with data using the Python SDK v2 preview for Azure Machine Learning.' ----- Previously updated : 05/10/2022-----# Work with data using SDK v2 preview ---Azure Machine Learning allows you to work with different types of data. In this article, you'll learn about using the Python SDK v2 to work with _URIs_ and _Tables_. URIs reference a location either local to your development environment or in the cloud. Tables are a tabular data abstraction. --For most scenarios, you'll use URIs (`uri_folder` and `uri_file`). A URI references a location in storage that can be easily mapped to the filesystem of a compute node when you run a job. The data is accessed by either mounting or downloading the storage to the node. --When using tables, you'll use `mltable`. It's an abstraction for tabular data that is used for AutoML jobs, parallel jobs, and some advanced scenarios. If you're just starting to use Azure Machine Learning, and aren't using AutoML, we strongly encourage you to begin with URIs. --> [!TIP] -> If you have dataset assets created using the SDK v1, you can still use those with SDK v2. For more information, see the [Consuming V1 Dataset Assets in V2](#consuming-v1-dataset-assets-in-v2) section. ----## Prerequisites --* An Azure subscription - If you don't have an Azure subscription, create a free account before you begin. Try the [free or paid version of Azure Machine Learning](https://azure.microsoft.com/free/) today. -* An Azure Machine Learning workspace. -* The Azure Machine Learning SDK v2 for Python ----## URIs --The code snippets in this section cover the following scenarios: --* Reading data in a job -* Reading *and* writing data in a job -* Registering data as an asset in Azure Machine Learning -* Reading registered data assets from Azure Machine Learning in a job --These snippets use `uri_file` and `uri_folder`. --- `uri_file` is a type that refers to a specific file. For example, `'https://<account_name>.blob.core.windows.net/<container_name>/path/file.csv'`.-- `uri_folder` is a type that refers to a specific folder. For example, `'https://<account_name>.blob.core.windows.net/<container_name>/path'`. --> [!TIP] -> We recommend using an argument parser to pass folder information into _data-plane_ code. By data-plane code, we mean your data processing and/or training code that you run in the cloud. The code that runs in your development environment and submits code to the data-plane is _control-plane_ code. -> -> Data-plane code is typically a Python script, but can be any programming language. Passing the folder as part of job submission allows you to easily adjust the path from training locally using local data, to training in the cloud. For example, the following example uses `argparse` to get a `uri_folder`, which is joined with the file name to form a path: -> -> ```python -> # train.py -> import argparse -> import os -> import pandas as pd -> -> parser = argparse.ArgumentParser() -> parser.add_argument("--input_folder", type=str) -> args = parser.parse_args() -> -> file_name = os.path.join(args.input_folder, "MY_CSV_FILE.csv") -> df = pd.read_csv(file_name) -> print(df.head(10)) -> # process data -> # train a model -> # etc -> ``` -> -> If you wanted to pass in just an individual file rather than the entire folder you can use the `uri_file` type. 
--Below are some common data access patterns that you can use in your *control-plane* code to submit a job to Azure Machine Learning: --### Use data with a training job --Use the tabs below to select where your data is located. --# [Local data](#tab/use-local) --When you pass local data, the data is automatically uploaded to cloud storage as part of the job submission. --```python -from azure.ai.ml import Input, command -from azure.ai.ml.entities import Data -from azure.ai.ml.constants import AssetTypes --my_job_inputs = { - "input_data": Input( - path='./sample_data', # change to be your local directory - type=AssetTypes.URI_FOLDER - ) -} --job = command( - code="./src", # local path where the code is stored - command='python train.py --input_folder ${{inputs.input_data}}', - inputs=my_job_inputs, - environment="AzureML-sklearn-0.24-ubuntu18.04-py37-cpu:9", - compute="cpu-cluster" -) --#submit the command job -returned_job = ml_client.create_or_update(job) -#get a URL for the status of the job -returned_job.services["Studio"].endpoint -``` --# [ADLS Gen2](#tab/use-adls) --```python -from azure.ai.ml import Input, command -from azure.ai.ml.entities import Data, CommandJob -from azure.ai.ml.constants import AssetTypes --# in this example we -my_job_inputs = { - "input_data": Input( - path='abfss://<file_system>@<account_name>.dfs.core.windows.net/<path>', - type=AssetTypes.URI_FOLDER - ) -} --job = command( - code="./src", # local path where the code is stored - command='python train.py --input_folder ${{inputs.input_data}}', - inputs=my_job_inputs, - environment="AzureML-sklearn-0.24-ubuntu18.04-py37-cpu:9", - compute="cpu-cluster" -) --#submit the command job -returned_job = ml_client.create_or_update(job) -#get a URL for the status of the job -returned_job.services["Studio"].endpoint -``` --# [Blob](#tab/use-blob) --```python -from azure.ai.ml import Input, command -from azure.ai.ml.entities import Data, CommandJob -from azure.ai.ml.constants import AssetTypes --# in this example we -my_job_inputs = { - "input_data": Input( - path='https://<account_name>.blob.core.windows.net/<container_name>/path', - type=AssetTypes.URI_FOLDER - ) -} --job = command( - code="./src", # local path where the code is stored - command='python train.py --input_folder ${{inputs.input_data}}', - inputs=my_job_inputs, - environment="AzureML-sklearn-0.24-ubuntu18.04-py37-cpu:9", - compute="cpu-cluster" -) --#submit the command job -returned_job = ml_client.create_or_update(job) -#get a URL for the status of the job -returned_job.services["Studio"].endpoint -``` ----### Read and write data in a job --Use the tabs below to select where your data is located. 
--# [Blob](#tab/rw-blob) --```python -from azure.ai.ml import Input, command -from azure.ai.ml.entities import Data, CommandJob, JobOutput -from azure.ai.ml.constants import AssetTypes --my_job_inputs = { - "input_data": Input( - path='https://<account_name>.blob.core.windows.net/<container_name>/path', - type=AssetTypes.URI_FOLDER - ) -} --my_job_outputs = { - "output_folder": JobOutput( - path='https://<account_name>.blob.core.windows.net/<container_name>/path', - type=AssetTypes.URI_FOLDER - ) -} --job = command( - code="./src", #local path where the code is stored - command='python pre-process.py --input_folder ${{inputs.input_data}} --output_folder ${{outputs.output_folder}}', - inputs=my_job_inputs, - outputs=my_job_outputs, - environment="AzureML-sklearn-0.24-ubuntu18.04-py37-cpu:9", - compute="cpu-cluster" -) --#submit the command job -returned_job = ml_client.create_or_update(job) -#get a URL for the status of the job -returned_job.services["Studio"].endpoint -``` --# [ADLS Gen2](#tab/rw-adls) --```python -from azure.ai.ml import Input, command -from azure.ai.ml.entities import Data, CommandJob, JobOutput -from azure.ai.ml.constants import AssetTypes --my_job_inputs = { - "input_data": Input( - path='abfss://<file_system>@<account_name>.dfs.core.windows.net/<path>', - type=AssetTypes.URI_FOLDER - ) -} --my_job_outputs = { - "output_folder": JobOutput( - path='abfss://<file_system>@<account_name>.dfs.core.windows.net/<path>', - type=AssetTypes.URI_FOLDER - ) -} --job = command( - code="./src", #local path where the code is stored - command='python pre-process.py --input_folder ${{inputs.input_data}} --output_folder ${{outputs.output_folder}}', - inputs=my_job_inputs, - outputs=my_job_outputs, - environment="AzureML-sklearn-0.24-ubuntu18.04-py37-cpu:9", - compute="cpu-cluster" -) --#submit the command job -returned_job = ml_client.create_or_update(job) -#get a URL for the status of the job -returned_job.services["Studio"].endpoint -``` ---### Register data assets --```python -from azure.ai.ml.entities import Data -from azure.ai.ml.constants import AssetTypes --# select one from: -my_path = 'abfss://<file_system>@<account_name>.dfs.core.windows.net/<path>' # adls gen2 -my_path = 'https://<account_name>.blob.core.windows.net/<container_name>/path' # blob --my_data = Data( - path=my_path, - type=AssetTypes.URI_FOLDER, - description="description here", - name="a_name", - version='1' -) --ml_client.data.create_or_update(my_data) -``` --### Consume registered data assets in job --```python -from azure.ai.ml import Input, command -from azure.ai.ml.entities import Data, Input, CommandJob -from azure.ai.ml.constants import AssetTypes --registered_data_asset = ml_client.data.get(name='titanic', version='1') --my_job_inputs = { - "input_data": Input( - type=AssetTypes.URI_FOLDER, - path=registered_data_asset.id - ) -} --job = command( - code="./src", - command='python read_data_asset.py --input_folder ${{inputs.input_data}}', - inputs=my_job_inputs, - environment="AzureML-sklearn-0.24-ubuntu18.04-py37-cpu:9", - compute="cpu-cluster" -) --#submit the command job -returned_job = ml_client.create_or_update(job) -#get a URL for the status of the job -returned_job.services["Studio"].endpoint -``` --## Table --An [MLTable](concept-data.md#mltable) is primarily an abstraction over tabular data, but it can also be used for some advanced scenarios involving multiple paths. 
The following YAML describes an MLTable: --```yaml -paths: - - file: ./titanic.csv -transformations: - - read_delimited: - delimiter: ',' - encoding: 'ascii' - empty_as_string: false - header: from_first_file -``` --The contents of the MLTable file specify the underlying data location (here a local path) and also the transforms to perform on the underlying data before materializing into a pandas/spark/dask data frame. The important part here's that the MLTable-artifact doesn't have any absolute paths, making it *self-contained*. All the information stored in one folder; regardless of whether that folder is stored on your local drive or in your cloud drive or on a public http server. --To consume the data in a job or interactive session, use `mltable`: --```python -import mltable --tbl = mltable.load("./sample_data") -df = tbl.to_pandas_dataframe() -``` --For more information on the YAML file format, see [the MLTable file](how-to-create-register-data-assets.md#the-mltable-file). --<!-- Commenting until notebook is published. For a full example of using an MLTable, see the [Working with MLTable notebook]. --> --## Consuming V1 dataset assets in V2 --> [!NOTE] -> While full backward compatibility is provided, if your intention with your V1 `FileDataset` assets was to have a single path to a file or folder with no loading transforms (sample, take, filter, etc.), then we recommend that you re-create them as a `uri_file`/`uri_folder` using the v2 CLI: -> -> ```cli -> az ml data create --file my-data-asset.yaml -> ``` --Registered v1 `FileDataset` and `TabularDataset` data assets can be consumed in an v2 job using `mltable`. To use the v1 assets, add the following definition in the `inputs` section of your job yaml: --```yaml -inputs: - my_v1_dataset: - type: mltable - path: azureml:myv1ds:1 - mode: eval_mount -``` --The following example shows how to do this using the v2 SDK: --```python -from azure.ai.ml import Input, command -from azure.ai.ml.entities import Data, CommandJob -from azure.ai.ml.constants import AssetTypes --registered_v1_data_asset = ml_client.data.get(name='<ASSET NAME>', version='<VERSION NUMBER>') --my_job_inputs = { - "input_data": Input( - type=AssetTypes.MLTABLE, - path=registered_v1_data_asset.id, - mode="eval_mount" - ) -} --job = command( - code="./src", #local path where the code is stored - command='python train.py --input_data ${{inputs.input_data}}', - inputs=my_job_inputs, - environment="AzureML-sklearn-0.24-ubuntu18.04-py37-cpu:9", - compute="cpu-cluster" -) --#submit the command job -returned_job = ml_client.jobs.create_or_update(job) -#get a URL for the status of the job -returned_job.services["Studio"].endpoint -``` --## Next steps --* [Install and set up Python SDK v2 (preview)](https://aka.ms/sdk-v2-install) -* [Train models with the Python SDK v2 (preview)](how-to-train-sdk.md) -* [Tutorial: Create production ML pipelines with Python SDK v2 (preview)](tutorial-pipeline-python-sdk.md) |
machine-learning | Migrate Rebuild Integrate With Client App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/migrate-rebuild-integrate-with-client-app.md | Title: 'Migrate to Azure Machine Learning - Consume pipeline endpoints' -description: Learn how to integrate pipeline endpoints with client applications in Azure Machine Learning as part of migrating from Machine Learning Studio (Classic). +description: Learn how to integrate pipeline endpoints with client applications in Azure Machine Learning as part of migrating from Machine Learning Studio (classic). Last updated 05/31/2022 [!INCLUDE [ML Studio (classic) retirement](../../includes/machine-learning-studio-classic-deprecation.md)] -In this article, you learn how to integrate client applications with Azure Machine Learning endpoints. For more information on writing application code, see [Consume an Azure Machine Learning endpoint](how-to-consume-web-service.md). +In this article, you learn how to integrate client applications with Azure Machine Learning endpoints. For more information on writing application code, see [Consume an Azure Machine Learning endpoint](v1/how-to-consume-web-service.md). This article is part of the ML Studio (classic) to Azure Machine Learning migration series. For more information on migrating to Azure Machine Learning, see [the migration overview article](migrate-overview.md). You can call your Azure Machine Learning pipeline as a step in an Azure Data Fac ## Next steps -In this article, you learned how to find schema and sample code for your pipeline endpoints. For more information on consuming endpoints from the client application, see [Consume an Azure Machine Learning endpoint](how-to-consume-web-service.md). +In this article, you learned how to find schema and sample code for your pipeline endpoints. For more information on consuming endpoints from the client application, see [Consume an Azure Machine Learning endpoint](v1/how-to-consume-web-service.md). See the rest of the articles in the Azure Machine Learning migration series: |
machine-learning | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/policy-reference.md | Title: Built-in policy definitions for Azure Machine Learning description: Lists Azure Policy built-in policy definitions for Azure Machine Learning. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/08/2022 Last updated : 08/16/2022 |
machine-learning | Reference Managed Online Endpoints Vm Sku List | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-managed-online-endpoints-vm-sku-list.md | This table shows the VM SKUs that are supported for Azure Machine Learning manag > If you use a Windows-based image for your deployment, we recommend using a VM SKU that provides a minimum of 4 cores. | Size | General Purpose | Compute Optimized | Memory Optimized | GPU |-| | | | | | | -| V.Small | DS2 v2 | F2s v2 | E2s v3 | NC4as_T4_v3 | +| | | | | | +| V.Small | DS1 v2 <br/> DS2 v2 | F2s v2 | E2s v3 | NC4as_T4_v3 | | Small | DS3 v2 | F4s v2 | E4s v3 | NC6s v2 <br/> NC6s v3 <br/> NC8as_T4_v3 | | Medium | DS4 v2 | F8s v2 | E8s v3 | NC12s v2 <br/> NC12s v3 <br/> NC16as_T4_v3 | | Large | DS5 v2 | F16s v2 | E16s v3 | NC24s v2 <br/> NC24s v3 <br/> NC64as_T4_v3 | |
machine-learning | Reference Yaml Schedule | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-schedule.md | + + Title: 'CLI (v2) schedule YAML schema' ++description: Reference documentation for the CLI (v2) schedule YAML schema. +++++++ Last updated : 08/15/2022++++# CLI (v2) schedule YAML schema +++The source JSON schema can be found at https://azuremlschemas.azureedge.net/latest/schedule.schema.json. ++++## YAML syntax ++| Key | Type | Description | Allowed values | +| | - | -- | -- | +| `$schema` | string | The YAML schema. | | +| `name` | string | **Required.** Name of the schedule. | | +| `version` | string | Version of the schedule. If omitted, Azure ML will autogenerate a version. | | +| `description` | string | Description of the schedule. | | +| `tags` | object | Dictionary of tags for the schedule. | | +| `trigger` | object | The trigger configuration to define rule when to trigger job. **One of `RecurrenceTrigger` or `CronTrigger` is required.** | | +| `create_job` | object or string | **Required.** The definition of the job that will be triggered by a schedule. **One of `string` or `JobDefinition` is required.**| | ++### Trigger configuration ++#### Recurrence trigger ++| Key | Type | Description | Allowed values | +| | - | -- | -- | +| `type` | string | **Required.** Specifies the schedule type. |recurrence| +|`frequency`| string | **Required.** Specifies the unit of time that describes how often the schedule fires.|`minute`, `hour`, `day`, `week`, `month`| +|`interval`| integer | **Required.** Specifies the interval at which the schedule fires.| | +|`start_time`| string |Describes the start date and time with timezone. If start_time is omitted, the first job will run instantly and the future jobs will be triggered based on the schedule, saying start_time will be equal to the job created time. If the start time is in the past, the first job will run at the next calculated run time.| +|`end_time`| string |Describes the end date and time with timezone. If end_time is omitted, the schedule will continue to run until it's explicitly disabled.| +|`timezone`| string |Specifies the time zone of the recurrence. If omitted, by default is UTC. |See [appendix for timezone values](#timezone)| +|`pattern`|object|Specifies the pattern of the recurrence. If pattern is omitted, the job(s) will be triggered according to the logic of start_time, frequency and interval.| | ++#### Recurrence schedule ++Recurrence schedule defines the recurrence pattern, containing `hours`, `minutes`, and `weekdays`. ++- When frequency is `day`, pattern can specify `hours` and `minutes`. +- When frequency is `week` and `month`, pattern can specify `hours`, `minutes` and `weekdays`. ++| Key | Type | Allowed values | +| | - | -- | +|`hours`|integer or array of integer|`0-23`| +|`minutes`|integer or array of integer|`0-59`| +|`week_days`|string or array of string|`monday`, `tuesday`, `wednesday`, `thursday`, `friday`, `saturday`, `sunday`| +++#### CronTrigger ++| Key | Type | Description | Allowed values | +| | - | -- | -- | +| `type` | string | **Required.** Specifies the schedule type. |cron| +| `expression` | string | **Required.** Specifies the cron expression to define how to trigger jobs. expression uses standard crontab expression to express a recurring schedule. A single expression is composed of five space-delimited fields:`MINUTES HOURS DAYS MONTHS DAYS-OF-WEEK`|| +|`start_time`| string |Describes the start date and time with timezone. 
If start_time is omitted, the first job will run instantly and the future jobs will be triggered based on the schedule, saying start_time will be equal to the job created time. If the start time is in the past, the first job will run at the next calculated run time.| +|`end_time`| string |Describes the end date and time with timezone. If end_time is omitted, the schedule will continue to run until it's explicitly disabled.| +|`timezone`| string |Specifies the time zone of the recurrence. If omitted, by default is UTC. |See [appendix for timezone values](#timezone)| ++### Job definition ++Customer can directly use `create_job: azureml:<job_name>` or can use the following properties to define the job. ++| Key | Type | Description | Allowed values | +| | - | -- | -- | +|`type`| string | **Required.** Specifies the job type. Only pipeline job is supported.|`pipeline`| +|`job`| string | **Required.** Define how to reference a job, it can be `azureml:<job_name>` or a local pipeline job yaml such as `file:hello-pipeline.yml`.| | +| `experiment_name` | string | Experiment name to organize the job under. Each job's run record will be organized under the corresponding experiment in the studio's "Experiments" tab. If omitted, we'll take schedule name as default value. | | +|`inputs`| object | Dictionary of inputs to the job. The key is a name for the input within the context of the job and the value is the input value.| | +|`outputs`|object | Dictionary of output configurations of the job. The key is a name for the output within the context of the job and the value is the output configuration.| | +| `settings` | object | Default settings for the pipeline job. See [Attributes of the `settings` key](#attributes-of-the-settings-key) for the set of configurable properties. | | ++### Attributes of the `settings` key ++| Key | Type | Description | Default value | +| | - | -- | - | +| `default_datastore` | string | Name of the datastore to use as the default datastore for the pipeline job. This value must be a reference to an existing datastore in the workspace using the `azureml:<datastore-name>` syntax. Any outputs defined in the `outputs` property of the parent pipeline job or child step jobs will be stored in this datastore. If omitted, outputs will be stored in the workspace blob datastore. | | +| `default_compute` | string | Name of the compute target to use as the default compute for all steps in the pipeline. If compute is defined at the step level, it will override this default compute for that specific step. This value must be a reference to an existing compute in the workspace using the `azureml:<compute-name>` syntax. | | +| `continue_on_step_failure` | boolean | Whether the execution of steps in the pipeline should continue if one step fails. The default value is `False`, which means that if one step fails, the pipeline execution will be stopped, canceling any running steps. | `False` | ++### Job inputs ++| Key | Type | Description | Allowed values | Default value | +| | - | -- | -- | - | +| `type` | string | The type of job input. Specify `uri_file` for input data that points to a single file source, or `uri_folder` for input data that points to a folder source. | `uri_file`, `uri_folder` | `uri_folder` | +| `path` | string | The path to the data to use as input. This can be specified in a few ways: <br><br> - A local path to the data source file or folder, for example, `path: ./iris.csv`. The data will get uploaded during job submission. 
+
+## YAML: Schedule with cron expression
+
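A comparable sketch using a cron trigger, again with placeholder names and a local pipeline job file in the `file:` form described in the job definition table:

```yaml
# Minimal sketch; key names follow the tables in this article.
$schema: https://azuremlschemas.azureedge.net/latest/schedule.schema.json
name: weekday_hourly_schedule            # placeholder name
description: Run a pipeline job at minute 0 of every hour, Monday through Friday.

trigger:
  type: cron
  # MINUTES HOURS DAYS MONTHS DAYS-OF-WEEK (standard crontab fields; 1-5 = Monday-Friday)
  expression: "0 * * * 1-5"
  # timezone is omitted, so UTC is used by default

create_job:
  type: pipeline
  job: file:hello-pipeline.yml           # local pipeline job YAML, as in the job definition table
  experiment_name: scheduled-pipelines   # placeholder; defaults to the schedule name if omitted
```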
+
+## Appendix
+
+### Timezone
+
+Schedules currently support the following time zones. The key can be used directly in the Python SDK, while the value can be used in the schedule YAML. The table is organized by UTC (Coordinated Universal Time) offset.
+
+| UTC | Key | Value |
+| --- | --- | --- |
+| UTC -12:00 | DATELINE_STANDARD_TIME | "Dateline Standard Time" |
+| UTC -11:00 | UTC_11 | "UTC-11" |
+| UTC -10:00 | ALEUTIAN_STANDARD_TIME | "Aleutian Standard Time" |
+| UTC -10:00 | HAWAIIAN_STANDARD_TIME | "Hawaiian Standard Time" |
+| UTC -09:30 | MARQUESAS_STANDARD_TIME | "Marquesas Standard Time" |
+| UTC -09:00 | ALASKAN_STANDARD_TIME | "Alaskan Standard Time" |
+| UTC -09:00 | UTC_09 | "UTC-09" |
+| UTC -08:00 | PACIFIC_STANDARD_TIME_MEXICO | "Pacific Standard Time (Mexico)" |
+| UTC -08:00 | UTC_08 | "UTC-08" |
+| UTC -08:00 | PACIFIC_STANDARD_TIME | "Pacific Standard Time" |
+| UTC -07:00 | US_MOUNTAIN_STANDARD_TIME | "US Mountain Standard Time" |
+| UTC -07:00 | MOUNTAIN_STANDARD_TIME_MEXICO | "Mountain Standard Time (Mexico)" |
+| UTC -07:00 | MOUNTAIN_STANDARD_TIME | "Mountain Standard Time" |
+| UTC -06:00 | CENTRAL_AMERICA_STANDARD_TIME | "Central America Standard Time" |
+| UTC -06:00 | CENTRAL_STANDARD_TIME | "Central Standard Time" |
+| UTC -06:00 | EASTER_ISLAND_STANDARD_TIME | "Easter Island Standard Time" |
+| UTC -06:00 | CENTRAL_STANDARD_TIME_MEXICO | "Central Standard Time (Mexico)" |
+| UTC -06:00 | CANADA_CENTRAL_STANDARD_TIME | "Canada Central Standard Time" |
+| UTC -05:00 | SA_PACIFIC_STANDARD_TIME | "SA Pacific Standard Time" |
+| UTC -05:00 | EASTERN_STANDARD_TIME_MEXICO | "Eastern Standard Time (Mexico)" |
+| UTC -05:00 | EASTERN_STANDARD_TIME | "Eastern Standard Time" |
+| UTC -05:00 | HAITI_STANDARD_TIME | "Haiti Standard Time" |
+| UTC -05:00 | CUBA_STANDARD_TIME | "Cuba Standard Time" |
+| UTC -05:00 | US_EASTERN_STANDARD_TIME | "US Eastern Standard Time" |
+| UTC -05:00 | TURKS_AND_CAICOS_STANDARD_TIME | "Turks And Caicos Standard Time" |
+| UTC -04:00 | PARAGUAY_STANDARD_TIME | "Paraguay Standard Time" |
+| UTC -04:00 | ATLANTIC_STANDARD_TIME | "Atlantic Standard Time" |
+| UTC -04:00 | VENEZUELA_STANDARD_TIME | "Venezuela Standard Time" |
+| UTC -04:00 | CENTRAL_BRAZILIAN_STANDARD_TIME | "Central Brazilian Standard Time" |
+| UTC -04:00 | SA_WESTERN_STANDARD_TIME | "SA Western Standard Time" |
+| UTC -04:00 | PACIFIC_SA_STANDARD_TIME | "Pacific SA Standard Time" |
+| UTC -03:30 | NEWFOUNDLAND_STANDARD_TIME | "Newfoundland Standard Time" |
+| UTC -03:00 | TOCANTINS_STANDARD_TIME | "Tocantins Standard Time" |
+| UTC -03:00 | E_SOUTH_AMERICAN_STANDARD_TIME | "E. South America Standard Time" |
+| UTC -03:00 | SA_EASTERN_STANDARD_TIME | "SA Eastern Standard Time" |
+| UTC -03:00 | ARGENTINA_STANDARD_TIME | "Argentina Standard Time" |
+| UTC -03:00 | GREENLAND_STANDARD_TIME | "Greenland Standard Time" |
+| UTC -03:00 | MONTEVIDEO_STANDARD_TIME | "Montevideo Standard Time" |
+| UTC -03:00 | SAINT_PIERRE_STANDARD_TIME | "Saint Pierre Standard Time" |
+| UTC -03:00 | BAHIA_STANDARD_TIME | "Bahia Standard Time" |
+| UTC -02:00 | UTC_02 | "UTC-02" |
+| UTC -02:00 | MID_ATLANTIC_STANDARD_TIME | "Mid-Atlantic Standard Time" |
+| UTC -01:00 | AZORES_STANDARD_TIME | "Azores Standard Time" |
+| UTC -01:00 | CAPE_VERDE_STANDARD_TIME | "Cape Verde Standard Time" | |