Updates from: 08/17/2022 01:09:13
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-domain-services Migrate From Classic Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/migrate-from-classic-vnet.md
Previously updated : 03/07/2022 Last updated : 08/15/2022
Before you begin the migration process, complete the following initial checks an
Make sure that network settings don't block the ports required for Azure AD DS. Ports must be open on both the Classic virtual network and the Resource Manager virtual network. These settings include route tables (although using route tables isn't recommended) and network security groups.
- Azure AD DS needs a network security group to secure the ports needed for the managed domain and block all other incoming traffic. This network security group acts as an extra layer of protection to lock down access to the managed domain. To view the ports required, see [Network security groups and required ports][network-ports].
+ Azure AD DS needs a network security group to secure the ports needed for the managed domain and block all other incoming traffic. This network security group acts as an extra layer of protection to lock down access to the managed domain.
- If you use secure LDAP, add a rule to the network security group to allow incoming traffic for *TCP* port *636*. For more information, see [Lock down secure LDAP access over the internet](tutorial-configure-ldaps.md#lock-down-secure-ldap-access-over-the-internet)
+ The following network security group Inbound rules are required for the managed domain to provide authentication and management services. Don't edit or delete these network security group rules for the virtual network subnet your managed domain is deployed into.
+
+ | Inbound port number | Protocol | Source | Destination | Action | Required | Purpose |
+ |:--:|:--:|:--:|:--:|:--:|:--:|:--|
+ | 5986 | TCP | AzureActiveDirectoryDomainServices | Any | Allow | Yes | Management of your domain. |
+ | 3389 | TCP | CorpNetSaw | Any | Allow | Optional | Debugging for support. |
+ | 636 | TCP | AzureActiveDirectoryDomainServices | Any | Allow | Optional | Secure LDAP. |
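The rules in this table can also be recreated programmatically if they're missing. The following Azure PowerShell sketch shows the idea; the NSG name, resource group, location, and rule priorities are placeholder assumptions, not values from the article.

```powershell
# Minimal sketch (placeholder names and priorities): recreate the inbound rules
# on a network security group for the subnet that hosts the managed domain.
$allowPSRemoting = New-AzNetworkSecurityRuleConfig -Name "AllowPSRemoting" `
    -Access Allow -Protocol Tcp -Direction Inbound -Priority 301 `
    -SourceAddressPrefix "AzureActiveDirectoryDomainServices" -SourcePortRange "*" `
    -DestinationAddressPrefix "*" -DestinationPortRange 5986

$allowRD = New-AzNetworkSecurityRuleConfig -Name "AllowRD" `
    -Access Allow -Protocol Tcp -Direction Inbound -Priority 201 `
    -SourceAddressPrefix "CorpNetSaw" -SourcePortRange "*" `
    -DestinationAddressPrefix "*" -DestinationPortRange 3389

New-AzNetworkSecurityGroup -Name "aadds-nsg" -ResourceGroupName "myResourceGroup" `
    -Location "westus2" -SecurityRules $allowPSRemoting, $allowRD
```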
Make a note of this target resource group, target virtual network, and target virtual network subnet. These resource names are used during the migration process.
active-directory-domain-services Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/policy-reference.md
Title: Built-in policy definitions for Azure Active Directory Domain Services description: Lists Azure Policy built-in policy definitions for Azure Active Directory Domain Services. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/08/2022 Last updated : 08/16/2022
active-directory How To Mfa Additional Context https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/how-to-mfa-additional-context.md
Additional context isn't supported for Network Policy Server (NPS).
## Next steps [Authentication methods in Azure Active Directory - Microsoft Authenticator app](concept-authentication-authenticator-app.md)-
active-directory How To Mfa Number Match https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/how-to-mfa-number-match.md
Number matching isn't supported for Apple Watch notifications. Apple Watch need
## Next steps
-[Authentication methods in Azure Active Directory](concept-authentication-authenticator-app.md)
+[Authentication methods in Azure Active Directory](concept-authentication-authenticator-app.md)
active-directory Howto Mfa Mfasettings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-mfa-mfasettings.md
Previously updated : 08/07/2022 Last updated : 08/16/2022
The fraud alert feature lets users report fraudulent attempts to access their re
The following fraud alert configuration options are available: * **Automatically block users who report fraud**. If a user reports fraud, the Azure AD Multi-Factor Authentication attempts for the user account are blocked for 90 days or until an administrator unblocks the account. An administrator can review sign-ins by using the sign-in report, and take appropriate action to prevent future fraud. An administrator can then [unblock](#unblock-a-user) the user's account.
-* **Code to report fraud during initial greeting**. When users receive a phone call to perform multi-factor authentication, they normally press **#** to confirm their sign-in. To report fraud, the user enters a code before pressing **#**. This code is **0** by default, but you can customize it.
+* **Code to report fraud during initial greeting**. When users receive a phone call to perform multi-factor authentication, they normally press **#** to confirm their sign-in. To report fraud, the user enters a code before pressing **#**. This code is **0** by default, but you can customize it. If automatic blocking is enabled, after the user presses **0#** to report fraud, they need to press **1** to confirm the account blocking.
> [!NOTE] > The default voice greetings from Microsoft instruct users to press **0#** to submit a fraud alert. If you want to use a code other than **0**, record and upload your own custom voice greetings with appropriate instructions for your users.
active-directory Howto Conditional Access Policy Risk User https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/howto-conditional-access-policy-risk-user.md
Previously updated : 03/21/2022 Last updated : 08/16/2022
After confirming your settings using [report-only mode](howto-conditional-access
## Next steps
+[Remediate risks and unblock users](../identity-protection/howto-identity-protection-remediate-unblock.md)
+ [Conditional Access common policies](concept-conditional-access-policy-common.md) [Sign-in risk-based Conditional Access](howto-conditional-access-policy-risk.md)
active-directory Howto Conditional Access Policy Risk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/howto-conditional-access-policy-risk.md
Previously updated : 03/21/2022 Last updated : 08/16/2022
After confirming your settings using [report-only mode](howto-conditional-access
## Next steps
+[Remediate risks and unblock users](../identity-protection/howto-identity-protection-remediate-unblock.md)
+ [Conditional Access common policies](concept-conditional-access-policy-common.md) [User risk-based Conditional Access](howto-conditional-access-policy-risk-user.md)
active-directory Troubleshoot Conditional Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/troubleshoot-conditional-access.md
Previously updated : 07/06/2022 Last updated : 08/16/2022
To find out which Conditional Access policy or policies applied and why do the f
1. To investigate further, drill down into the configuration of the policies by clicking on the **Policy Name**. Clicking the **Policy Name** will show the policy configuration user interface for the selected policy for review and editing. 1. The **client user** and **device details** that were used for the Conditional Access policy assessment are also available in the **Basic Info**, **Location**, **Device Info**, **Authentication Details**, and **Additional Details** tabs of the sign-in event.
-### Policy details
+### Policy not working as intended
Selecting the ellipsis on the right side of the policy in a sign-in event brings up policy details. This option gives administrators additional information about why a policy was successfully applied or not.
Selecting the ellipsis on the right side of the policy in a sign-in event brings
The left side provides details collected at sign-in and the right side provides details of whether those details satisfy the requirements of the applied Conditional Access policies. Conditional Access policies only apply when all conditions are satisfied or not configured.
-If the information in the event isn't enough to understand the sign-in results, or adjust the policy to get desired results, the sign-in diagnostic tool can be used. The sign-in diagnostic can be found under **Basic info** > **Troubleshoot Event**. For more information about the sign-in diagnostic, see the article [What is the sign-in diagnostic in Azure AD](../reports-monitoring/overview-sign-in-diagnostics.md).
+If the information in the event isn't enough to understand the sign-in results, or adjust the policy to get desired results, the sign-in diagnostic tool can be used. The sign-in diagnostic can be found under **Basic info** > **Troubleshoot Event**. For more information about the sign-in diagnostic, see the article [What is the sign-in diagnostic in Azure AD](../reports-monitoring/overview-sign-in-diagnostics.md). You can also [use the What If tool to troubleshoot Conditional Access policies](what-if-tool.md).
If you need to submit a support incident, provide the request ID and time and date from the sign-in event in the incident submission details. This information will allow Microsoft support to find the specific event you're concerned about.
-### Conditional Access error codes
+### Common Conditional Access error codes
| Sign-in Error Code | Error String | | | |
If you need to submit a support incident, provide the request ID and time and da
| 53003 | BlockedByConditionalAccess | | 53004 | ProofUpBlockedDueToRisk |
+More information about error codes can be found in the article [Azure AD Authentication and authorization error codes](../develop/reference-aadsts-error-codes.md). Error codes in the list appear with a prefix of `AADSTS` followed by the code seen in the browser, for example `AADSTS53002`.
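As an illustrative sketch (not from the article), recent sign-ins that failed with one of these codes can be pulled with the Microsoft Graph PowerShell module; the example code `53003` and filter support on `status/errorCode` are assumptions used for the demonstration.

```powershell
# Minimal sketch: list recent sign-ins blocked by Conditional Access
# (error code 53003, BlockedByConditionalAccess). Requires Microsoft.Graph.Reports.
Connect-MgGraph -Scopes "AuditLog.Read.All", "Directory.Read.All"

Get-MgAuditLogSignIn -Filter "status/errorCode eq 53003" -Top 10 |
    Select-Object CreatedDateTime, UserPrincipalName, AppDisplayName, IPAddress
```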
+ ## Service dependencies In some specific scenarios, users are blocked because there are cloud apps with dependencies on resources that are blocked by Conditional Access policy.
-To determine the service dependency, check the sign-ins log for the Application and Resource called by the sign-in. In the following screenshot, the application called is **Azure Portal** but the resource called is **Windows Azure Service Management API**. To target this scenario appropriately all the applications and resources should be similarly combined in Conditional Access policy.
+To determine the service dependency, check the sign-ins log for the application and resource called by the sign-in. In the following screenshot, the application called is **Azure Portal** but the resource called is **Windows Azure Service Management API**. To target this scenario appropriately all the applications and resources should be similarly combined in Conditional Access policy.
:::image type="content" source="media/troubleshoot-conditional-access/service-dependency-example-sign-in.png" alt-text="Screenshot that shows an example sign-in log showing an Application calling a Resource. This scenario is also known as a service dependency." lightbox="media/troubleshoot-conditional-access/service-dependency-example-sign-in.png":::
If you're locked out of the Azure portal due to an incorrect setting in a Condit
## Next steps
+- [Use the What If tool to troubleshoot Conditional Access policies](what-if-tool.md)
- [Sign-in activity reports in the Azure Active Directory portal](../reports-monitoring/concept-sign-ins.md) - [Troubleshooting Conditional Access using the What If tool](troubleshoot-conditional-access-what-if.md)
active-directory What If Tool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/what-if-tool.md
Previously updated : 08/09/2022 Last updated : 08/16/2022
active-directory Sample V2 Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/sample-v2-code.md
The following samples show public client desktop applications that access the Mi
## Mobile
-The following samples show public client mobile applications that access the Microsoft Graph API, or your own web API in the name of the user. These client applications use the Microsoft Authentication Library (MSAL).
+The following samples show public client mobile applications that access the Microsoft Graph API. These client applications use the Microsoft Authentication Library (MSAL).
> [!div class="mx-tdCol2BreakAll"] > | Language/<br/>Platform | Code sample(s) <br/> on GitHub |Auth<br/> libraries |Auth flow |
active-directory Migrate From Federation To Cloud Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/migrate-from-federation-to-cloud-authentication.md
Existing Legacy clients (Exchange ActiveSync, Outlook 2010/2013) aren't affected
Modern authentication clients (Office 2016 and Office 2013, iOS, and Android apps) use a valid refresh token to obtain new access tokens for continued access to resources instead of returning to AD FS. These clients are immune to any password prompts resulting from the domain conversion process. The clients will continue to function without extra configuration.
+>[!NOTE]
+>When you migrate from federated to cloud authentication, the process to convert the domain from federated to managed may take up to 60 minutes. During this process, users might not be prompted for credentials for any new logins to the Azure portal or other browser-based applications protected with Azure AD. We recommend that you include this delay in your maintenance window.
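As a hedged sketch (not from the article), the domain's current authentication type can be checked with Microsoft Graph PowerShell while you wait for the conversion to finish; `contoso.com` is a placeholder domain name.

```powershell
# Minimal sketch: check whether the domain reports "Managed" yet.
# Requires the Microsoft.Graph.Identity.DirectoryManagement module.
Connect-MgGraph -Scopes "Domain.Read.All"

Get-MgDomain -DomainId "contoso.com" | Select-Object Id, AuthenticationType
```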
+ ### Plan for rollback > [!TIP]
active-directory Concept Identity Protection Risks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/identity-protection/concept-identity-protection-risks.md
Previously updated : 04/15/2022 Last updated : 08/16/2022
Risk can be detected at the **User** and **Sign-in** level and two types of dete
A sign-in risk represents the probability that a given authentication request isn't authorized by the identity owner. Risky activity can be detected for a user that isn't linked to a specific malicious sign-in but to the user itself.
-Real-time detections may not show up in reporting for five to 10 minutes. Offline detections may not show up in reporting for 48 hours.
+Real-time detections may not show up in reporting for 5 to 10 minutes. Offline detections may not show up in reporting for 48 hours.
> [!NOTE]
-> Our system may detect that the risk event that contributed to the risk user risk score was a false positives or the user risk was remediated with policy enforcement such as completing multi-factor authentication or secure password change. Therefore our system will dismiss the risk state and a risk detail of "AI confirmed sign-in safe" will surface and it will no longer contribute to the user's risk.
+> Our system may detect that the risk event that contributed to the risk user risk score was either:
+>
+> - A false positive
+> - The [user risk was remediated](howto-identity-protection-remediate-unblock.md) by policy enforcement, by either:
+> - Completing multifactor authentication
+> - Completing a secure password change
+>
+> Our system will dismiss the risk state and a risk detail of "AI confirmed sign-in safe" will show and no longer contribute to the user's overall risk.
### Premium detections
-Premium detections are visible only to Azure AD Premium P2 customers. Customers without Azure AD Premium P2 licenses still receives the premium detections but they'll be titled "additional risk detected".
-
+Premium detections are visible only to Azure AD Premium P2 customers. Customers without Azure AD Premium P2 licenses still receive the premium detections but they'll be titled "additional risk detected".
### Sign-in risk
Premium detections are visible only to Azure AD Premium P2 customers. Customers
| Risk detection | Detection type | Description | | | | |
-| Atypical travel | Offline | This risk detection type identifies two sign-ins originating from geographically distant locations, where at least one of the locations may also be atypical for the user, given past behavior. Among several other factors, this machine learning algorithm takes into account the time between the two sign-ins and the time it would have taken for the user to travel from the first location to the second, indicating that a different user is using the same credentials. <br><br> The algorithm ignores obvious "false positives" contributing to the impossible travel conditions, such as VPNs and locations regularly used by other users in the organization. The system has an initial learning period of the earliest of 14 days or 10 logins, during which it learns a new user's sign-in behavior. |
-| Anomalous Token | Offline | This detection indicates that there are abnormal characteristics in the token such as an unusual token lifetime or a token that is played from an unfamiliar location. This detection covers Session Tokens and Refresh Tokens. <br><br> **NOTE:** Anomalous token is tuned to incur more noise than other detections at the same risk level. This tradeoff is chosen to increase the likelihood of detecting replayed tokens that may otherwise go unnoticed. Because this is a high noise detection, there's a higher than normal chance that some of the sessions flagged by this detection are false positives. We recommend investigating the sessions flagged by this detection in the context of other sign-ins from the user. If the location, application, IP address, User Agent, or other characteristics are unexpected for the user, the tenant admin should consider this as an indicator of potential token replay. |
+| Atypical travel | Offline | This risk detection type identifies two sign-ins originating from geographically distant locations, where at least one of the locations may also be atypical for the user, given past behavior. The algorithm takes into account multiple factors including the time between the two sign-ins and the time it would have taken for the user to travel from the first location to the second. This risk may indicate that a different user is using the same credentials. <br><br> The algorithm ignores obvious "false positives" contributing to the impossible travel conditions, such as VPNs and locations regularly used by other users in the organization. The system has an initial learning period of the earliest of 14 days or 10 logins, during which it learns a new user's sign-in behavior. |
+| Anomalous Token | Offline | This detection indicates that there are abnormal characteristics in the token such as an unusual token lifetime or a token that is played from an unfamiliar location. This detection covers Session Tokens and Refresh Tokens. <br><br> **NOTE:** Anomalous token is tuned to incur more noise than other detections at the same risk level. This tradeoff is chosen to increase the likelihood of detecting replayed tokens that may otherwise go unnoticed. Because this is a high noise detection, there's a higher than normal chance that some of the sessions flagged by this detection are false positives. We recommend investigating the sessions flagged by this detection in the context of other sign-ins from the user. If the location, application, IP address, User Agent, or other characteristics are unexpected for the user, the tenant admin should consider this risk as an indicator of potential token replay. |
| Token Issuer Anomaly | Offline |This risk detection indicates the SAML token issuer for the associated SAML token is potentially compromised. The claims included in the token are unusual or match known attacker patterns. |
-| Malware linked IP address | Offline | This risk detection type indicates sign-ins from IP addresses infected with malware that is known to actively communicate with a bot server. This detection is determined by correlating IP addresses of the user's device against IP addresses that were in contact with a bot server while the bot server was active. <br><br> **[This detection has been deprecated](../fundamentals/whats-new-archive.md#planned-deprecationmalware-linked-ip-address-detection-in-identity-protection)**. Identity Protection will no longer generate new "Malware linked IP address" detections. Customers who currently have "Malware linked IP address" detections in their tenant will still be able to view, remediate, or dismiss them until the 90-day detection retention time is reached.|
+| Malware linked IP address | Offline | This risk detection type indicates sign-ins from IP addresses infected with malware that is known to actively communicate with a bot server. This detection matches the IP addresses of the user's device against IP addresses that were in contact with a bot server while the bot server was active. <br><br> **[This detection has been deprecated](../fundamentals/whats-new-archive.md#planned-deprecationmalware-linked-ip-address-detection-in-identity-protection)**. Identity Protection will no longer generate new "Malware linked IP address" detections. Customers who currently have "Malware linked IP address" detections in their tenant will still be able to view, remediate, or dismiss them until the 90-day detection retention time is reached.|
| Suspicious browser | Offline | Suspicious browser detection indicates anomalous behavior based on suspicious sign-in activity across multiple tenants from different countries in the same browser. | | Unfamiliar sign-in properties | Real-time |This risk detection type considers past sign-in history to look for anomalous sign-ins. The system stores information about previous sign-ins, and triggers a risk detection when a sign-in occurs with properties that are unfamiliar to the user. These properties can include IP, ASN, location, device, browser, and tenant IP subnet. Newly created users will be in "learning mode" period where the unfamiliar sign-in properties risk detection will be turned off while our algorithms learn the user's behavior. The learning mode duration is dynamic and depends on how much time it takes the algorithm to gather enough information about the user's sign-in patterns. The minimum duration is five days. A user can go back into learning mode after a long period of inactivity. <br><br> We also run this detection for basic authentication (or legacy protocols). Because these protocols don't have modern properties such as client ID, there's limited telemetry to reduce false positives. We recommend our customers to move to modern authentication. <br><br> Unfamiliar sign-in properties can be detected on both interactive and non-interactive sign-ins. When this detection is detected on non-interactive sign-ins, it deserves increased scrutiny due to the risk of token replay attacks. | | Malicious IP address | Offline | This detection indicates sign-in from a malicious IP address. An IP address is considered malicious based on high failure rates because of invalid credentials received from the IP address or other IP reputation sources. |
-| Suspicious inbox manipulation rules | Offline | This detection is discovered by [Microsoft Defender for Cloud Apps](/cloud-app-security/anomaly-detection-policy#suspicious-inbox-manipulation-rules). This detection profiles your environment and triggers alerts when suspicious rules that delete or move messages or folders are set on a user's inbox. This detection may indicate that the user's account is compromised, that messages are being intentionally hidden, and that the mailbox is being used to distribute spam or malware in your organization. |
+| Suspicious inbox manipulation rules | Offline | This detection is discovered by [Microsoft Defender for Cloud Apps](/cloud-app-security/anomaly-detection-policy#suspicious-inbox-manipulation-rules). This detection looks at your environment and triggers alerts when suspicious rules that delete or move messages or folders are set on a user's inbox. This detection may indicate that a user's account is compromised, that messages are being intentionally hidden, and that the mailbox is being used to distribute spam or malware in your organization. |
| Password spray | Offline | A password spray attack is where multiple usernames are attacked using common passwords in a unified brute force manner to gain unauthorized access. This risk detection is triggered when a password spray attack has been successfully performed. For example, the attacker is successfully authenticated, in the detected instance. |
-| Impossible travel | Offline | This detection is discovered by [Microsoft Defender for Cloud Apps](/cloud-app-security/anomaly-detection-policy#impossible-travel). This detection identifies two user activities (is a single or multiple sessions) originating from geographically distant locations within a time period shorter than the time it would have taken the user to travel from the first location to the second, indicating that a different user is using the same credentials. |
+| Impossible travel | Offline | This detection is discovered by [Microsoft Defender for Cloud Apps](/cloud-app-security/anomaly-detection-policy#impossible-travel). This detection identifies user activities (in a single or multiple sessions) originating from geographically distant locations within a time period shorter than the time it takes to travel from the first location to the second. This risk may indicate that a different user is using the same credentials. |
| New country | Offline | This detection is discovered by [Microsoft Defender for Cloud Apps](/cloud-app-security/anomaly-detection-policy#activity-from-infrequent-country). This detection considers past activity locations to determine new and infrequent locations. The anomaly detection engine stores information about previous locations used by users in the organization. | | Activity from anonymous IP address | Offline | This detection is discovered by [Microsoft Defender for Cloud Apps](/cloud-app-security/anomaly-detection-policy#activity-from-anonymous-ip-addresses). This detection identifies that users were active from an IP address that has been identified as an anonymous proxy IP address. | | Suspicious inbox forwarding | Offline | This detection is discovered by [Microsoft Defender for Cloud Apps](/cloud-app-security/anomaly-detection-policy#suspicious-inbox-forwarding). This detection looks for suspicious email forwarding rules, for example, if a user created an inbox rule that forwards a copy of all emails to an external address. |
-| Mass Access to Sensitive Files | Offline | This detection is discovered by [Microsoft Defender for Cloud Apps](/defender-cloud-apps/investigate-anomaly-alerts#unusual-file-access-by-user). This detection profiles your environment and triggers alerts when users access multiple files from Microsoft SharePoint or Microsoft OneDrive. An alert is triggered only if the number of accessed files is uncommon for the user and the files might contain sensitive information|
+| Mass Access to Sensitive Files | Offline | This detection is discovered by [Microsoft Defender for Cloud Apps](/defender-cloud-apps/investigate-anomaly-alerts#unusual-file-access-by-user). This detection looks at your environment and triggers alerts when users access multiple files from Microsoft SharePoint or Microsoft OneDrive. An alert is triggered only if the number of accessed files is uncommon for the user and the files might contain sensitive information|
#### Nonpremium sign-in risk detections | Risk detection | Detection type | Description | | | | | | Additional risk detected | Real-time or Offline | This detection indicates that one of the premium detections was detected. Since the premium detections are visible only to Azure AD Premium P2 customers, they're titled "additional risk detected" for customers without Azure AD Premium P2 licenses. |
-| Anonymous IP address | Real-time | This risk detection type indicates sign-ins from an anonymous IP address (for example, Tor browser or anonymous VPN). These IP addresses are typically used by actors who want to hide their login telemetry (IP address, location, device, and so on) for potentially malicious intent. |
+| Anonymous IP address | Real-time | This risk detection type indicates sign-ins from an anonymous IP address (for example, Tor browser or anonymous VPN). These IP addresses are typically used by actors who want to hide their sign-in information (IP address, location, device, and so on) for potentially malicious intent. |
| Admin confirmed user compromised | Offline | This detection indicates an admin has selected 'Confirm user compromised' in the Risky users UI or using riskyUsers API. To see which admin has confirmed this user compromised, check the user's risk history (via UI or API). |
-| Azure AD threat intelligence | Offline | This risk detection type indicates user activity that is unusual for the given user or is consistent with known attack patterns based on Microsoft's internal and external threat intelligence sources. |
+| Azure AD threat intelligence | Offline | This risk detection type indicates user activity that is unusual for the user or consistent with known attack patterns. This detection is based on Microsoft's internal and external threat intelligence sources. |
### User-linked detections
Premium detections are visible only to Azure AD Premium P2 customers. Customers
| Risk detection | Detection type | Description | | | | | | Possible attempt to access Primary Refresh Token (PRT) | Offline | This risk detection type is detected by Microsoft Defender for Endpoint (MDE). A Primary Refresh Token (PRT) is a key artifact of Azure AD authentication on Windows 10, Windows Server 2016, and later versions, iOS, and Android devices. A PRT is a JSON Web Token (JWT) that's specially issued to Microsoft first-party token brokers to enable single sign-on (SSO) across the applications used on those devices. Attackers can attempt to access this resource to move laterally into an organization or perform credential theft. This detection will move users to high risk and will only fire in organizations that have deployed MDE. This detection is low-volume and will be seen infrequently by most organizations. However, when it does occur it's high risk and users should be remediated. |
-| Anomalous user activity | Offline | This risk detection indicates that suspicious patterns of activity have been identified for an authenticated user. The post-authentication behavior for users is assessed for anomalies based on an action or sequence of actions occurring for the account, along with any sign-in risk detected. |
+| Anomalous user activity | Offline | This risk detection indicates that suspicious patterns of activity have been identified for an authenticated user. The post-authentication behavior of users is assessed for anomalies. This behavior is based on actions occurring for the account, along with any sign-in risk detected. |
#### Nonpremium user risk detections
Premium detections are visible only to Azure AD Premium P2 customers. Customers
| | | | | Additional risk detected | Real-time or Offline | This detection indicates that one of the premium detections was detected. Since the premium detections are visible only to Azure AD Premium P2 customers, they're titled "additional risk detected" for customers without Azure AD Premium P2 licenses. | | Leaked credentials | Offline | This risk detection type indicates that the user's valid credentials have been leaked. When cybercriminals compromise valid passwords of legitimate users, they often share those credentials. This sharing is typically done by posting publicly on the dark web, paste sites, or by trading and selling the credentials on the black market. When the Microsoft leaked credentials service acquires user credentials from the dark web, paste sites, or other sources, they're checked against Azure AD users' current valid credentials to find valid matches. For more information about leaked credentials, see [Common questions](#common-questions). |
-| Azure AD threat intelligence | Offline | This risk detection type indicates user activity that is unusual for the given user or is consistent with known attack patterns based on Microsoft's internal and external threat intelligence sources. |
+| Azure AD threat intelligence | Offline | This risk detection type indicates user activity that is unusual for the user or consistent with known attack patterns. This detection is based on Microsoft's internal and external threat intelligence sources. |
## Common questions
Premium detections are visible only to Azure AD Premium P2 customers. Customers
Identity Protection categorizes risk into three tiers: low, medium, and high. When configuring [custom Identity protection policies](./concept-identity-protection-policies.md#custom-conditional-access-policy), you can also configure it to trigger upon **No risk** level. No Risk means there's no active indication that the user's identity has been compromised.
-While Microsoft doesn't provide specific details about how risk is calculated, we'll say that each level brings higher confidence that the user or sign-in is compromised. For example, something like one instance of unfamiliar sign-in properties for a user might not be as threatening as leaked credentials for another user.
+Microsoft doesn't provide specific details about how risk is calculated. Each level of risk brings higher confidence that the user or sign-in is compromised. For example, something like one instance of unfamiliar sign-in properties for a user might not be as threatening as leaked credentials for another user.
### Password hash synchronization
Risk detections like leaked credentials require the presence of password hashes
### Why are there risk detections generated for disabled user accounts?
-Disabled user accounts can be re-enabled. If the credentials of a disabled account are compromised, and the account gets re-enabled, bad actors might use those credentials to gain access. That is why, Identity Protection generates risk detections for suspicious activities against disabled user accounts to alert customers about potential account compromise. If an account is no longer in use and wont be re-enabled, customers should consider deleting it to prevent compromise. No risk detections are generated for deleted accounts.
+Disabled user accounts can be re-enabled. If the credentials of a disabled account are compromised, and the account gets re-enabled, bad actors might use those credentials to gain access. Identity Protection generates risk detections for suspicious activities against disabled user accounts to alert customers about potential account compromise. If an account is no longer in use and won't be re-enabled, customers should consider deleting it to prevent compromise. No risk detections are generated for deleted accounts.
### Leaked credentials
Microsoft finds leaked credentials in various places, including:
Leaked credentials are processed anytime Microsoft finds a new, publicly available batch. Because of the sensitive nature, the leaked credentials are deleted shortly after processing. Only new leaked credentials found after you enable password hash synchronization (PHS) will be processed against your tenant. Verifying against previously found credential pairs isn't done.
-#### I have not seen any leaked credential risk events for quite some time?
+#### I haven't seen any leaked credential risk events for quite some time?
If you haven't seen any leaked credential risk events, it is because of the following reasons:
active-directory Managed Identities Status https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/managed-identities-status.md
The following Azure services support managed identities for Azure resources:
| Azure Import/Export | [Use customer-managed keys in Azure Key Vault for Import/Export service](../../import-export/storage-import-export-encryption-key-portal.md) | Azure IoT Hub | [IoT Hub support for virtual networks with Private Link and Managed Identity](../../iot-hub/virtual-network-support.md) | | Azure Kubernetes Service (AKS) | [Use managed identities in Azure Kubernetes Service](../../aks/use-managed-identity.md) |
+| Azure Load Testing | [Use managed identities for Azure Load Testing](../../load-testing/how-to-use-a-managed-identity.md) |
| Azure Logic Apps | [Authenticate access to Azure resources using managed identities in Azure Logic Apps](../../logic-apps/create-managed-service-identity.md) | | Azure Log Analytics cluster | [Azure Monitor customer-managed key](../../azure-monitor/logs/customer-managed-keys.md) | Azure Machine Learning Services | [Use Managed identities with Azure Machine Learning](../../machine-learning/how-to-use-managed-identities.md?tabs=python) |
active-directory Ideagen Cloud Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/ideagen-cloud-provisioning-tutorial.md
The scenario outlined in this tutorial assumes that you already have the followi
1. Determine what data to [map between Azure AD and Ideagen Cloud](../app-provisioning/customize-application-attributes.md). ## Step 2. Configure Ideagen Cloud to support provisioning with Azure AD
-1. Login to [Ideagen Home](https://cktenant-homev2-scimtest1.ideagenhomedev.com). Click on the **Administration** icon to show the left hand side menu.
+1. Login to [Ideagen Home](https://cktenant-homev2-scimtest1.ideagenhomedev.com). Click on the **Administration** icon to show the left hand side menu.
![Screenshot of administration menu.](media\ideagen-cloud-provisioning-tutorial\admin.png)
-2. Navigate to **Authentication** page under the **Manage tenant** sub menu.
+1. Navigate to **Authentication** page under the **Manage tenant** sub menu.
![Screenshot of authentication page.](media\ideagen-cloud-provisioning-tutorial\authentication.png)
-3. Scroll down in the Authentication page to **Client Token** section and click on **Regenerate**.
+1. Click the **Edit** button and select the **Enabled** checkbox under automatic provisioning.
+
+ ![Screenshot of allow provisioning.](media\ideagen-cloud-provisioning-tutorial\allow-provisioning.png)
+
+1. Click the **Save** button to save the changes.
+
+1. Scroll down in the Authentication page to the **Client Token** section and click on **Regenerate**.
![Screenshot of token generation.](media\ideagen-cloud-provisioning-tutorial\generate-token.png)
-4. **Copy** and save the Bearer Token. This value will be entered in the Secret Token * field in the Provisioning tab of your Ideagen Cloud application in the Azure portal.
+1. **Copy** and save the Bearer Token. This value will be entered in the Secret Token * field in the Provisioning tab of your Ideagen Cloud application in the Azure portal.
![Screenshot of copying token.](media\ideagen-cloud-provisioning-tutorial\copy-token.png)
+1. Locate the **SCIM URL** and keep the value for later use. This value will be used as Tenant URL when configuring automatic user provisioning in Azure portal.
+ ## Step 3. Add Ideagen Cloud from the Azure AD application gallery Add Ideagen Cloud from the Azure AD application gallery to start managing provisioning to Ideagen Cloud. If you have previously setup Ideagen Cloud for SSO, you can use the same application. However it is recommended that you create a separate app when testing out the integration initially. Learn more about adding an application from the gallery [here](../manage-apps/add-application-portal.md).
aks Custom Certificate Authority https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/custom-certificate-authority.md
Title: Custom certificate authority (CA) in Azure Kubernetes Service (AKS) (preview) description: Learn how to use a custom certificate authority (CA) in an Azure Kubernetes Service (AKS) cluster. --++ Last updated 4/12/2022
aks Dapr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/dapr.md
Previously updated : 07/21/2022- Last updated : 08/12/2022+ # Dapr extension for Azure Kubernetes Service (AKS) and Arc-enabled Kubernetes
- Building event-driven apps with pub-sub - Building applications that are portable across multiple cloud services and hosts (for example, Kubernetes vs. a VM)
-By using the Dapr extension to provision Dapr on your AKS or Arc-enabled Kubernetes cluster, you eliminate the overhead of downloading Dapr tooling and manually installing and managing the runtime on your AKS cluster. Additionally, the extension offers support for all [native Dapr configuration capabilities][dapr-configuration-options] through simple command-line arguments.
+[By using the Dapr extension to provision Dapr on your AKS or Arc-enabled Kubernetes cluster](../azure-arc/kubernetes/conceptual-extensions.md), you eliminate the overhead of downloading Dapr tooling and manually installing and managing the runtime on your AKS cluster. Additionally, the extension offers support for all [native Dapr configuration capabilities][dapr-configuration-options] through simple command-line arguments.
> [!NOTE] > If you plan on installing Dapr in a Kubernetes production environment, see the [Dapr guidelines for production usage][kubernetes-production] documentation page.
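As a rough sketch of what provisioning through the extension looks like (the cluster and resource group names are placeholders, and the Az.KubernetesConfiguration cluster-extension cmdlets are assumed to be available):

```powershell
# Minimal sketch (placeholder names): install the Dapr cluster extension on an
# existing AKS cluster with the Az.KubernetesConfiguration module.
New-AzKubernetesExtension -Name "dapr" `
    -ExtensionType "Microsoft.Dapr" `
    -ClusterType ManagedClusters `
    -ClusterName "myAKSCluster" `
    -ResourceGroupName "myResourceGroup"
```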
Azure + open source components are supported. Alpha and beta components are supp
### Clouds/regions
-Global Azure cloud is supported with Arc support on the regions listed by [Azure Products by Region][supported-cloud-regions].
+Global Azure cloud is supported with Arc support on the following regions:
+
+| Region | AKS support | Arc for Kubernetes support |
+| | -- | -- |
+| `australiaeast` | :heavy_check_mark: | :heavy_check_mark: |
+| `australiasoutheast` | :heavy_check_mark: | :x: |
+| `canadacentral` | :heavy_check_mark: | :heavy_check_mark: |
+| `canadaeast` | :heavy_check_mark: | :heavy_check_mark: |
+| `centralindia` | :heavy_check_mark: | :heavy_check_mark: |
+| `centralus` | :heavy_check_mark: | :heavy_check_mark: |
+| `eastasia` | :heavy_check_mark: | :heavy_check_mark: |
+| `eastus` | :heavy_check_mark: | :heavy_check_mark: |
+| `eastus2` | :heavy_check_mark: | :heavy_check_mark: |
+| `eastus2euap` | :x: | :heavy_check_mark: |
+| `francecentral` | :heavy_check_mark: | :heavy_check_mark: |
+| `germanywestcentral` | :heavy_check_mark: | :heavy_check_mark: |
+| `japaneast` | :heavy_check_mark: | :heavy_check_mark: |
+| `koreacentral` | :heavy_check_mark: | :heavy_check_mark: |
+| `northcentralus` | :heavy_check_mark: | :heavy_check_mark: |
+| `northeurope` | :heavy_check_mark: | :heavy_check_mark: |
+| `norwayeast` | :heavy_check_mark: | :x: |
+| `southafricanorth` | :heavy_check_mark: | :x: |
+| `southcentralus` | :heavy_check_mark: | :heavy_check_mark: |
+| `southeastasia` | :heavy_check_mark: | :heavy_check_mark: |
+| `swedencentral` | :heavy_check_mark: | :heavy_check_mark: |
+| `switzerlandnorth` | :heavy_check_mark: | :heavy_check_mark: |
+| `uksouth` | :heavy_check_mark: | :heavy_check_mark: |
+| `westcentralus` | :heavy_check_mark: | :heavy_check_mark: |
+| `westeurope` | :heavy_check_mark: | :heavy_check_mark: |
+| `westus` | :heavy_check_mark: | :heavy_check_mark: |
+| `westus2` | :heavy_check_mark: | :heavy_check_mark: |
+| `westus3` | :heavy_check_mark: | :heavy_check_mark: |
+ ## Prerequisites
aks Enable Fips Nodes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/enable-fips-nodes.md
Title: Enable Federal Information Process Standard (FIPS) for Azure Kubernetes Service (AKS) node pools description: Learn how to enable Federal Information Process Standard (FIPS) for Azure Kubernetes Service (AKS) node pools.--++ Last updated 07/19/2022
aks Ingress Basic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/ingress-basic.md
Title: Create an ingress controller in Azure Kubernetes Service (AKS) description: Learn how to create and configure an ingress controller in an Azure Kubernetes Service (AKS) cluster.--++ Last updated 05/17/2022
aks Ingress Tls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/ingress-tls.md
Title: Use TLS with an ingress controller on Azure Kubernetes Service (AKS)
description: Learn how to install and configure an ingress controller that uses TLS in an Azure Kubernetes Service (AKS) cluster. --++ Last updated 05/18/2022
aks Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/policy-reference.md
Title: Built-in policy definitions for Azure Kubernetes Service description: Lists Azure Policy built-in policy definitions for Azure Kubernetes Service. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/08/2022 Last updated : 08/16/2022
aks Use Labels https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-labels.md
Title: Use labels in an Azure Kubernetes Service (AKS) cluster description: Learn how to use labels in an Azure Kubernetes Service (AKS) cluster.--++ Last updated 03/03/2022
analysis-services Analysis Services Datasource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/analysis-services-datasource.md
Data sources and connectors shown in Get Data or Table Import Wizard in Visual S
## Other data sources
+Connecting to on-premises data sources from an Azure Analysis Services server requires an [On-premises gateway](analysis-services-gateway.md). When using a gateway, 64-bit providers are required.
+ |Data source | In-memory | DirectQuery |Notes | | | | | | |Access Database | Yes | No | |
Data sources and connectors shown in Get Data or Table Import Wizard in Visual S
|Analysis Services | Yes | No | | |Analytics Platform System | Yes | No | | |CSV file |Yes | No | |
-|Dynamics 365 | Yes | No | <sup>[6](#tab1400b)</sup> |
+|Dynamics 365 | Yes | No | <sup>[6](#tab1400b)</sup>, <sup>[12](#tds)</sup> |
|Excel workbook | Yes | No | | |Exchange | Yes | No | <sup>[6](#tab1400b)</sup> | |Folder |Yes | No | <sup>[6](#tab1400b)</sup> |
Data sources and connectors shown in Get Data or Table Import Wizard in Visual S
<a name="instgw">8</a> - If specifying MSOLEDBSQL as the data provider, it may be necessary to download and install the [Microsoft OLE DB Driver for SQL Server](/sql/connect/oledb/oledb-driver-for-sql-server) on the same computer as the On-premises data gateway. <a name="oracle">9</a> - For tabular 1200 models, or as a *provider* data source in tabular 1400+ models, specify Oracle Data Provider for .NET. If specified as a structured data source, be sure to [enable Oracle managed provider](#enable-oracle-managed-provider). <a name="teradata">10</a> - For tabular 1200 models, or as a *provider* data source in tabular 1400+ models, specify Teradata Data Provider for .NET.
-<a name="filesSP">11</a> - Files in on-premises SharePoint are not supported.
-
-Connecting to on-premises data sources from an Azure Analysis Services server require an [On-premises gateway](analysis-services-gateway.md). When using a gateway, 64-bit providers are required.
+<a name="filesSP">11</a> - Files in on-premises SharePoint are not supported.
+<a name="tds">12</a> - Azure Analysis Services does not support direct connections to the Dynamics 365 [Dataverse TDS endpoint](/power-apps/developer/data-platform/dataverse-sql-query). When connecting to this data source from Azure Analysis Services, you must use an On-premises Data Gateway, and refresh the tokens manually.
## Understanding providers
api-management Api Management Howto Disaster Recovery Backup Restore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-disaster-recovery-backup-restore.md
Title: Implement disaster recovery using backup and restore in API Management
+ Title: Backup and restore your Azure API Management instance for disaster recovery
-description: Learn how to use backup and restore to perform disaster recovery in Azure API Management.
+description: Learn how to use backup and restore operations in Azure API Management to carry out your disaster recovery strategy.
Previously updated : 10/03/2021- Last updated : 07/27/2022+
By publishing and managing your APIs via Azure API Management, you're taking advantage of fault tolerance and infrastructure capabilities that you'd otherwise design, implement, and manage manually. The Azure platform mitigates a large fraction of potential failures at a fraction of the cost.
-To recover from availability problems that affect the region that hosts your API Management service, be ready to reconstitute your service in another region at any time. Depending on your recovery time objective, you might want to keep a standby service in one or more regions. You might also try to maintain their configuration and content in sync with the active service according to your recovery point objective. The service backup and restore features provide the necessary building blocks for implementing disaster recovery strategy.
+To recover from availability problems that affect your API Management service, be ready to reconstitute your service in another region at any time. Depending on your recovery time objective, you might want to keep a standby service in one or more regions. You might also try to maintain their configuration and content in sync with the active service according to your recovery point objective. The API management backup and restore capabilities provide the necessary building blocks for implementing disaster recovery strategy.
Backup and restore operations can also be used for replicating API Management service configuration between operational environments, for example, development and staging. Beware that runtime data such as users and subscriptions will be copied as well, which might not always be desirable.
-This guide shows how to automate backup and restore operations and how to ensure successful authenticating of backup and restore requests by Azure Resource Manager.
+This article shows how to automate backup and restore operations of your API Management instance using an external storage account. The steps shown here use either the [Backup-AzApiManagement](/powershell/module/az.apimanagement/backup-azapimanagement) and [Restore-AzApiManagement](/powershell/module/az.apimanagement/restore-azapimanagement) Azure PowerShell cmdlets, or the [Api Management Service - Backup](/rest/api/apimanagement/current-ga/api-management-service/backup) and [Api Management Service - Restore](/rest/api/apimanagement/current-ga/api-management-service/restore) REST APIs.
-> [!IMPORTANT]
-> Restore operation doesn't change custom hostname configuration of the target service. We recommend to use the same custom hostname and TLS certificate for both active and standby services, so that, after restore operation completes, the traffic can be re-directed to the standby instance by a simple DNS CNAME change.
->
-> Backup operation does not capture pre-aggregated log data used in reports shown on the **Analytics** blade in the Azure portal.
> [!WARNING] > Each backup expires after 30 days. If you attempt to restore a backup after the 30-day expiration period has expired, the restore will fail with a `Cannot restore: backup expired` message.
+> [!IMPORTANT]
+> Restore operation doesn't change custom hostname configuration of the target service. We recommend to use the same custom hostname and TLS certificate for both active and standby services, so that, after restore operation completes, the traffic can be re-directed to the standby instance by a simple DNS CNAME change.
++ [!INCLUDE [updated-for-az](../../includes/updated-for-az.md)] [!INCLUDE [premium-dev-standard-basic.md](../../includes/api-management-availability-premium-dev-standard-basic.md)]
-## Authenticating Azure Resource Manager requests
+## Prerequisites
-> [!IMPORTANT]
-> The REST API for backup and restore uses Azure Resource Manager and has a different authentication mechanism than the REST APIs for managing your API Management entities. The steps in this section describe how to authenticate Azure Resource Manager requests. For more information, see [Authenticating Azure Resource Manager requests](/rest/api/azure).
+* An API Management service instance. If you don't have one, see [Create an API Management service instance](get-started-create-service-instance.md).
+* An Azure storage account. If you don't have one, see [Create a storage account](../storage/common/storage-account-create.md).
+ * [Create a container](/storage/blobs/storage-quickstart-blobs-portal.md#create-a-container) in the storage account to hold the backup data.
+
+* The latest version of Azure PowerShell, if you plan to use Azure PowerShell cmdlets. If you haven't already, [install Azure PowerShell](/powershell/azure/install-az-ps).
-All of the tasks that you do on resources using the Azure Resource Manager must be authenticated with Azure Active Directory using the following steps:
+## Configure storage account access
+When running a backup or restore operation, you need to configure access to the storage account. API Management supports two storage access mechanisms: an Azure Storage access key, or an API Management managed identity.
-- Add an application to the Azure Active Directory tenant.-- Set permissions for the application that you added.-- Get the token for authenticating requests to Azure Resource Manager.
+### Configure storage account access key
-### Create an Azure Active Directory application
+Azure generates two 512-bit storage account access keys for each storage account. These keys can be used to authorize access to data in your storage account via Shared Key authorization. To view, retrieve, and manage the keys, see [Manage storage account access keys](../storage/common/storage-account-keys-manage.md?tabs=azure-portal).
-1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Using the subscription that contains your API Management service instance, navigate to the [Azure portal - App registrations](https://portal.azure.com/#blade/Microsoft_AAD_RegisteredApps/ApplicationsListBlade) to register an app in Active Directory.
- > [!NOTE]
- > If the Azure Active Directory default directory isn't visible to your account, contact the administrator of the Azure subscription to grant the required permissions to your account.
-1. Select **+ New registration**.
-1. On the **Register an application** page, set the values as follows:
-
- * Set **Name** to a meaningful name.
- * Set **Supported account types** to **Accounts in this organizational directory only**.
- * In **Redirect URI** enter a placeholder URL such as `https://resources`. It's a required field, but the value isn't used later.
- * Select **Register**.
+### Configure API Management managed identity
-### Add permissions
+> [!NOTE]
+> Using an API Management managed identity for storage operations during backup and restore is supported in API Management REST API version `2021-04-01-preview` or later.
-1. Once the application is created, select **API permissions** > **+ Add a permission**.
-1. Select **Microsoft APIs**.
-1. Select **Azure Service Management**.
+1. Enable a system-assigned or user-assigned [managed identity for API Management](api-management-howto-use-managed-service-identity.md) in your API Management instance.
- :::image type="content" source="./media/api-management-howto-disaster-recovery-backup-restore/add-app-permission.png" alt-text="Screenshot that shows how to add app permissions.":::
+ * If you enable a user-assigned managed identity, take note of the identity's **Client ID**.
+ * If you will back up and restore to different API Management instances, enable a managed identity in both the source and target instances.
+1. Assign the identity the **Storage Blob Data Contributor** role, scoped to the storage account used for backup and restore. To assign the role, use the [Azure portal](../active-directory/managed-identities-azure-resources/howto-assign-access-portal.md) or other Azure tools.
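As an example (a sketch with placeholder resource names matching the later backup example), the role assignment for a system-assigned identity can be made with Azure PowerShell:

```powershell
# Minimal sketch: grant the API Management system-assigned identity
# "Storage Blob Data Contributor" on the backup storage account.
$apim = Get-AzApiManagement -ResourceGroupName "apimresourcegroup" -Name "myapim"
$storage = Get-AzStorageAccount -ResourceGroupName "storageresourcegroup" -Name "backupstorageaccount"

New-AzRoleAssignment -ObjectId $apim.Identity.PrincipalId `
    -RoleDefinitionName "Storage Blob Data Contributor" `
    -Scope $storage.Id
```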
-1. Click **Delegated Permissions** beside the newly added application, and check the box for **Access Azure Service Management as organization users (preview)**.
- :::image type="content" source="./media/api-management-howto-disaster-recovery-backup-restore/delegated-app-permission.png" alt-text="Screenshot that shows adding delegated app permissions.":::
+## Back up an API Management service
-1. Select **Add permissions**.
+### [PowerShell](#tab/powershell)
-### Configure your app
+[Sign in](/powershell/azure/authenticate-azureps) with Azure PowerShell.
-Before calling the APIs that generate the backup and restore, you need to get a token. The following example uses the [Microsoft.IdentityModel.Clients.ActiveDirectory](https://www.nuget.org/packages/Microsoft.IdentityModel.Clients.ActiveDirectory) NuGet package to retrieve the token.
+In the following examples:
-> [!IMPORTANT]
-> The [Microsoft.IdentityModel.Clients.ActiveDirectory](https://www.nuget.org/packages/Microsoft.IdentityModel.Clients.ActiveDirectory) NuGet package and Azure AD Authentication Library (ADAL) have been deprecated. No new features have been added since June 30, 2020. We strongly encourage you to upgrade, see the [migration guide](../active-directory/develop/msal-migration.md) for more details.
+* An API Management instance named *myapim* is in resource group *apimresourcegroup*.
+* A storage account named *backupstorageaccount* is in resource group *storageresourcegroup*. The storage account has a container named *backups*.
+* A backup blob will be created with the name *ContosoBackup.apimbackup*.
-```csharp
-using Microsoft.IdentityModel.Clients.ActiveDirectory;
-using System;
+Set variables in PowerShell:
-namespace GetTokenResourceManagerRequests
-{
- class Program
- {
- static void Main(string[] args)
- {
- var authenticationContext = new AuthenticationContext("https://login.microsoftonline.com/{tenant id}");
- var result = authenticationContext.AcquireTokenAsync("https://management.azure.com/", "{application id}", new Uri("{redirect uri}"), new PlatformParameters(PromptBehavior.Auto)).Result;
-
- if (result == null) {
- throw new InvalidOperationException("Failed to obtain the JWT token");
- }
-
- Console.WriteLine(result.AccessToken);
-
- Console.ReadLine();
- }
- }
-}
+```powershell
+$apiManagementName="myapim";
+$apiManagementResourceGroup="apimresourcegroup";
+$storageAccountName="backupstorageaccount";
+$storageResourceGroup="storageresourcegroup";
+$containerName="backups";
+$blobName="ContosoBackup.apimbackup"
```
-Replace `{tenant id}`, `{application id}`, and `{redirect uri}` using the following instructions:
-
-1. Replace `{tenant id}` with the tenant ID of the Azure Active Directory application you created. You can access the ID by clicking **App registrations** -> **Endpoints**.
-
- ![Endpoints][api-management-endpoint]
+### Access using storage access key
-2. Replace `{application id}` with the value you get by navigating to the **Settings** page.
-3. Replace the `{redirect uri}` with the value from the **Redirect URIs** tab of your Azure Active Directory application.
+```powershell
+$storageKey = (Get-AzStorageAccountKey -ResourceGroupName $storageResourceGroup -StorageAccountName $storageAccountName)[0].Value
- Once the values are specified, the code example should return a token similar to the following example:
+$storageContext = New-AzStorageContext -StorageAccountName $storageAccountName -StorageAccountKey $storageKey
- ![Token][api-management-arm-token]
+Backup-AzApiManagement -ResourceGroupName $apiManagementResourceGroup -Name $apiManagementName `
+ -StorageContext $storageContext -TargetContainerName $containerName -TargetBlobName $blobName
+```
- > [!NOTE]
- > The token may expire after a certain period. Execute the code sample again to generate a new token.
+### Access using managed identity
-## Accessing Azure Storage
-API Management uses an Azure Storage account that you specify for backup and restore operations. When running a backup or restore operation, you need to configure access to the storage account. API Management supports two storage access mechanisms: an Azure Storage access key (the default), or an API Management managed identity.
+To configure a managed identity in your API Management instance to access the storage account, see [Configure a managed identity](#configure-api-management-managed-identity), earlier in this article.
-### Configure storage account access key
+#### Access using system-assigned managed identity
-For steps, see [Manage storage account access keys](../storage/common/storage-account-keys-manage.md?tabs=azure-portal).
+```powershell
+$storageContext = New-AzStorageContext -StorageAccountName $storageAccountName
-### Configure API Management managed identity
+Backup-AzApiManagement -ResourceGroupName $apiManagementResourceGroup -Name $apiManagementName `
+ -StorageContext $storageContext -TargetContainerName $containerName `
+ -TargetBlobName $blobName -AccessType "SystemAssignedManagedIdentity"
+```
-> [!NOTE]
-> Using an API Management managed identity for storage operations during backup and restore requires API Management REST API version `2021-04-01-preview` or later.
+#### Access using user-assigned managed identity
-1. Enable a system-assigned or user-assigned [managed identity for API Management](api-management-howto-use-managed-service-identity.md) in your API Management instance.
+In this example, a user-assigned managed identity named *myidentity* is in resource group *identityresourcegroup*.
- * If you enable a user-assigned managed identity, take note of the identity's **Client ID**.
- * If you will back up and restore to different API Management instances, enable a managed identity in both the source and target instances.
-1. Assign the identity the **Storage Blob Data Contributor** role, scoped to the storage account used for backup and restore. To assign the role, use the [Azure portal](../active-directory/managed-identities-azure-resources/howto-assign-access-portal.md) or other Azure tools.
+```powershell
+$identityName = "myidentity";
+$identityResourceGroup = "identityresourcegroup";
-## Calling the backup and restore operations
+$identityId = (Get-AzUserAssignedIdentity -Name $identityName -ResourceGroupName $identityResourceGroup).ClientId
-The REST APIs are [Api Management Service - Backup](/rest/api/apimanagement/current-ga/api-management-service/backup) and [Api Management Service - Restore](/rest/api/apimanagement/current-ga/api-management-service/restore).
+$storageContext = New-AzStorageContext -StorageAccountName $storageAccountName
-> [!NOTE]
-> Backup and restore operations can also be performed with PowerShell [_Backup-AzApiManagement_](/powershell/module/az.apimanagement/backup-azapimanagement) and [_Restore-AzApiManagement_](/powershell/module/az.apimanagement/restore-azapimanagement) commands respectively.
+Backup-AzApiManagement -ResourceGroupName $apiManagementResourceGroup -Name $apiManagementName `
+ -StorageContext $storageContext -TargetContainerName $containerName `
+ -TargetBlobName $blobName -AccessType "UserAssignedManagedIdentity" -IdentityClientId $identityId
+```
-Before calling the "backup and restore" operations described in the following sections, set the authorization request header for your REST call.
+Backup is a long-running operation that may take several minutes to complete.
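Optionally, you can confirm that the backup blob was written. The following sketch reuses the variables set earlier and a key-based storage context; if you use a context without a key, the signed-in account needs data-plane read access to the container:

```powershell
# Confirm that the backup blob exists in the container.
Get-AzStorageBlob -Container $containerName -Blob $blobName -Context $storageContext |
    Select-Object Name, Length, LastModified
```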
-```csharp
-request.Headers.Add(HttpRequestHeader.Authorization, "Bearer " + token);
-```
+### [REST](#tab/rest)
-### <a name="step1"> </a>Back up an API Management service
+See [Azure REST API reference](/rest/api/azure/) for information about authenticating and calling Azure REST APIs.
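For example, with Azure PowerShell you can acquire a bearer token for Azure Resource Manager and pass it in the `Authorization` header of the backup and restore requests shown below (a minimal sketch):

```powershell
# Sign in, then acquire an Azure Resource Manager access token for the current account.
Connect-AzAccount
$token = (Get-AzAccessToken -ResourceUrl "https://management.azure.com/").Token

# Use the token as a bearer token on the REST calls that follow.
$headers = @{ Authorization = "Bearer $token"; "Content-Type" = "application/json" }
```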
-To back up an API Management service issue the following HTTP request:
+To back up an API Management service, issue the following HTTP request:
```http POST https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.ApiManagement/service/{serviceName}/backup?api-version={api-version}
where:
- `subscriptionId` - ID of the subscription that holds the API Management service you're trying to back up - `resourceGroupName` - name of the resource group of your Azure API Management service - `serviceName` - the name of the API Management service you're making a backup of specified at the time of its creation-- `api-version` - a valid REST API version such as `2020-12-01` or `2021-04-01-preview`.
+- `api-version` - a valid REST API version such as `2021-08-01` or `2021-04-01-preview`.
In the body of the request, specify the target storage account name, blob container name, backup name, and the storage access type. If the storage container doesn't exist, the backup operation creates it.
-#### Access using storage access key
+### Access using storage access key
```json {
In the body of the request, specify the target storage account name, blob contai
} ```
-#### Access using managed identity
+### Access using managed identity
> [!NOTE] > Using an API Management managed identity for storage operations during backup and restore requires API Management REST API version `2021-04-01-preview` or later.
-**Access using system-assigned managed identity**
+#### Access using system-assigned managed identity
```json {
In the body of the request, specify the target storage account name, blob contai
} ```
-**Access using user-assigned managed identity**
+#### Access using user-assigned managed identity
```json {
In the body of the request, specify the target storage account name, blob contai
Set the value of the `Content-Type` request header to `application/json`.
-Backup is a long-running operation that may take more than a minute to complete. If the request succeeded and the backup process began, you receive a `202 Accepted` response status code with a `Location` header. Make `GET` requests to the URL in the `Location` header to find out the status of the operation. While the backup is in progress, you continue to receive a `202 Accepted` status code. A Response code of `200 OK` indicates successful completion of the backup operation.
+Backup is a long-running operation that may take several minutes to complete. If the request succeeded and the backup process began, you receive a `202 Accepted` response status code with a `Location` header. Make `GET` requests to the URL in the `Location` header to find out the status of the operation. While the backup is in progress, you continue to receive a `202 Accepted` status code. A response code of `200 OK` indicates successful completion of the backup operation.
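As an illustration, the following PowerShell sketch issues the backup request and polls the `Location` header until the operation completes. The subscription ID, resource names, and access-key request body values are placeholders based on this article's examples; replace them with your own:

```powershell
# Hypothetical values - replace with your own.
$subscriptionId    = "00000000-0000-0000-0000-000000000000"
$resourceGroupName = "apimresourcegroup"
$serviceName       = "myapim"
$apiVersion        = "2021-08-01"

# Bearer token for Azure Resource Manager (see the earlier sketch).
$token   = (Get-AzAccessToken -ResourceUrl "https://management.azure.com/").Token
$headers = @{ Authorization = "Bearer $token" }

$uri = "https://management.azure.com/subscriptions/$subscriptionId/resourceGroups/$resourceGroupName" +
       "/providers/Microsoft.ApiManagement/service/$serviceName/backup?api-version=$apiVersion"

# Request body for access using a storage access key.
$body = @{
    storageAccount = "backupstorageaccount"
    containerName  = "backups"
    backupName     = "ContosoBackup.apimbackup"
    accessKey      = "<storage-access-key>"
} | ConvertTo-Json

# The service replies 202 Accepted with a Location header for the long-running operation.
$response  = Invoke-WebRequest -Method Post -Uri $uri -Headers $headers -Body $body -ContentType "application/json"
$statusUrl = [string]$response.Headers["Location"]

# Poll until the operation stops returning 202 Accepted; 200 OK means the backup finished.
do {
    Start-Sleep -Seconds 30
    $status = Invoke-WebRequest -Method Get -Uri $statusUrl -Headers $headers
} while ($status.StatusCode -eq 202)

Write-Output "Backup completed with status code $($status.StatusCode)"
```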
-### <a name="step2"> </a>Restore an API Management service
++
+## Restore an API Management service
+
+> [!CAUTION]
+> Avoid changes to the service configuration (for example, APIs, policies, developer portal appearance) while the restore operation is in progress. Changes **could be overwritten**.
+
+### [PowerShell](#tab/powershell)
+
+In the following examples:
+
+* An API Management instance named *myapim* is restored from the backup blob named *ContosoBackup.apimbackup* in storage account *backupstorageaccount*.
+* The backup blob is in a container named *backups*.
+
+Set variables in PowerShell:
+
+```powershell
+$apiManagementName="myapim";
+$apiManagementResourceGroup="apimresourcegroup";
+$storageAccountName="backupstorageaccount";
+$storageResourceGroup="storageresourcegroup";
+$containerName="backups";
+$blobName="ContosoBackup.apimbackup";
+```
+
+### Access using storage access key
+
+```powershell
+$storageKey = (Get-AzStorageAccountKey -ResourceGroupName $storageResourceGroup -StorageAccountName $storageAccountName)[0].Value
+
+$storageContext = New-AzStorageContext -StorageAccountName $storageAccountName -StorageAccountKey $storageKey
+
+Restore-AzApiManagement -ResourceGroupName $apiManagementResourceGroup -Name $apiManagementName `
+ -StorageContext $storageContext -SourceContainerName $containerName -SourceBlobName $blobName
+```
+
+### Access using managed identity
+
+To configure a managed identity in your API Management instance to access the storage account, see [Configure a managed identity](#configure-api-management-managed-identity), earlier in this article.
+
+#### Access using system-assigned managed identity
+
+```powershell
+$storageContext = New-AzStorageContext -StorageAccountName $storageAccountName
+
+Restore-AzApiManagement -ResourceGroupName $apiManagementResourceGroup -Name $apiManagementName `
+ -StorageContext $storageContext -SourceContainerName $containerName `
+ -SourceBlobName $blobName -AccessType "SystemAssignedManagedIdentity"
+```
+
+#### Access using user-assigned managed identity
+
+In this example, a user-assigned managed identity named *myidentity* is in resource group *identityresourcegroup*.
+
+```powershell
+$identityName = "myidentity";
+$identityResourceGroup = "identityresourcegroup";
+
+$identityId = (Get-AzUserAssignedIdentity -Name $identityName -ResourceGroupName $identityResourceGroup).ClientId
+
+$storageContext = New-AzStorageContext -StorageAccountName $storageAccountName
+
+Restore-AzApiManagement -ResourceGroupName $apiManagementResourceGroup -Name $apiManagementName `
+ -StorageContext $storageContext -SourceContainerName $containerName `
+ -SourceBlobName $blobName -AccessType "UserAssignedManagedIdentity" -IdentityClientId $identityId
+```
+
+Restore is a long-running operation that may take 45 minutes or more to complete.
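Because `Restore-AzApiManagement` typically blocks until the operation finishes, you can optionally watch progress from a second PowerShell session. This sketch assumes the instance's provisioning state is surfaced through the `ProvisioningState` property returned by `Get-AzApiManagement`:

```powershell
# From a second session: poll the service until provisioning returns to Succeeded.
do {
    Start-Sleep -Seconds 60
    $state = (Get-AzApiManagement -ResourceGroupName $apiManagementResourceGroup -Name $apiManagementName).ProvisioningState
    Write-Output "Provisioning state: $state"
} while ($state -ne "Succeeded")
```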
+
+### [REST](#tab/rest)
To restore an API Management service from a previously created backup, make the following HTTP request:
where:
- `subscriptionId` - ID of the subscription that holds the API Management service you're restoring a backup into - `resourceGroupName` - name of the resource group that holds the Azure API Management service you're restoring a backup into - `serviceName` - the name of the API Management service being restored into specified at its creation time-- `api-version` - a valid REST API version such as `2020-12-01` or `2021-04-01-preview`
+- `api-version` - a valid REST API version such as `2021-08-01` or `2021-04-01-preview`
In the body of the request, specify the existing storage account name, blob container name, backup name, and the storage access type.
-#### Access using storage access key
+### Access using storage access key
```json {
In the body of the request, specify the existing storage account name, blob cont
} ```
-#### Access using managed identity
+### Access using managed identity
> [!NOTE] > Using an API Management managed identity for storage operations during backup and restore requires API Management REST API version `2021-04-01-preview` or later.
-**Access using system-assigned managed identity**
+#### Access using system-assigned managed identity
```json {
In the body of the request, specify the existing storage account name, blob cont
} ```
-**Access using user-assigned managed identity**
+#### Access using user-assigned managed identity
```json {
In the body of the request, specify the existing storage account name, blob cont
Set the value of the `Content-Type` request header to `application/json`.
-Restore is a long-running operation that may take up to 30 or more minutes to complete. If the request succeeded and the restore process began, you receive a `202 Accepted` response status code with a `Location` header. Make 'GET' requests to the URL in the `Location` header to find out the status of the operation. While the restore is in progress, you continue to receive a `202 Accepted` status code. A response code of `200 OK` indicates successful completion of the restore operation.
+Restore is a long-running operation that may take 30 minutes or more to complete. If the request succeeded and the restore process began, you receive a `202 Accepted` response status code with a `Location` header. Make `GET` requests to the URL in the `Location` header to find out the status of the operation. While the restore is in progress, you continue to receive a `202 Accepted` status code. A response code of `200 OK` indicates successful completion of the restore operation.
-> [!IMPORTANT]
-> **The SKU** of the service being restored into **must match** the SKU of the backed-up service being restored.
->
-> **Changes** made to the service configuration (for example, APIs, policies, developer portal appearance) while restore operation is in progress **could be overwritten**.
+
-## Constraints when making backup or restore request
+## Constraints
-- While backup is in progress, **avoid management changes in the service** such as SKU upgrade or downgrade, change in domain name, and more. - Restore of a **backup is guaranteed only for 30 days** since the moment of its creation.
+- While backup is in progress, **avoid management changes in the service**, such as a pricing tier upgrade or downgrade, a change in domain name, and so on.
- **Changes** made to the service configuration (for example, APIs, policies, and developer portal appearance) while backup operation is in process **might be excluded from the backup and will be lost**.--- [Cross-Origin Resource Sharing (CORS)](/rest/api/storageservices/cross-origin-resource-sharing--cors--support-for-the-azure-storage-services) should **not** be enabled on the Blob Service in the Azure Storage Account.-- **The SKU** of the service being restored into **must match** the SKU of the backed-up service being restored.
+- Backup doesn't capture pre-aggregated log data used in reports shown on the **Analytics** window in the Azure portal.
+- [Cross-Origin Resource Sharing (CORS)](/rest/api/storageservices/cross-origin-resource-sharing--cors--support-for-the-azure-storage-services) should **not** be enabled on the Blob service in the storage account.
+- **The pricing tier** of the service being restored into **must match** the pricing tier of the backed-up service being restored.
## Storage networking constraints ### Access using storage access key
-If the storage account is **[firewall][azure-storage-ip-firewall] enabled** and a storage key is used for access, then the customer must **Allow** the set of [Azure API Management control plane IP addresses][control-plane-ip-address] on their storage account for backup or restore to work. The storage account can be in any Azure region except the one where the API Management service is located. For example, if the API Management service is in West US, then the Azure Storage account can be in West US 2 and the customer needs to open the control plane IP 13.64.39.16 (API Management control plane IP of West US) in the firewall. This is because the requests to Azure Storage are not SNATed to a public IP from compute (Azure API Management control plane) in the same Azure region. Cross-region storage requests will be SNATed to the public IP address.
+If the storage account is **[firewall][azure-storage-ip-firewall] enabled** and a storage key is used for access, then the customer must **Allow** the set of [Azure API Management control plane IP addresses][control-plane-ip-address] on their storage account for backup or restore to work. The storage account can be in any Azure region except the one where the API Management service is located. For example, if the API Management service is in West US, then the Azure Storage account can be in West US 2 and the customer needs to open the control plane IP 13.64.39.16 (API Management control plane IP of West US) in the firewall. This is because the requests to Azure Storage aren't SNATed to a public IP from compute (Azure API Management control plane) in the same Azure region. Cross-region storage requests will be SNATed to the public IP address.
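For example, the control plane IP could be allowed through the storage account firewall with Azure PowerShell. A sketch using the hypothetical storage account from this article and the West US control plane IP mentioned above:

```powershell
# Allow the API Management control plane IP (West US in this example) through the storage account firewall.
Add-AzStorageAccountNetworkRule -ResourceGroupName "storageresourcegroup" `
    -Name "backupstorageaccount" -IPAddressOrRange "13.64.39.16"
```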
### Access using managed identity
If an API Management system-assigned managed identity is used to access a firewa
- [Protocols and ciphers](api-management-howto-manage-protocols-ciphers.md) settings. - [Developer portal](developer-portal-faq.md#is-the-portals-content-saved-with-the-backuprestore-functionality-in-api-management) content.
-The frequency with which you perform service backups affect your recovery point objective. To minimize it, we recommend implementing regular backups and performing on-demand backups after you make changes to your API Management service.
+The frequency with which you perform service backups affects your recovery point objective. To minimize it, we recommend implementing regular backups and performing on-demand backups after you make changes to your API Management service.
## Next steps
Check out the following related resources for the backup/restore process:
- [Automating API Management Backup and Restore with Logic Apps](https://github.com/Azure/api-management-samples/tree/master/tutorials/automating-apim-backup-restore-with-logic-apps) - [How to move Azure API Management across regions](api-management-howto-migrate.md)-
-API Management **Premium** tier also supports [zone redundancy](../availability-zones/migrate-api-mgt.md), which provides resiliency and high availability to a service instance in a specific Azure region (location).
+- API Management **Premium** tier also supports [zone redundancy](../availability-zones/migrate-api-mgt.md), which provides resiliency and high availability to a service instance in a specific Azure region (location).
[backup an api management service]: #back-up-an-api-management-service [restore an api management service]: #restore-an-api-management-service
api-management Api Management Howto Migrate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-migrate.md
Last updated 08/20/2021
-#cusomerintent: As an Azure service administrator, I want to move my service resources to another Azure region.
+#customerintent: As an Azure service administrator, I want to move my service resources to another Azure region.
# How to move Azure API Management across regions
To move API Management instances from one Azure region to another, use the servi
### Option 1: Use a different API Management instance name 1. In the target region, create a new API Management instance with the same pricing tier as the source API Management instance. Use a different name for the new instance.
-1. [Back up](api-management-howto-disaster-recovery-backup-restore.md#-back-up-an-api-management-service) the existing API Management instance to the storage account.
-1. [Restore](api-management-howto-disaster-recovery-backup-restore.md#-restore-an-api-management-service) the source instance's backup to the new API Management instance.
+1. [Back up](api-management-howto-disaster-recovery-backup-restore.md#back-up-an-api-management-service) the existing API Management instance to the storage account.
+1. [Restore](api-management-howto-disaster-recovery-backup-restore.md#restore-an-api-management-service) the source instance's backup to the new API Management instance.
1. If you have a custom domain pointing to the source region API Management instance, update the custom domain CNAME to point to the new API Management instance. ### Option 2: Use the same API Management instance name
To move API Management instances from one Azure region to another, use the servi
> [!WARNING] > This option deletes the original API Management instance and results in downtime during the migration. Ensure that you have a valid backup before deleting the source instance.
-1. [Back up](api-management-howto-disaster-recovery-backup-restore.md#-back-up-an-api-management-service) the existing API Management instance to the storage account.
+1. [Back up](api-management-howto-disaster-recovery-backup-restore.md#back-up-an-api-management-service) the existing API Management instance to the storage account.
1. Delete the API Management instance in the source region. 1. Create a new API Management instance in the target region with the same name as the one in the source region.
-1. [Restore](api-management-howto-disaster-recovery-backup-restore.md#-restore-an-api-management-service) the source instance's backup to the new API Management instance in the target region.
+1. [Restore](api-management-howto-disaster-recovery-backup-restore.md#restore-an-api-management-service) the source instance's backup to the new API Management instance in the target region.
## Verify
api-management Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/policy-reference.md
Title: Built-in policy definitions for Azure API Management description: Lists Azure Policy built-in policy definitions for Azure API Management. These built-in policy definitions provide approaches to managing your Azure resources. Previously updated : 08/08/2022 Last updated : 08/16/2022
app-service Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/policy-reference.md
Title: Built-in policy definitions for Azure App Service description: Lists Azure Policy built-in policy definitions for Azure App Service. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/08/2022 Last updated : 08/16/2022
app-service Quickstart Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/quickstart-python.md
Last updated 03/22/2022
ms.devlang: python-+ # Quickstart: Deploy a Python (Django or Flask) web app to Azure App Service
app-service Tutorial Python Postgresql App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-python-postgresql-app.md
ms.devlang: python Last updated 03/09/2022-+ # Deploy a Python (Django or Flask) web app with PostgreSQL in Azure
attestation Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/attestation/policy-reference.md
Title: Built-in policy definitions for Azure Attestation description: Lists Azure Policy built-in policy definitions for Azure Attestation. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/08/2022 Last updated : 08/16/2022
automation Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/policy-reference.md
Title: Built-in policy definitions for Azure Automation description: Lists Azure Policy built-in policy definitions for Azure Automation. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/08/2022 Last updated : 08/16/2022
automation Create Azure Automation Account Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/quickstarts/create-azure-automation-account-portal.md
+
+ Title: Quickstart - Create an Azure Automation account using the portal
+description: This quickstart helps you to create a new Automation account using Azure portal.
+ Last updated : 10/26/2021+++
+#Customer intent: As an administrator, I want to create an Automation account so that I can further use the Automation services.
++
+# Quickstart: Create an Automation account using the Azure portal
+
+You can create an Azure [Automation account](../automation-security-overview.md) using the Azure portal, a browser-based user interface that provides access to a number of resources. One Automation account can manage resources across all regions and subscriptions for a given tenant. This quickstart guides you through creating an Automation account.
+
+## Prerequisites
+
+An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+
+## Create Automation account
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+1. From the top menu, select **+ Create a resource**.
+
+1. Under **Categories**, select **IT & Management Tools**, and then select **Automation**.
+
+ :::image type="content" source="./media/create-account-portal/automation-account-portal.png" alt-text="Locating Automation accounts in portal.":::
+
+Options for your new Automation account are organized into tabs in the **Create an Automation Account** page. The following sections describe each of the tabs and their options.
+
+### Basics
+
+On the **Basics** tab, provide the essential information for your Automation account. After you complete the **Basics** tab, you can choose to further customize your new Automation account by setting options on the other tabs, or you can select **Review + create** to accept the default options and proceed to validate and create the account.
+
+> [!NOTE]
+> By default, a system-assigned managed identity is enabled for the Automation account.
+
+The following table describes the fields on the **Basics** tab.
+
+| **Field** | **Required**<br> **or**<br> **optional** |**Description** |
+||||
+|Subscription|Required |From the drop-down list, select the Azure subscription for the account.|
+|Resource group|Required |From the drop-down list, select your existing resource group, or select **Create new**.|
+|Automation account name|Required |Enter a name that's unique for its location and resource group. Names for Automation accounts that have been deleted might not be immediately available. You can't change the account name after it has been entered in the user interface. |
+|Region|Required |From the drop-down list, select a region for the account. For an updated list of locations that you can deploy an Automation account to, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=automation&regions=all).|
+
+The following image shows a standard configuration for a new Automation account.
++
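If you prefer scripting to the portal, the same **Basics** settings can also be supplied with Azure PowerShell. The following is a sketch only, with hypothetical resource names; managed identity options can be enabled after creation as described in the next section:

```powershell
# Create the resource group (skip if it already exists) and the Automation account.
New-AzResourceGroup -Name "myResourceGroup" -Location "EastUS"
New-AzAutomationAccount -Name "myAutomationAccount" -ResourceGroupName "myResourceGroup" -Location "EastUS"
```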
+### Advanced
+
+On the **Advanced** tab, you can configure the managed identity option for your new Automation account. The user-assigned managed identity option can also be configured after the Automation account is created.
+
+For instructions on how to create a user-assigned managed identity, see [Create a user-assigned managed identity](../../active-directory/managed-identities-azure-resources/how-to-manage-ua-identity-portal.md#create-a-user-assigned-managed-identity).
+
+The following table describes the fields on the **Advanced** tab.
+
+| **Field** | **Required**<br> **or**<br> **optional** |**Description** |
+||||
+|System-assigned |Optional |An Azure Active Directory identity that is tied to the lifecycle of the Automation account. |
+|User-assigned |Optional |A managed identity represented as a standalone Azure resource that is managed separately from the resources that use it.|
+
+You can choose to enable managed identities later; in that case, the Automation account is created without one. To enable a managed identity after the account is created, see [Enable managed identity](enable-managed-identity.md). If you select both options, select the **Add user assigned identities** option for the user-assigned identity. On the **Select user assigned managed identity** page, select a subscription and add one or more user-assigned identities created in that subscription to assign to the Automation account.
+
+The following image shows a standard configuration for a new Automation account.
++
+### Networking
+
+On the **Networking** tab, you can connect to your Automation account either publicly (via public IP addresses) or privately, by using a private endpoint. The following image shows the connectivity configuration that you can define for a new Automation account.
+
+- **Public Access**: This default option provides a public endpoint for the Automation account that can receive traffic over the internet and doesn't require any additional configuration. However, we don't recommend it for private applications or secure environments. Instead, you can use the second option, **Private Access** (described next), to restrict access to Automation endpoints to authorized virtual networks only. Public access can coexist with a private endpoint enabled on the Automation account. If you select public access while creating the Automation account, you can add a private endpoint later from the **Networking** blade of the Automation account.
+
+- **Private Access**: This option provides a private endpoint for the Automation account that uses a private IP address from your virtual network. This network interface connects you privately and securely to the Automation account. You bring the service into your virtual network by enabling a private endpoint. This is the recommended configuration from a security point of view. However, it requires you to configure a Hybrid Runbook Worker connected to an Azure virtual network, and it currently doesn't support cloud jobs.
++
+### Tags
+
+On the **Tags** tab, you can specify Resource Manager tags to help organize your Azure resources. For more information, see [Tag resources, resource groups, and subscriptions for logical organization](../../azure-resource-manager/management/tag-resources.md).
+
+### Review + create tab
+
+When you navigate to the **Review + create** tab, Azure runs validation on the Automation account settings that you have chosen. If validation passes, you can proceed to create the Automation account.
+
+If validation fails, then the portal indicates which settings need to be modified.
+
+Review your new Automation account.
++
+## Clean up resources
+
+If you're not going to continue to use the Automation account, select **Delete** from the **Overview** page, and then select **Yes** when prompted.
+
+## Next steps
+
+In this Quickstart, you created an Automation account. To use managed identities with your Automation account, continue to the next Quickstart:
+
+> [!div class="nextstepaction"]
+> [Tutorial - Create Automation PowerShell runbook using managed identity](../learn/powershell-runbook-managed-identity.md)
azure-app-configuration Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/policy-reference.md
Title: Built-in policy definitions for Azure App Configuration description: Lists Azure Policy built-in policy definitions for Azure App Configuration. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/08/2022 Last updated : 08/16/2022
azure-arc Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/policy-reference.md
Title: Built-in policy definitions for Azure Arc-enabled Kubernetes description: Lists Azure Policy built-in policy definitions for Azure Arc-enabled Kubernetes. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/08/2022 Last updated : 08/16/2022 #
azure-arc Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/policy-reference.md
Title: Built-in policy definitions for Azure Arc-enabled servers description: Lists Azure Policy built-in policy definitions for Azure Arc-enabled servers (preview). These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/08/2022 Last updated : 08/16/2022
azure-cache-for-redis Cache How To Active Geo Replication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-how-to-active-geo-replication.md
Use the Azure CLI for creating a new cache and geo-replication group, or to add
#### Create new Enterprise instance in a new geo-replication group using Azure CLI
-This example creates a new Azure Cache for Redis Enterprise E10 cache instance called _Cache1_ in the East US region. Then, the cache is added to a new active geo-replication group called `replicationGroup`:
+This example creates a new Azure Cache for Redis Enterprise E10 cache instance called _Cache1_ in the East US region. Then, the cache is added to a new active geo-replication group called _replicationGroup_:
```azurecli-interactive az redisenterprise create --location "East US" --cluster-name "Cache1" --sku "Enterprise_E10" --resource-group "myResourceGroup" --group-nickname "replicationGroup" --linked-databases id="/subscriptions/34b6ecbd-ab5c-4768-b0b8-bf587aba80f6/resourceGroups/myResourceGroup/providers/Microsoft.Cache/redisEnterprise/Cache1/databases/default"
az redisenterprise create --location "East US" --cluster-name "Cache1" --sku "En
To configure active geo-replication properly, the ID of the cache instance being created must be added with the `--linked-databases` parameter. The ID is in the format:
-`/subscriptions/\<your-subscription-ID>/resourceGroups/\<your-resource-group-name>/providers/Microsoft.Cache/redisEnterprise/\<your-cache-name>/databases/default`
+`/subscriptions/<your-subscription-ID>/resourceGroups/<your-resource-group-name>/providers/Microsoft.Cache/redisEnterprise/<your-cache-name>/databases/default`
#### Create new Enterprise instance in an existing geo-replication group using Azure CLI This example creates a new Cache for Redis Enterprise E10 instance called _Cache2_ in the West US region. Then, the cache is added to the `replicationGroup` active geo-replication group created above. This way, it's linked in an active-active configuration with Cache1.
-<!-- love the simple, declarative sentences. I am once again add the full product name -->
```azurecli-interactive az redisenterprise create --location "West US" --cluster-name "Cache2" --sku "Enterprise_E10" --resource-group "myResourceGroup" --group-nickname "replicationGroup" --linked-databases id="/subscriptions/34b6ecbd-ab5c-4768-b0b8-bf587aba80f6/resourceGroups/myResourceGroup/providers/Microsoft.Cache/redisEnterprise/Cache1/databases/default" --linked-databases id="/subscriptions/34b6ecbd-ab5c-4768-b0b8-bf587aba80f6/resourceGroups/myResourceGroup/providers/Microsoft.Cache/redisEnterprise/Cache2/databases/default"
Use Azure PowerShell to create a new cache and geo-replication group, or to add
#### Create new Enterprise instance in a new geo-replication group using PowerShell
-This example creates a new Azure Cache for Redis Enterprise E10 cache instance called "Cache1" in the East US region. Then, the cache is added to a new active geo-replication group called `replicationGroup`:
+This example creates a new Azure Cache for Redis Enterprise E10 cache instance called "Cache1" in the East US region. Then, the cache is added to a new active geo-replication group called _replicationGroup_:
```powershell-interactive New-AzRedisEnterpriseCache -Name "Cache1" -ResourceGroupName "myResourceGroup" -Location "East US" -Sku "Enterprise_E10" -GroupNickname "replicationGroup" -LinkedDatabase '{id:"/subscriptions/34b6ecbd-ab5c-4768-b0b8-bf587aba80f6/resourceGroups/myResourceGroup/providers/Microsoft.Cache/redisEnterprise/Cache1/databases/default"}'
New-AzRedisEnterpriseCache -Name "Cache1" -ResourceGroupName "myResourceGroup" -
To configure active geo-replication properly, the ID of the cache instance being created must be added with the `-LinkedDatabase` parameter. The ID is in the format:
-`id:"/subscriptions/\<your-subscription-ID>/resourceGroups/\<your-resource-group-name>/providers/Microsoft.Cache/redisEnterprise/\<your-cache-name>/databases/default`
+`/subscriptions/<your-subscription-ID>/resourceGroups/<your-resource-group-name>/providers/Microsoft.Cache/redisEnterprise/<your-cache-name>/databases/default`
#### Create new Enterprise instance in an existing geo-replication group using PowerShell
azure-cache-for-redis Cache Ml https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-ml.md
Before deploying, you must define what is needed to run the model as a web servi
> > If the request data is in a format that is not usable by your model, the script can transform it into an acceptable format. It may also transform the response before returning it to the client. >
- > By default when packaging for functions, the input is treated as text. If you are interested in consuming the raw bytes of the input (for instance for Blob triggers), you should use [AMLRequest to accept raw data](../machine-learning/how-to-deploy-advanced-entry-script.md#binary-data).
+ > By default when packaging for functions, the input is treated as text. If you are interested in consuming the raw bytes of the input (for instance for Blob triggers), you should use [AMLRequest to accept raw data](../machine-learning/v1/how-to-deploy-advanced-entry-script.md#binary-data).
For the run function, ensure it connects to a Redis endpoint.
When `show_output=True`, the output of the Docker build process is shown. Once t
Save the value for **username** and one of the **passwords**.
-1. If you don't already have a resource group or app service plan to deploy the service, the these commands demonstrate how to create both:
+1. If you don't already have a resource group or app service plan to deploy the service, these commands demonstrate how to create both:
```azurecli-interactive az group create --name myresourcegroup --location "West Europe"
azure-cache-for-redis Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/policy-reference.md
Title: Built-in policy definitions for Azure Cache for Redis description: Lists Azure Policy built-in policy definitions for Azure Cache for Redis. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/08/2022 Last updated : 08/16/2022
azure-government Azure Services In Fedramp Auditscope https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/compliance/azure-services-in-fedramp-auditscope.md
recommendations: false Previously updated : 06/21/2022 Last updated : 08/15/2022 # Azure, Dynamics 365, Microsoft 365, and Power Platform services compliance scope
For current Azure Government regions and available services, see [Products avail
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and Power Platform cloud services in scope for FedRAMP High, DoD IL2, DoD IL4, DoD IL5, and DoD IL6 authorizations across Azure, Azure Government, and Azure Government Secret cloud environments. For other authorization details in Azure Government Secret and Azure Government Top Secret, contact your Microsoft account representative. ## Azure public services by audit scope
-*Last updated: February 2022*
+*Last updated: August 2022*
### Terminology used
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| [Notification Hubs](../../notification-hubs/index.yml) | &#x2705; | &#x2705; | | [Open Datasets](../../open-datasets/index.yml) | &#x2705; | &#x2705; | | [Peering Service](../../peering-service/index.yml) | &#x2705; | &#x2705; |
+| [Planned Maintenance for VMs](../../virtual-machines/maintenance-and-updates.md) | &#x2705; | &#x2705; |
| [Power Apps](/powerapps/) | &#x2705; | &#x2705; | | [Power Apps Portal](https://powerapps.microsoft.com/portals/) | &#x2705; | &#x2705; | | [Power Automate](/power-automate/) (formerly Microsoft Flow) | &#x2705; | &#x2705; |
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| [Public IP](../../virtual-network/ip-services/public-ip-addresses.md) | &#x2705; | &#x2705; | | [Resource Graph](../../governance/resource-graph/index.yml) | &#x2705; | &#x2705; | | **Service** | **FedRAMP High** | **DoD IL2** |
+| [Resource Mover](../../resource-mover/index.yml) | &#x2705; | &#x2705; |
+| [Route Server](../../route-server/index.yml) | &#x2705; | &#x2705; |
| [Scheduler](../../scheduler/index.yml) (replaced by [Logic Apps](../../logic-apps/index.yml)) | &#x2705; | &#x2705; | | [Service Bus](../../service-bus-messaging/index.yml) | &#x2705; | &#x2705; | | [Service Fabric](../../service-fabric/index.yml) | &#x2705; | &#x2705; |
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
**&ast;&ast;** FedRAMP High authorization for Azure Databricks is applicable to limited regions in Azure. To configure Azure Databricks for FedRAMP High use, contact your Microsoft or Databricks representative. ## Azure Government services by audit scope
-*Last updated: June 2022*
+*Last updated: August 2022*
### Terminology used
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| [Azure Sign-up portal](https://signup.azure.com/) | &#x2705; | &#x2705; | | | | | [Azure Stack Bridge](/azure-stack/operator/azure-stack-usage-reporting) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | [Azure Stack Edge](../../databox-online/index.yml) (formerly Data Box Edge) **&ast;** | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
+| [Azure Video Indexer](../../azure-video-indexer/index.yml) | &#x2705; | &#x2705; | | | |
| [Azure Virtual Desktop](../../virtual-desktop/index.yml) (formerly Windows Virtual Desktop) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | [Backup](../../backup/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | [Bastion](../../bastion/index.yml) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
azure-monitor Alerts Common Schema https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-common-schema.md
Title: Common alert schema for Azure Monitor alerts
-description: Understanding the common alert schema, why you should use it and how to enable it
+description: Understand the common alert schema, why you should use it, and how to enable it.
Last updated 03/14/2019
This article describes what the common alert schema is, the benefits of using it
## What is the common alert schema?
-The common alert schema standardizes the consumption experience for alert notifications in Azure today. Historically, the three alert types in Azure today (metric, log, and activity log) have had their own email templates, webhook schemas, etc. With the common alert schema, you can now receive alert notifications with a consistent schema.
+The common alert schema standardizes the consumption experience for alert notifications in Azure. Today, Azure has three alert types, metric, log, and activity log. Historically, they've had their own email templates and webhook schemas. With the common alert schema, you can now receive alert notifications with a consistent schema.
-Any alert instance describes **the resource that was affected** and **the cause of the alert**, and these instances are described in the common schema in the following sections:
+Any alert instance describes the resource that was affected and the cause of the alert. These instances are described in the common schema in the following sections:
-- **Essentials**: A set of **standardized fields**, common across all alert types, which describe **what resource** the alert is on along with additional common alert metadata (for example, severity or description).-- **Alert context**: A set of fields which describe the **cause of the alert**, with fields that vary **based on the alert type**. For example, a metric alert would have fields like the metric name and metric value in the alert context, whereas an activity log alert would have information about the event that generated the alert.
+- **Essentials**: Standardized fields, common across all alert types, describe what resource the alert is on along with other common alert metadata. Examples include severity or description.
+- **Alert context**: These fields describe the cause of the alert, with fields that vary based on the alert type. For example, a metric alert would have fields like the metric name and metric value in the alert context. An activity log alert would have information about the event that generated the alert.
-The typical integration scenarios we hear from customers involve the routing of the alert instance to the concerned team based on some pivot (for example, resource group), after which the responsible team starts working on it. With the common alert schema, you can have standardized routing logic across alert types by leveraging the essential fields, leaving the context fields as is for the concerned teams to investigate further.
+You might want to route the alert instance to a specific team based on a pivot such as a resource group. The common schema uses the essential fields to provide standardized routing logic for all alert types. The team can use the context fields for their investigation.
-This means that you can potentially have fewer integrations, making the process of managing and maintaining them a _much_ simpler task. Additionally, future alert payload enrichments (for example, customization, diagnostic enrichment, etc.) will only surface up in the common schema.
+As a result, you can potentially have fewer integrations, which makes the process of managing and maintaining them a much simpler task. Future alert payload enrichments like customization and diagnostic enrichment will only surface in the common schema.
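As an illustration, the following Azure Automation runbook sketch parses a common-schema payload delivered by an action group and routes on the resource group of the affected resource. The runbook and the team mapping are hypothetical; only the payload fields used (`schemaId`, `data.essentials`, `alertTargetIDs`, `severity`) come from the common schema.

```powershell
param (
    [Parameter(Mandatory = $true)]
    [object] $WebhookData   # Supplied when an action group invokes the runbook
)

# Parse the common alert schema payload.
$payload = $WebhookData.RequestBody | ConvertFrom-Json
if ($payload.schemaId -ne "azureMonitorCommonAlertSchema") {
    throw "Payload is not in the common alert schema."
}

$essentials = $payload.data.essentials

# Pivot on the resource group of the first affected resource.
$resourceGroup = ($essentials.alertTargetIDs[0] -split "/")[4]

# Hypothetical routing logic - replace with your own team mapping.
switch ($resourceGroup) {
    "rg-payments" { Write-Output "Route $($essentials.severity) alert to the payments team." }
    default       { Write-Output "Route alert for '$resourceGroup' to the default queue." }
}
```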
## What enhancements does the common alert schema bring?
-The common alert schema will primarily manifest itself in your alert notifications. The enhancements that you will see are listed below:
+You'll see the benefits of using a common alert schema in your alert notifications. A common alert schema provides these benefits:
| Action | Enhancements| |:|:|
-| Email | A consistent and detailed email template, allowing you to easily diagnose issues at a glance. Embedded deep-links to the alert instance on the portal and the affected resource ensure that you can quickly jump into the remediation process. |
-| Webhook/Logic App/Azure Function/Automation Runbook | A consistent JSON structure for all alert types, which allows you to easily build integrations across the different alert types. |
+| Email | A consistent and detailed email template. You can use it to easily diagnose issues at a glance. Embedded deep links to the alert instance on the portal and the affected resource ensure that you can quickly jump into the remediation process. |
+| Webhook/Azure Logic Apps/Azure Functions/Azure Automation runbook | A consistent JSON structure for all alert types. You can use it to easily build integrations across the different alert types. |
The new schema will also enable a richer alert consumption experience across both the Azure portal and the Azure mobile app in the immediate future.
-[Learn more about the schema definitions for Webhooks/Logic Apps/Azure Functions/Automation Runbooks.](./alerts-common-schema-definitions.md)
+Learn more about the [schema definitions for webhooks, Logic Apps, Azure Functions, and Automation runbooks](./alerts-common-schema-definitions.md).
> [!NOTE]
-> The following actions do not support the common alert schema: ITSM Connector.
+> The following actions don't support the common alert schema: ITSM Connector.
## How do I enable the common alert schema?
-You can opt in or opt out to the common alert schema through Action Groups, on both the portal and through the REST API. The toggle to switch to the new schema exists at an action level. For example, you have to separately opt in for an email action and a webhook action.
+Use action groups in the Azure portal or use the REST API to enable the common alert schema. You can enable a new schema at the action level. For example, you must separately opt in for an email action and a webhook action.
> [!NOTE]
-> 1. The following alert types support the common schema by default (no opt-in required):
-> - Smart detection alerts
-> 1. The following alert types currently do not support the common schema:
-> - Alerts generated by [VM insights](../vm/vminsights-overview.md)
+> Smart detection alerts support the common schema by default. No opt-in is required.
+>
+> Alerts generated by [VM insights](../vm/vminsights-overview.md) currently don't support the common schema.
+>
### Through the Azure portal
-![Common alert schema opt in](media/alerts-common-schema/portal-opt-in.png)
+![Screenshot that shows the common alert schema opt in.](media/alerts-common-schema/portal-opt-in.png)
-1. Open any existing or a new action in an action group.
-1. Select ΓÇÿYesΓÇÖ for the toggle to enable the common alert schema as shown.
+1. Open any existing action or a new action in an action group.
+1. Select **Yes** to enable the common alert schema.
### Through the Action Groups REST API
-You can also use the [Action Groups API](/rest/api/monitor/actiongroups) to opt in to the common alert schema. While making the [create or update](/rest/api/monitor/actiongroups/createorupdate) REST API call, you can set the flag "useCommonAlertSchema" to 'true' (to opt in) or 'false' (to opt out) for any of the following actions - email/webhook/logic app/Azure Function/automation runbook.
+You can also use the [Action Groups API](/rest/api/monitor/actiongroups) to opt in to the common alert schema. While you make the [create or update](/rest/api/monitor/actiongroups/createorupdate) REST API call, you can set the flag "useCommonAlertSchema" to `true` to opt in or `false` to opt out for email, webhook, Logic Apps, Azure Functions, or Automation runbook actions.
-For example, the following request body made to the [create or update](/rest/api/monitor/actiongroups/createorupdate) REST API will do the following:
+For example, the following request body made to the [create or update](/rest/api/monitor/actiongroups/createorupdate) REST API will:
-- Enable the common alert schema for the email action "John Doe's email"-- Disable the common alert schema for the email action "Jane Smith's email"-- Enable the common alert schema for the webhook action "Sample webhook"
+- Enable the common alert schema for the email action "John Doe's email."
+- Disable the common alert schema for the email action "Jane Smith's email."
+- Enable the common alert schema for the webhook action "Sample webhook."
```json {
For example, the following request body made to the [create or update](/rest/api
## Next steps -- [Common alert schema definitions for Webhooks/Logic Apps/Azure Functions/Automation Runbooks.](./alerts-common-schema-definitions.md)-- [Learn how to create a logic app that leverages the common alert schema to handle all your alerts.](./alerts-common-schema-integrations.md)
+- [Learn the common alert schema definitions for webhooks, Logic Apps, Azure Functions, and Automation runbooks](./alerts-common-schema-definitions.md)
+- [Learn how to create a logic app that uses the common alert schema to handle all your alerts](./alerts-common-schema-integrations.md)
azure-monitor Alerts Processing Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-processing-rules.md
Title: Alert processing rules for Azure Monitor alerts
-description: Understanding what alert processing rules in Azure Monitor are and how to configure and manage them.
+description: Understand what alert processing rules in Azure Monitor are and how to configure and manage them.
Last updated 2/23/2022
<a name="suppression-of-alerts"></a> > [!NOTE]
-> The previous name for alert processing rules was **action rules**. The Azure resource type of these rules remains **Microsoft.AlertsManagement/actionRules** for backward compatibility.
+> The previous name for alert processing rules was action rules. The Azure resource type of these rules remains **Microsoft.AlertsManagement/actionRules** for backward compatibility.
-Alert processing rules allow you to apply processing on **fired alerts**. You may be familiar with Azure Monitor alert rules, which are rules that generate new alerts. Alert processing rules are different; they are rules that modify the fired alerts themselves as they are being fired. You can use alert processing rules to add [action groups](./action-groups.md) or remove (suppress) action groups from your fired alerts. Alert processing rules can be applied to different resource scopes, from a single resource to an entire subscription. They can also allow you to apply various filters or have the rule work on a pre-defined schedule.
+Alert processing rules allow you to apply processing on fired alerts. You might be familiar with Azure Monitor alert rules, which are rules that generate new alerts. Alert processing rules are different. They're rules that modify the fired alerts themselves as they're being fired.
-## What are alert processing rules useful for?
+You can use alert processing rules to add [action groups](./action-groups.md) or remove (suppress) action groups from your fired alerts. You can apply alert processing rules to different resource scopes, from a single resource, or to an entire subscription. You can also use them to apply various filters or have the rule work on a predefined schedule.
-Some common use cases for alert processing rules include:
+Some common use cases for alert processing rules are described here.
-### Notification suppression during planned maintenance
+## Suppress notifications during planned maintenance
-Many customers set up a planned maintenance time for their resources, either on a one-off basis or on a regular schedule. The planned maintenance may cover a single resource like a virtual machine, or multiple resources like all virtual machines in a resource group. So, you may want to stop receiving alert notifications for those resources during the maintenance window. In other cases, you may prefer to not receive alert notifications at all outside of your business hours. Alert processing rules allow you to achieve that.
+Many customers set up a planned maintenance time for their resources, either on a one-time basis or on a regular schedule. The planned maintenance might cover a single resource, like a virtual machine, or multiple resources, like all virtual machines in a resource group. So, you might want to stop receiving alert notifications for those resources during the maintenance window. In other cases, you might prefer to not receive alert notifications outside of your business hours. Alert processing rules allow you to achieve that.
+
+You could alternatively suppress alert notifications by disabling the alert rules themselves at the beginning of the maintenance window. Then you can reenable them after the maintenance is over. In that case, the alerts won't fire in the first place. That approach has several limitations:
+
+ * This approach is only practical if the scope of the alert rule is exactly the scope of the resources under maintenance. For example, a single alert rule might cover multiple resources, but only a few of those resources are going through maintenance. So, if you disable the alert rule, you won't be alerted when the remaining resources covered by that rule run into issues.
+ * You might have many alert rules that cover the resource. Updating all of them is time consuming and error prone.
+ * You might have some alerts that aren't created by an alert rule at all, like alerts from Azure Backup.
-You could alternatively suppress alert notifications by disabling the alert rules themselves at the beginning of the maintenance window, and re-enabling them once the maintenance is over. In that case, the alerts won't fire in the first place. However, that approach has several limitations:
- * This approach is only practical if the scope of the alert rule is exactly the scope of the resources under maintenance. For example, a single alert rule might cover multiple resources, but only a few of those resources are going through maintenance. So, if you disable the alert rule, you will not be alerted when the remaining resources covered by that rule run into issues.
- * You may have many alert rules that cover the resource. Updating all of them is time consuming and error prone.
- * You might have some alerts that are not created by an alert rule at all, like alerts from Azure Backup.
-
In all these cases, an alert processing rule provides an easy way to achieve the notification suppression goal.
-### Management at scale
+## Management at scale
-Most customers tend to define a few action groups that are used repeatedly in their alert rules. For example, they may want to call a specific action group whenever any high severity alert is fired. As their number of alert rule grows, manually making sure that each alert rule has the right set of action groups is becoming harder.
+Most customers tend to define a few action groups that are used repeatedly in their alert rules. For example, they might want to call a specific action group whenever any high-severity alert is fired. As their number of alert rules grows, manually making sure that each alert rule has the right set of action groups becomes harder.
-Alert processing rules allow you to specify that logic in a single rule, instead of having to set it consistently in all your alert rules. They also cover alert types that are not generated by an alert rule.
+Alert processing rules allow you to specify that logic in a single rule, instead of having to set it consistently in all your alert rules. They also cover alert types that aren't generated by an alert rule.
-### Add action groups to all alert types
+## Add action groups to all alert types
Azure Monitor alert rules let you select which action groups will be triggered when their alerts are fired. However, not all Azure alert sources let you specify action groups. Some examples of such alerts include [Azure Backup alerts](../../backup/backup-azure-monitoring-built-in-monitor.md), [VM Insights guest health alerts](../vm/vminsights-health-alerts.md), [Azure Stack Edge](../../databox-online/azure-stack-edge-gpu-manage-device-event-alert-notifications.md), and Azure Stack Hub. For those alert types, you can use alert processing rules to add action groups. > [!NOTE]
-> Alert processing rules do not affect [Azure Service Health](../../service-health/service-health-overview.md) alerts.
+> Alert processing rules don't affect [Azure Service Health](../../service-health/service-health-overview.md) alerts.
-## Alert processing rule properties
+## Scope and filters for alert processing rules
<a name="filter-criteria"></a>
-An alert processing rule definition covers several aspects:
-
-### Which fired alerts are affected by this rule?
-
-**SCOPE**
-Each alert processing rule has a scope. A scope is a list of one or more specific Azure resources, or specific resource group, or an entire subscription. **The alert processing rule will apply to alerts that fired on resources within that scope**.
-
-**FILTERS**
-You can also define filters to narrow down which specific subset of alerts are affected within the scope. The available filters are:
-
-* **Alert Context (payload)** - the rule will apply only to alerts that contain any of the filter's strings within the [alert context](./alerts-common-schema-definitions.md#alert-context) section of the alert. This section includes fields specific to each alert type.
-* **Alert rule id** - the rule will apply only to alerts from a specific alert rule. The value should be the full resource ID, for example `/subscriptions/SUB1/resourceGroups/RG1/providers/microsoft.insights/metricalerts/MY-API-LATENCY`.
-You can locate the alert rule ID by opening a specific alert rule in the portal, clicking "Properties", and copying the "Resource ID" value. You can also locate it by listing your alert rules from PowerShell or CLI.
-* **Alert rule name** - the rule will apply only to alerts with this alert rule name. Can also be useful with a "Contains" operator.
-* **Description** - the rule will apply only to alerts that contain the specified string within the alert rule description field.
-* **Monitor condition** - the rule will apply only to alerts with the specified monitor condition, either "Fired" or "Resolved".
-* **Monitor service** - the rule will apply only to alerts from any of the specified monitor services.
-For example, use "Platform" to have the rule apply only to metric alerts.
-* **Resource** - the rule will apply only to alerts from the specified Azure resource.
-For example, you can use this filter with "Does not equal" to exclude one or more resources when the rule's scope is a subscription.
-* **Resource group** - the rule will apply only to alerts from the specified resource groups.
-For example, you can use this filter with "Does not equal" to exclude one or more resource groups when the rule's scope is a subscription.
-* **Resource type** - the rule will apply only to alerts on resource from the specified resource types, such as virtual machines. You can use "Equals" to match one or more specific resources, or you can use contains to match a resource type and all its child resources.
-For example, use `resource type contains "MICROSOFT.SQL/SERVERS"` to match both SQL servers and all their child resources, like databases.
-* **Severity** - the rule will apply only to alerts with the selected severities.
-
-**FILTERS BEHAVIOR**
-* If you define multiple filters in a rule, all of them apply - there is a logical AND between all filters.
- For example, if you set both `resource type = "Virtual Machines"` and `severity = "Sev0"`, then the rule will apply only for Sev0 alerts on virtual machines in the scope.
-* Each filter may include up to five values, and there is a logical OR between the values.
- For example, if you set `description contains ["this", "that"]`, then the rule will apply only to alerts whose description contains either "this" or "that".
+An alert processing rule definition covers several aspects, as described here.
+
+### Which fired alerts are affected by this rule?
+
+This section describes the scope and filters for alert processing rules.
+
+Each alert processing rule has a scope. A scope is a list of one or more specific Azure resources, a specific resource group, or an entire subscription. *The alert processing rule applies to alerts that fired on resources within that scope*.
+
+You can also define filters to narrow down which specific subset of alerts are affected within the scope. The available filters are described in the following table.
+
+| Filter | Description |
+|:---|:---|
+| Alert context (payload) | The rule applies only to alerts that contain any of the filter's strings within the [alert context](./alerts-common-schema-definitions.md#alert-context) section of the alert. This section includes fields specific to each alert type. |
+| Alert rule ID | The rule applies only to alerts from a specific alert rule. The value should be the full resource ID, for example, `/subscriptions/SUB1/resourceGroups/RG1/providers/microsoft.insights/metricalerts/MY-API-LATENCY`. To locate the alert rule ID, open a specific alert rule in the portal, select **Properties**, and copy the **Resource ID** value. You can also locate it by listing your alert rules from PowerShell or the Azure CLI. |
+| Alert rule name | The rule applies only to alerts with this alert rule name. It can also be useful with a **Contains** operator. |
+| Description | The rule applies only to alerts that contain the specified string within the alert rule description field. |
+| Monitor condition | The rule applies only to alerts with the specified monitor condition, either **Fired** or **Resolved**. |
+| Monitor service | The rule applies only to alerts from any of the specified monitor services. For example, use **Platform** to have the rule apply only to metric alerts. |
+| Resource | The rule applies only to alerts from the specified Azure resource. For example, you can use this filter with **Does not equal** to exclude one or more resources when the rule's scope is a subscription. |
+| Resource group | The rule applies only to alerts from the specified resource groups. For example, you can use this filter with **Does not equal** to exclude one or more resource groups when the rule's scope is a subscription. |
+| Resource type | The rule applies only to alerts on resources from the specified resource types, such as virtual machines. You can use **Equals** to match one or more specific resources. You can also use **Contains** to match a resource type and all its child resources. For example, use `resource type contains "MICROSOFT.SQL/SERVERS"` to match both SQL servers and all their child resources, like databases. |
+| Severity | The rule applies only to alerts with the selected severities. |
+
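For the **Alert rule ID** filter, the following Azure CLI sketch shows one way to retrieve the full resource ID of an alert rule. It's a hedged example: the rule name and resource group are placeholders taken from the sample ID above, and it assumes the alert rule is a metric alert that `az monitor metrics alert` can manage.

```azurecli
# Retrieve the full resource ID of a metric alert rule so it can be used in an
# "Alert rule ID" filter. MY-API-LATENCY and RG1 are placeholder names.
az monitor metrics alert show \
    --name MY-API-LATENCY \
    --resource-group RG1 \
    --query id \
    --output tsv
```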
+#### Alert processing rule filters
+
+* If you define multiple filters in a rule, all the filters apply. There's a logical AND between all filters.
+ For example, if you set both `resource type = "Virtual Machines"` and `severity = "Sev0"`, then the rule applies only for `Sev0` alerts on virtual machines in the scope.
+* Each filter can include up to five values. There's a logical OR between the values.
+ For example, if you set `description contains ["this", "that"]`, then the rule applies only to alerts whose description contains either `this` or `that`.
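To make these semantics concrete, here's a minimal Azure CLI sketch of a rule that combines a severity filter with a resource type filter, so only `Sev0` alerts on virtual machines in the scope are affected. Treat it as a sketch under assumptions: the `--rule-type` value and the `--filter-*` parameter format (an operator followed by space-separated values) aren't confirmed by this article, so verify them with `az monitor alert-processing-rule create --help` or the CLI reference linked later.

```azurecli
# Hedged sketch: both filters must match (logical AND). Listing several values
# for a single filter would be treated as a logical OR between those values.
# SUB1, RG1, and the rule name are placeholders.
az monitor alert-processing-rule create \
    --name SuppressSev0VmAlerts \
    --resource-group RG1 \
    --scopes "/subscriptions/SUB1" \
    --rule-type RemoveAllActionGroups \
    --filter-severity Equals Sev0 \
    --filter-resource-type Equals "MICROSOFT.COMPUTE/VIRTUALMACHINES"
```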
### What should this rule do? Choose one of the following actions:
-* **Suppression**
-This action removes all the action groups from the affected fired alerts. So, the fired alerts will not invoke any of their action groups (not even at the end of the maintenance window). Those fired alerts will still be visible when you list your alerts in the portal, Azure Resource Graph, API, PowerShell etc.
-The suppression action has a higher priority over the "apply action groups" action - if a single fired alert is affected by different alert processing rules of both types, the action groups of that alert will be suppressed.
-
-* **Apply action groups**
-This action adds one or more action groups to the affected fired alerts.
+* **Suppression**: This action removes all the action groups from the affected fired alerts. So, the fired alerts won't invoke any of their action groups, not even at the end of the maintenance window. Those fired alerts will still be visible when you list your alerts in the portal, Azure Resource Graph, API, or PowerShell. The suppression action has a higher priority than the **Apply action groups** action. If a single fired alert is affected by different alert processing rules of both types, the action groups of that alert will be suppressed.
+* **Apply action groups**: This action adds one or more action groups to the affected fired alerts.
### When should this rule apply?
-You may optionally control when will the rule apply. By default, the rule is always active. However, you can select a one-off window for this rule to apply, or have a recurring window such as a weekly recurrence.
+You can control when the rule will apply. By default, the rule is always active. You can select a one-time window for this rule to apply, or you can have a recurring window, such as a weekly recurrence.
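As a rough illustration of a one-time window, the same create command accepts schedule parameters. This is a sketch only: the `--schedule-*` parameter names, the date/time format, and the time zone string are assumptions rather than values confirmed by this article, so check them against the CLI reference before relying on them.

```azurecli
# Hedged sketch: suppress notifications during a one-time overnight maintenance
# window in a specific time zone. Names, dates, and the scope are placeholders.
az monitor alert-processing-rule create \
    --name PlannedMaintenanceSuppression \
    --resource-group RG1 \
    --scopes "/subscriptions/SUB1/resourceGroups/RG1" \
    --rule-type RemoveAllActionGroups \
    --schedule-start-datetime "2022-09-01 22:00:00" \
    --schedule-end-datetime "2022-09-02 07:00:00" \
    --schedule-time-zone "Pacific Standard Time" \
    --description "Suppress notifications during planned overnight maintenance"
```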
-## Configuring an alert processing rule
+## Configure an alert processing rule
### [Portal](#tab/portal)
-You can access alert processing rules by navigating to the **Alerts** home page in Azure Monitor.
-Once there, you can click **Alert processing rules** to see and manage your existing rules, or click **Create** --> **Alert processing rules** to open the new alert processing rule wizard.
+You can access alert processing rules by going to the **Alerts** home page in Azure Monitor. Then you can select **Alert processing rules** to see and manage your existing rules. You can also select **Create** > **Alert processing rules** to open the new alert processing rule wizard.
++
+Let's review the new alert processing rule wizard.
+1. On the **Scope** tab, you select which fired alerts are covered by this rule. Pick the **scope** of resources whose alerts will be covered. You can choose multiple resources and resource groups, or an entire subscription. You can also optionally add filters, as previously described.
-Lets review the new alert processing rule wizard.
-In the first tab (**Scope**), you select which fired alerts are covered by this rule. Pick the **scope** of resources whose alerts will be covered - you may choose multiple resources and resource groups, or an entire subscription. You may also optionally add **filters**, as documented above.
+ :::image type="content" source="media/alerts-processing-rules/alert-processing-rule-scope.png" alt-text="Screenshot that shows the Scope tab of the alert processing rules wizard.":::
+1. On the **Rule settings** tab, you select which action to apply to the affected alerts. Choose either **Suppress notifications** or **Apply action group**. If you choose **Apply action group**, you can select existing action groups by selecting **Add action groups**. You can also create a new action group.
-In the second tab (**Rule settings**), you select which action to apply on the affected alerts. Choose between **Suppression** or **Apply action group**. If you choose the apply action group, you can either select existing action groups by clicking **Add action groups**, or create a new action group.
+ :::image type="content" source="media/alerts-processing-rules/alert-processing-rule-settings.png" alt-text="Screenshot that shows the Rule settings tab of the alert processing rules wizard.":::
+1. On the **Scheduling** tab, you select an optional schedule for the rule. By default, the rule works all the time, unless you disable it. You can set it to work **On a specific time**, or you can set up a **Recurring** schedule.
+
+ Let's see an example of a schedule for a one-time, overnight, planned maintenance. It starts in the evening and continues until the next morning, in a specific time zone.
-In the third tab (**Scheduling**), you select an optional schedule for the rule. By default the rule works all the time, unless you disable it. However, you can set it to work **on a specific time**, or **set up a recurring schedule**.
-Let's see an example of a schedule for a one-off, overnight, planned maintenance. It starts in the evening until the next morning, in a specific timezone:
+ :::image type="content" source="media/alerts-processing-rules/alert-processing-rule-scheduling-one-time.png" alt-text="Screenshot that shows the Scheduling tab of the alert processing rules wizard with a one-time rule.":::
+ An example of a more complex schedule covers an "outside of business hours" case. It has a recurring schedule with two recurrences. One recurrence is daily from the afternoon until the morning. The other recurrence is weekly and covers full days for Saturday and Sunday.
-Let's see an example of a more complex schedule, covering an "outside of business hours" case. It has a recurring schedule with two recurrences - a daily one from the afternoon until the morning, and a weekly one covering Saturday and Sunday (full days).
+ :::image type="content" source="media/alerts-processing-rules/alert-processing-rule-scheduling-recurring.png" alt-text="Screenshot that shows the Scheduling tab of the alert processing rules wizard with a recurring rule.":::
+1. On the **Details** tab, you give this rule a name, pick where it will be stored, and optionally add a description for your reference.
-In the fourth tab (**Details**), you give this rule a name, pick where it will be stored, and optionally add a description for your reference. In the fifth tab (**Tags**), you optionally add tags to the rule, and finally in the last tab you can review and create the alert processing rule.
+1. On the **Tags** tab, you can optionally add tags to the rule.
+
+1. On the **Review + create** tab, you can review and create the alert processing rule.
### [Azure CLI](#tab/azure-cli)
-You can use the Azure CLI to work with alert processing rules. See the `az monitor alert-processing-rules` [page in the Azure CLI docs](/cli/azure/monitor/alert-processing-rule) for detailed documentation and examples.
+You can use the Azure CLI to work with alert processing rules. For detailed documentation and examples, see the `az monitor alert-processing-rules` [page in the Azure CLI docs](/cli/azure/monitor/alert-processing-rule).
### Prepare your environment
-1. **Install the Auzre CLI**
-
- Follow the [Installation instructions for the Azure CLI](/cli/azure/install-azure-cli).
+1. Install the Azure CLI.
- Alternatively, you can use Azure Cloud Shell, which is an interactive shell environment that you use through your browser. To start a Cloud Shell:
+ Follow the [installation instructions for the Azure CLI](/cli/azure/install-azure-cli).
- - Open Cloud Shell by going to [https://shell.azure.com](https://shell.azure.com)
+ Alternatively, you can use Azure Cloud Shell, which is an interactive shell environment that you use through your browser. To start:
- - Select the **Cloud Shell** button on the menu bar at the upper right corner in the [Azure portal](https://portal.azure.com)
+ - Open [Azure Cloud Shell](https://shell.azure.com).
+ - Select the **Cloud Shell** button on the menu bar in the upper-right corner in the [Azure portal](https://portal.azure.com).
-1. **Sign in**
+1. Sign in.
- If you're using a local installation of the CLI, sign in using the `az login` [command](/cli/azure/reference-index#az-login). Follow the steps displayed in your terminal to complete the authentication process.
+ If you're using a local installation of the CLI, sign in by using the `az login` [command](/cli/azure/reference-index#az-login). Follow the steps displayed in your terminal to complete the authentication process.
```azurecli az login ```
-1. **Install the `alertsmanagement` extension**
+1. Install the `alertsmanagement` extension.
- In order to use the `az monitor alert-processing-rule` commands, install the `alertsmanagement` preview extension.
+ To use the `az monitor alert-processing-rule` commands, install the `alertsmanagement` preview extension.
```azurecli az extension add --name alertsmanagement
You can use the Azure CLI to work with alert processing rules. See the `az monit
The installed extension 'alertsmanagement' is in preview. ```
- To learn more about Azure CLI extensions, check [Use extension with Azure CLI](/cli/azure/azure-cli-extensions-overview?).
+ To learn more about Azure CLI extensions, see [Use extensions with the Azure CLI](/cli/azure/azure-cli-extensions-overview?).
### Create an alert processing rule with the Azure CLI
az monitor alert-processing-rule create \
--description "Add action group AG1 to all alerts in the subscription" ```
-The [CLI documentation](/cli/azure/monitor/alert-processing-rule#az-monitor-alert-processing-rule-create) include more examples and an explanation of each parameter.
+The [CLI documentation](/cli/azure/monitor/alert-processing-rule#az-monitor-alert-processing-rule-create) includes more examples and an explanation of each parameter.
### [PowerShell](#tab/powershell)
-You can use PowerShell to work with alert processing rules. See the `*-AzAlertProcessingRule` commands [in the PowerShell docs](/powershell/module/az.alertsmanagement) for detailed documentation and examples.
-
+You can use PowerShell to work with alert processing rules. For detailed documentation and examples, see the `*-AzAlertProcessingRule` commands [in the PowerShell docs](/powershell/module/az.alertsmanagement).
### Create an alert processing rule using PowerShell
-Use the `Set-AzAlertProcessingRule` command to create alert processing rules.
-For example, to create a rule that adds an action group to all alerts in a subscription, run:
+Use the `Set-AzAlertProcessingRule` command to create alert processing rules. For example, to create a rule that adds an action group to all alerts in a subscription, run:
```powershell Set-AzAlertProcessingRule `
Set-AzAlertProcessingRule `
-Description "Add action group AG1 to all alerts in the subscription" ```
-The [PowerShell documentation](/cli/azure/monitor/alert-processing-rule#az-monitor-alert-processing-rule-create) include more examples and an explanation of each parameter.
+The [PowerShell documentation](/powershell/module/az.alertsmanagement) includes more examples and an explanation of each parameter.
* * *
-## Managing alert processing rules
+## Manage alert processing rules
### [Portal](#tab/portal) You can view and manage your alert processing rules from the list view:
-From here, you can enable, disable, or delete alert processing rules at scale by selecting the check box next to them. Clicking on an alert processing rule will open it for editing - you can enable or disable the rule in the fourth tab (**Details**).
+From here, you can enable, disable, or delete alert processing rules at scale by selecting the checkboxes next to them. Selecting an alert processing rule opens it for editing. You can enable or disable the rule on the **Details** tab.
### [Azure CLI](#tab/azure-cli)
-You can view and manage your alert processing rules using the [az monitor alert-processing-rules](/cli/azure/monitor/alert-processing-rule) commands from Azure CLI.
+You can view and manage your alert processing rules by using the [az monitor alert-processing-rules](/cli/azure/monitor/alert-processing-rule) commands from Azure CLI.
-Before you manage alert processing rules with the Azure CLI, prepare your environment using the instructions provided in [Configuring an alert processing rule](#configuring-an-alert-processing-rule).
+Before you manage alert processing rules with the Azure CLI, prepare your environment by using the instructions provided in [Configure an alert processing rule](#configure-an-alert-processing-rule).
```azurecli # List all alert processing rules for a subscription
az monitor alert-processing-rules delete --resource-group RG1 --name MyRule
### [PowerShell](#tab/powershell)
-You can view and manage your alert processing rules using the [\*-AzAlertProcessingRule](/powershell/module/az.alertsmanagement) commands from Azure CLI.
+You can view and manage your alert processing rules by using the [\*-AzAlertProcessingRule](/powershell/module/az.alertsmanagement) commands from Azure PowerShell.
-Before you manage alert processing rules with the Azure CLI, prepare your environment using the instructions provided in [Configuring an alert processing rule](#configuring-an-alert-processing-rule).
+Before you manage alert processing rules with PowerShell, prepare your environment by following the instructions in [Configure an alert processing rule](#configure-an-alert-processing-rule).
```powershell # List all alert processing rules for a subscription
Remove-AzAlertProcessingRule -ResourceGroupName RG1 -Name MyRule
## Next steps -- [Learn more about alerts in Azure](./alerts-overview.md)
+[Learn more about alerts in Azure](./alerts-overview.md)
azure-monitor Asp Net Core https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/asp-net-core.md
The preceding steps are enough to help you start collecting server-side telemetr
1. In `_ViewImports.cshtml`, add injection:
-```cshtml
- @inject Microsoft.ApplicationInsights.AspNetCore.JavaScriptSnippet JavaScriptSnippet
-```
+ ```cshtml
+ @inject Microsoft.ApplicationInsights.AspNetCore.JavaScriptSnippet JavaScriptSnippet
+ ```
2. In `_Layout.cshtml`, insert `HtmlHelper` at the end of the `<head>` section but before any other script. If you want to report any custom JavaScript telemetry from the page, inject it after this snippet:
-```cshtml
- @Html.Raw(JavaScriptSnippet.FullScript)
- </head>
-```
+ ```cshtml
+ @Html.Raw(JavaScriptSnippet.FullScript)
+ </head>
+ ```
As an alternative to using the `FullScript`, the `ScriptBody` is available starting in Application Insights SDK for ASP.NET Core version 2.14. Use `ScriptBody` if you need to control the `<script>` tag to set a Content Security Policy:
The `.cshtml` file names referenced earlier are from a default MVC application t
If your project doesn't include `_Layout.cshtml`, you can still add [client-side monitoring](./website-monitoring.md) by adding the JavaScript snippet to an equivalent file that controls the `<head>` of all pages within your app. Alternatively, you can add the snippet to multiple pages, but we don't recommend it. > [!NOTE]
-> JavaScript injection provides a default configuration experience. If you require [configuration](./javascript.md#configuration) beyond setting the connection string, you are required to remove auto-injection as described above and manually add the [JavaScript SDK](./javascript.md#adding-the-javascript-sdk).
+> JavaScript injection provides a default configuration experience. If you require [configuration](./javascript.md#configuration) beyond setting the connection string, you are required to remove auto-injection as described above and manually add the [JavaScript SDK](./javascript.md#add-the-javascript-sdk).
## Configure the Application Insights SDK
Application Insights automatically collects telemetry about specific workloads w
By default, the following automatic-collection modules are enabled. These modules are responsible for automatically collecting telemetry. You can disable or configure them to alter their default behavior.
-* `RequestTrackingTelemetryModule` - Collects RequestTelemetry from incoming web requests
-* `DependencyTrackingTelemetryModule` - Collects [DependencyTelemetry](./asp-net-dependencies.md) from outgoing http calls and sql calls
-* `PerformanceCollectorModule` - Collects Windows PerformanceCounters
-* `QuickPulseTelemetryModule` - Collects telemetry for showing in Live Metrics portal
-* `AppServicesHeartbeatTelemetryModule` - Collects heart beats (which are sent as custom metrics), about Azure App Service environment where application is hosted
-* `AzureInstanceMetadataTelemetryModule` - Collects heart beats (which are sent as custom metrics), about Azure VM environment where application is hosted
-* `EventCounterCollectionModule` - Collects [EventCounters](eventcounters.md); this module is a new feature and is available in SDK version 2.8.0 and later
+* `RequestTrackingTelemetryModule`: Collects RequestTelemetry from incoming web requests
+* `DependencyTrackingTelemetryModule`: Collects [DependencyTelemetry](./asp-net-dependencies.md) from outgoing HTTP calls and SQL calls
+* `PerformanceCollectorModule`: Collects Windows PerformanceCounters
+* `QuickPulseTelemetryModule`: Collects telemetry for showing in Live Metrics portal
+* `AppServicesHeartbeatTelemetryModule`: Collects heartbeats (sent as custom metrics) about the Azure App Service environment where the application is hosted
+* `AzureInstanceMetadataTelemetryModule`: Collects heartbeats (sent as custom metrics) about the Azure VM environment where the application is hosted
+* `EventCounterCollectionModule`: Collects [EventCounters](eventcounters.md); this module is a new feature and is available in SDK version 2.8.0 and later
To configure any default `TelemetryModule`, use the extension method `ConfigureTelemetryModule<T>` on `IServiceCollection`, as shown in the following example.
If you want to disable telemetry conditionally and dynamically, you can resolve
} ```
-The preceding code sample prevents the sending of telemetry to Application Insights. It doesn't prevent any automatic collection modules from collecting telemetry. If you want to remove a particular auto collection module, see [remove the telemetry module](#configuring-or-removing-default-telemetrymodules).
+The preceding code sample prevents the sending of telemetry to Application Insights. It doesn't prevent any automatic collection modules from collecting telemetry. If you want to remove a particular auto collection module, see [Remove the telemetry module](#configuring-or-removing-default-telemetrymodules).
## Frequently asked questions
For more information about custom data reporting in Application Insights, see [A
### How do I customize ILogger logs collection?
-By default, only `Warning` logs and more severe logs are automatically captured. To change this behavior, explicitly override the logging configuration for the provider `ApplicationInsights` as shown below.
-The following configuration allows ApplicationInsights to capture all `Information` logs and more severe logs.
+By default, only `Warning` logs and more severe logs are automatically captured. To change this behavior, explicitly override the logging configuration for the provider `ApplicationInsights` as shown in the following code.
+The following configuration allows Application Insights to capture all `Information` logs and more severe logs.
```json {
The following configuration allows ApplicationInsights to capture all `Informati
} ```
-It's important to note that the following example doesn't cause the ApplicationInsights provider to capture `Information` logs. It doesn't capture it because the SDK adds a default logging filter that instructs `ApplicationInsights` to capture only `Warning` logs and more severe logs. ApplicationInsights requires an explicit override.
+It's important to note that the following example doesn't cause the Application Insights provider to capture `Information` logs. It doesn't capture it because the SDK adds a default logging filter that instructs `ApplicationInsights` to capture only `Warning` logs and more severe logs. Application Insights requires an explicit override.
```json {
If the SDK is installed at build time as shown in this article, you don't need t
Yes. Feature support for the SDK is the same in all platforms, with the following exceptions:
-* The SDK collects [Event Counters](./eventcounters.md) on Linux because [Performance Counters](./performance-counters.md) are only supported in Windows. Most metrics are the same.
+* The SDK collects [event counters](./eventcounters.md) on Linux because [performance counters](./performance-counters.md) are only supported in Windows. Most metrics are the same.
* Although `ServerTelemetryChannel` is enabled by default, if the application is running in Linux or macOS, the channel doesn't automatically create a local storage folder to keep telemetry temporarily if there are network issues. Because of this limitation, telemetry is lost when there are temporary network or server issues. To work around this issue, configure a local folder for the channel:
-```csharp
-using Microsoft.ApplicationInsights.Channel;
-using Microsoft.ApplicationInsights.WindowsServer.TelemetryChannel;
-
- public void ConfigureServices(IServiceCollection services)
- {
- // The following will configure the channel to use the given folder to temporarily
- // store telemetry items during network or Application Insights server issues.
- // User should ensure that the given folder already exists
- // and that the application has read/write permissions.
- services.AddSingleton(typeof(ITelemetryChannel),
- new ServerTelemetryChannel () {StorageFolder = "/tmp/myfolder"});
- services.AddApplicationInsightsTelemetry();
- }
-```
+ ```csharp
+ using Microsoft.ApplicationInsights.Channel;
+ using Microsoft.ApplicationInsights.WindowsServer.TelemetryChannel;
+
+ public void ConfigureServices(IServiceCollection services)
+ {
+ // The following will configure the channel to use the given folder to temporarily
+ // store telemetry items during network or Application Insights server issues.
+ // User should ensure that the given folder already exists
+ // and that the application has read/write permissions.
+ services.AddSingleton(typeof(ITelemetryChannel),
+ new ServerTelemetryChannel () {StorageFolder = "/tmp/myfolder"});
+ services.AddApplicationInsightsTelemetry();
+ }
+ ```
This limitation isn't applicable from version [2.15.0](https://www.nuget.org/packages/Microsoft.ApplicationInsights.AspNetCore/2.15.0) and later. ### Is this SDK supported for the new .NET Core 3.X Worker Service template applications?
-This SDK requires `HttpContext`; therefore, it doesn't work in any non-HTTP applications, including the .NET Core 3.X Worker Service applications. To enable Application Insights in such applications using the newly released Microsoft.ApplicationInsights.WorkerService SDK, see [Application Insights for Worker Service applications (non-HTTP applications)](worker-service.md).
+This SDK requires `HttpContext`. Therefore, it doesn't work in any non-HTTP applications, including the .NET Core 3.X Worker Service applications. To enable Application Insights in such applications using the newly released Microsoft.ApplicationInsights.WorkerService SDK, see [Application Insights for Worker Service applications (non-HTTP applications)](worker-service.md).
## Open-source SDK
For the latest updates and bug fixes, see the [release notes](./release-notes.md
* [Configure a snapshot collection](./snapshot-debugger.md) to see the state of source code and variables at the moment an exception is thrown. * [Use the API](./api-custom-events-metrics.md) to send your own events and metrics for a detailed view of your app's performance and usage. * Use [availability tests](./monitor-web-app-availability.md) to check your app constantly from around the world.
-* [Dependency Injection in ASP.NET Core](/aspnet/core/fundamentals/dependency-injection)
+* [Dependency Injection in ASP.NET Core](/aspnet/core/fundamentals/dependency-injection)
azure-monitor Create Workspace Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/create-workspace-resource.md
az monitor app-insights component create --app demoApp --location eastus --kind
For the full Azure CLI documentation for this command, consult the [Azure CLI documentation](/cli/azure/monitor/app-insights/component#az-monitor-app-insights-component-create).
-### Azure PowerShell
+### Azure PowerShell
+Create a new workspace-based Application Insights resource:
+
+```powershell
+New-AzApplicationInsights -Name <String> -ResourceGroupName <String> -Location <String> -WorkspaceResourceId <String>
+ [-SubscriptionId <String>]
+ [-ApplicationType <ApplicationType>]
+ [-DisableIPMasking]
+ [-DisableLocalAuth]
+ [-Etag <String>]
+ [-FlowType <FlowType>]
+ [-ForceCustomerStorageForProfiler]
+ [-HockeyAppId <String>]
+ [-ImmediatePurgeDataOn30Day]
+ [-IngestionMode <IngestionMode>]
+ [-Kind <String>]
+ [-PublicNetworkAccessForIngestion <PublicNetworkAccessType>]
+ [-PublicNetworkAccessForQuery <PublicNetworkAccessType>]
+ [-RequestSource <RequestSource>]
+ [-RetentionInDays <Int32>]
+ [-SamplingPercentage <Double>]
+ [-Tag <Hashtable>]
+ [-DefaultProfile <PSObject>]
+ [-Confirm]
+ [-WhatIf]
+ [<CommonParameters>]
+```
+
+#### Example
+
+```powershell
+New-AzApplicationInsights -Kind java -ResourceGroupName testgroup -Name test1027 -location eastus -WorkspaceResourceId "/subscriptions/00000000-0000-0000-0000-000000000000/resourcegroups/test1234/providers/microsoft.operationalinsights/workspaces/test1234555"
+```
+
+For the full PowerShell documentation for this cmdlet, and to learn how to retrieve the instrumentation key, consult the [Azure PowerShell documentation](/powershell/module/az.applicationinsights/new-azapplicationinsights).
-The `New-AzApplicationInsights` PowerShell command does not currently support creating a workspace-based Application Insights resource. To create a workspace-based resource with PowerShell, you can use the Azure Resource Manager templates below and deploy with PowerShell.
### Azure Resource Manager templates
+ To create a workspace-based resource, you can use the following Azure Resource Manager templates and deploy them with PowerShell.
+ #### Template file ```json
azure-monitor Distributed Tracing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/distributed-tracing.md
Title: Distributed Tracing in Azure Application Insights | Microsoft Docs
-description: Provides information about Microsoft's support for distributed tracing through our partnership in the OpenCensus project
+ Title: Distributed tracing in Azure Application Insights | Microsoft Docs
+description: This article provides information about Microsoft's support for distributed tracing through our partnership in the OpenCensus project.
Last updated 09/17/2018
-# What is Distributed Tracing?
+# What is distributed tracing?
-The advent of modern cloud and [microservices](https://azure.com/microservices) architectures has given rise to simple, independently deployable services that can help reduce costs while increasing availability and throughput. But while these movements have made individual services easier to understand as a whole, they've made overall systems more difficult to reason about and debug.
+The advent of modern cloud and [microservices](https://azure.com/microservices) architectures has given rise to simple, independently deployable services that can help reduce costs while increasing availability and throughput. These movements have made individual services easier to understand. But they've also made overall systems more difficult to reason about and debug.
-In monolithic architectures, we've gotten used to debugging with call stacks. Call stacks are brilliant tools for showing the flow of execution (Method A called Method B, which called Method C), along with details and parameters about each of those calls. This is great for monoliths or services running on a single process, but how do we debug when the call is across a process boundary, not simply a reference on the local stack?
+In monolithic architectures, we've gotten used to debugging with call stacks. Call stacks are brilliant tools for showing the flow of execution (Method A called Method B, which called Method C), along with details and parameters about each of those calls. This technique is great for monoliths or services running on a single process. But how do we debug when the call is across a process boundary, not simply a reference on the local stack?
-That's where distributed tracing comes in.
+That's where distributed tracing comes in.
-Distributed tracing is the equivalent of call stacks for modern cloud and microservices architectures, with the addition of a simplistic performance profiler thrown in. In Azure Monitor, we provide two experiences for consuming distributed trace data. The first is our [transaction diagnostics](./transaction-diagnostics.md) view, which is like a call stack with a time dimension added in. The transaction diagnostics view provides visibility into one single transaction/request, and is helpful for finding the root cause of reliability issues and performance bottlenecks on a per request basis.
+Distributed tracing is the equivalent of call stacks for modern cloud and microservices architectures, with the addition of a simplistic performance profiler thrown in. In Azure Monitor, we provide two experiences for consuming distributed trace data. The first is our [transaction diagnostics](./transaction-diagnostics.md) view, which is like a call stack with a time dimension added in. The transaction diagnostics view provides visibility into one single transaction/request. It's helpful for finding the root cause of reliability issues and performance bottlenecks on a per-request basis.
-Azure Monitor also offers an [application map](./app-map.md) view which aggregates many transactions to show a topological view of how the systems interact, and what the average performance and error rates are.
+Azure Monitor also offers an [application map](./app-map.md) view, which aggregates many transactions to show a topological view of how the systems interact. The map view also shows what the average performance and error rates are.
-## How to Enable Distributed Tracing
+## Enable distributed tracing
Enabling distributed tracing across the services in an application is as simple as adding the proper agent, SDK, or library to each service, based on the language the service was implemented in.
-## Enabling via Application Insights through auto-instrumentation or SDKs
+## Enable via Application Insights through auto-instrumentation or SDKs
-The Application Insights agents and/or SDKs for .NET, .NET Core, Java, Node.js, and JavaScript all support distributed tracing natively. Instructions for installing and configuring each Application Insights SDK are available below:
+The Application Insights agents and SDKs for .NET, .NET Core, Java, Node.js, and JavaScript all support distributed tracing natively. Instructions for installing and configuring each Application Insights SDK are available for:
* [.NET](asp-net.md) * [.NET Core](asp-net-core.md)
The Application Insights agents and/or SDKs for .NET, .NET Core, Java, Node.js,
* [JavaScript](./javascript.md#enable-distributed-tracing) * [Python](opencensus-python.md)
-With the proper Application Insights SDK installed and configured, tracing information is automatically collected for popular frameworks, libraries, and technologies by SDK dependency auto-collectors. The full list of supported technologies is available in [the Dependency auto-collection documentation](./auto-collect-dependencies.md).
+With the proper Application Insights SDK installed and configured, tracing information is automatically collected for popular frameworks, libraries, and technologies by SDK dependency auto-collectors. The full list of supported technologies is available in the [Dependency auto-collection documentation](./auto-collect-dependencies.md).
- Additionally, any technology can be tracked manually with a call to [TrackDependency](./api-custom-events-metrics.md) on the [TelemetryClient](./api-custom-events-metrics.md).
+ Any technology also can be tracked manually with a call to [TrackDependency](./api-custom-events-metrics.md) on the [TelemetryClient](./api-custom-events-metrics.md).
## Enable via OpenTelemetry
-Application Insights now supports distributed tracing through [OpenTelemetry](https://opentelemetry.io/). OpenTelemetry provides a vendor-neutral instrumentation to send traces, metrics, and logs to Application Insights. Initially the OpenTelemetry community took on Distributed Tracing. Metrics and Logs are still in progress. A complete observability story includes all three pillars, but currently our [Azure Monitor OpenTelemetry-based exporter preview offerings for .NET, Python, and JavaScript](opentelemetry-enable.md) only include Distributed Tracing. However, our Java OpenTelemetry-based Azure Monitor offering is GA and fully supported.
+Application Insights now supports distributed tracing through [OpenTelemetry](https://opentelemetry.io/). OpenTelemetry provides a vendor-neutral instrumentation to send traces, metrics, and logs to Application Insights. Initially, the OpenTelemetry community took on distributed tracing. Metrics and logs are still in progress.
-The following pages consist of language-by-language guidance to enable and configure MicrosoftΓÇÖs OpenTelemetry-based offerings. Importantly, we share the available functionality and limitations of each offering so you can determine whether OpenTelemetry is right for your project.
+A complete observability story includes all three pillars, but currently our [Azure Monitor OpenTelemetry-based exporter preview offerings for .NET, Python, and JavaScript](opentelemetry-enable.md) only include distributed tracing. Our Java OpenTelemetry-based Azure Monitor offering is generally available and fully supported.
+
+The following pages consist of language-by-language guidance to enable and configure Microsoft's OpenTelemetry-based offerings. Importantly, we share the available functionality and limitations of each offering so you can determine whether OpenTelemetry is right for your project.
* [.NET](opentelemetry-enable.md?tabs=net) * [Java](java-in-process-agent.md)
The following pages consist of language-by-language guidance to enable and confi
## Enable via OpenCensus
-In addition to the Application Insights SDKs, Application Insights also supports distributed tracing through [OpenCensus](https://opencensus.io/). OpenCensus is an open source, vendor-agnostic, single distribution of libraries to provide metrics collection and distributed tracing for services. It also enables the open source community to enable distributed tracing with popular technologies like Redis, Memcached, or MongoDB. [Microsoft collaborates on OpenCensus with several other monitoring and cloud partners](https://open.microsoft.com/2018/06/13/microsoft-joins-the-opencensus-project/).
+In addition to the Application Insights SDKs, Application Insights also supports distributed tracing through [OpenCensus](https://opencensus.io/). OpenCensus is an open-source, vendor-agnostic, single distribution of libraries to provide metrics collection and distributed tracing for services. It also enables the open-source community to enable distributed tracing with popular technologies like Redis, Memcached, or MongoDB. [Microsoft collaborates on OpenCensus with several other monitoring and cloud partners](https://open.microsoft.com/2018/06/13/microsoft-joins-the-opencensus-project/).
-[Python](opencensus-python.md)
+For more information on OpenCensus for Python, see [Set up Azure Monitor for your Python application](opencensus-python.md).
-The OpenCensus website maintains API reference documentation for [Python](https://opencensus.io/api/python/trace/usage.html) and [Go](https://godoc.org/go.opencensus.io), as well as various different guides for using OpenCensus.
+The OpenCensus website maintains API reference documentation for [Python](https://opencensus.io/api/python/trace/usage.html) and [Go](https://godoc.org/go.opencensus.io), along with various guides for using OpenCensus.
## Next steps * [OpenCensus Python usage guide](https://opencensus.io/api/python/trace/usage.html) * [Application map](./app-map.md) * [End-to-end performance monitoring](../app/tutorial-performance.md)-
azure-monitor Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/javascript.md
Title: Azure Application Insights for JavaScript web apps
-description: Get page view and session counts, web client data, Single Page Applications (SPA), and track usage patterns. Detect exceptions and performance issues in JavaScript web pages.
+description: Get page view and session counts, web client data, and single-page applications and track usage patterns. Detect exceptions and performance issues in JavaScript webpages.
Last updated 08/06/2020 ms.devlang: javascript
-# Application Insights for web pages
+# Application Insights for webpages
> [!NOTE]
-> We continue to assess the viability of OpenTelemetry for browser scenarios. The Application Insights JavaScript SDK is recommended for the forseeable future, which is fully compatible with OpenTelemetry distributed tracing.
+> We continue to assess the viability of OpenTelemetry for browser scenarios. We recommend the Application Insights JavaScript SDK for the foreseeable future. It's fully compatible with OpenTelemetry distributed tracing.
-Find out about the performance and usage of your web page or app. If you add [Application Insights](app-insights-overview.md) to your page script, you get timings of page loads and AJAX calls, counts, and details of browser exceptions and AJAX failures, as well as users and session counts. All of this telemetry can be segmented by page, client OS and browser version, geo location, and other dimensions. You can set alerts on failure counts or slow page loading. And by inserting trace calls in your JavaScript code, you can track how the different features of your web page application are used.
+Find out about the performance and usage of your webpage or app. If you add [Application Insights](app-insights-overview.md) to your page script, you get timings of page loads and AJAX calls, counts, and details of browser exceptions and AJAX failures. You also get user and session counts. All this telemetry can be segmented by page, client OS and browser version, geo location, and other dimensions. You can set alerts on failure counts or slow page loading. By inserting trace calls in your JavaScript code, you can track how the different features of your webpage application are used.
-Application Insights can be used with any web pages - you just add a short piece of JavaScript, Node.js has a [standalone SDK](nodejs.md). If your web service is [Java](java-in-process-agent.md) or [ASP.NET](asp-net.md), you can use the server-side SDKs with the client-side JavaScript SDK to get an end-to-end understanding of your app's performance.
+Application Insights can be used with any webpages by adding a short piece of JavaScript. Node.js has a [standalone SDK](nodejs.md). If your web service is [Java](java-in-process-agent.md) or [ASP.NET](asp-net.md), you can use the server-side SDKs with the client-side JavaScript SDK to get an end-to-end understanding of your app's performance.
-## Adding the JavaScript SDK
+## Add the JavaScript SDK
-1. First you need an Application Insights resource. If you don't already have a resource and connection string, follow the [create a new resource instructions](create-new-resource.md).
-2. Copy the [connection string](#connection-string-setup) for the resource where you want your JavaScript telemetry to be sent (from step 1.) You'll add it to the `connectionString` setting of the Application Insights JavaScript SDK.
-3. Add the Application Insights JavaScript SDK to your web page or app via one of the following two options:
- * [npm Setup](#npm-based-setup)
- * [JavaScript Snippet](#snippet-based-setup)
+1. First you need an Application Insights resource. If you don't already have a resource and connection string, follow the instructions to [create a new resource](create-new-resource.md).
+1. Copy the [connection string](#connection-string-setup) for the resource where you want your JavaScript telemetry to be sent (from step 1). You'll add it to the `connectionString` setting of the Application Insights JavaScript SDK.
+1. Add the Application Insights JavaScript SDK to your webpage or app via one of the following two options:
+ * [Node Package Manager (npm) setup](#npm-based-setup)
+ * [JavaScript snippet](#snippet-based-setup)
> [!WARNING]
-> `@microsoft/applicationinsights-web-basic - AISKULight` does not support the use of connection strings.
+> `@microsoft/applicationinsights-web-basic - AISKULight` doesn't support the use of connection strings.
-> [!IMPORTANT]
-> Only use one method to add the JavaScript SDK to your application. If you use the NPM Setup, don't use the Snippet and vice versa.
+Only use one method to add the JavaScript SDK to your application. If you use the npm setup, don't use the snippet and vice versa.
> [!NOTE]
-> NPM Setup installs the JavaScript SDK as a dependency to your project, enabling IntelliSense, whereas the Snippet fetches the SDK at runtime. Both support the same features. However, developers who desire more custom events and configuration generally opt for NPM Setup whereas users looking for quick enablement of out-of-the-box web analytics opt for the Snippet.
+> The npm setup installs the JavaScript SDK as a dependency to your project and enables IntelliSense. The snippet fetches the SDK at runtime. Both support the same features. Developers who want more custom events and configuration generally opt for the npm setup. Users who are looking for quick enablement of out-of-the-box web analytics opt for the snippet.
-### npm based setup
+### npm-based setup
-Install via Node Package Manager (npm).
+Install via npm.
```sh npm i --save @microsoft/applicationinsights-web ``` > [!Note]
-> **Typings are included with this package**, so you do **not** need to install a separate typings package.
+> *Typings are included with this package*, so you do *not* need to install a separate typings package.
```js import { ApplicationInsights } from '@microsoft/applicationinsights-web'
appInsights.loadAppInsights();
appInsights.trackPageView(); // Manually call trackPageView to establish the current user/session/pageview ```
-### Snippet based setup
+### Snippet-based setup
-If your app doesn't use npm, you can directly instrument your webpages with Application Insights by pasting this snippet at the top of each your pages. Preferably, it should be the first script in your `<head>` section so that it can monitor any potential issues with all of your dependencies and optionally any JavaScript errors. If you're using Blazor Server App, add the snippet at the top of the file `_Host.cshtml` in the `<head>` section.
+If your app doesn't use npm, you can directly instrument your webpages with Application Insights by pasting this snippet at the top of each of your pages. Preferably, it should be the first script in your `<head>` section. That way it can monitor any potential issues with all your dependencies and optionally any JavaScript errors. If you're using a Blazor Server app, add the snippet at the top of the file `_Host.cshtml` in the `<head>` section.
-Starting from version 2.5.5, the page view event will include a new tag "ai.internal.snippet" that contains the identified snippet version. This feature assists with tracking which version of the snippet your application is using.
+Starting from version 2.5.5, the page view event will include the new tag "ai.internal.snippet" that contains the identified snippet version. This feature assists with tracking which version of the snippet your application is using.
-The current Snippet (listed below) is version "5", the version is encoded in the snippet as sv:"#" and the [current version is also available on GitHub](https://go.microsoft.com/fwlink/?linkid=2156318).
+The current snippet that follows is version "5." The version is encoded in the snippet as `sv:"#"`. The [current version is also available on GitHub](https://go.microsoft.com/fwlink/?linkid=2156318).
```html <script type="text/javascript">
cfg: { // Application Insights Configuration
``` > [!NOTE]
-> For readability and to reduce possible JavaScript errors, all of the possible configuration options are listed on a new line in snippet code above, if you don't want to change the value of a commented line it can be removed.
+> For readability and to reduce possible JavaScript errors, all the possible configuration options are listed on a new line in the preceding snippet code. If you don't want to change the value of a commented line, it can be removed.
+#### Report script load failures
-#### Reporting Script load failures
+This version of the snippet detects and reports failures when the SDK is loaded from the CDN as an exception to the Azure Monitor portal (under the failures &gt; exceptions &gt; browser). The exception provides visibility into failures of this type so that you're aware your application isn't reporting telemetry (or other exceptions) as expected. This signal is an important measurement in understanding that you've lost telemetry because the SDK didn't load or initialize, which can lead to:
-This version of the snippet detects and reports failures when loading the SDK from the CDN as an exception to the Azure Monitor portal (under the failures &gt; exceptions &gt; browser). The exception provides visibility into failures of this type so that you're aware your application isn't reporting telemetry (or other exceptions) as expected. This signal is an important measurement in understanding that you have lost telemetry because the SDK didn't load or initialize which can lead to:
-- Under-reporting of how users are using (or trying to use) your site;-- Missing telemetry on how your end users are using your site;-- Missing JavaScript errors that could potentially be blocking your end users from successfully using your site.
+- Underreporting of how users are using or trying to use your site.
+- Missing telemetry on how your users are using your site.
+- Missing JavaScript errors that could potentially be blocking your users from successfully using your site.
-For details on this exception see the [SDK load failure](javascript-sdk-load-failure.md) troubleshooting page.
+For information on this exception, see the [SDK load failure](javascript-sdk-load-failure.md) troubleshooting page.
-Reporting of this failure as an exception to the portal doesn't use the configuration option ```disableExceptionTracking``` from the application insights configuration and therefore if this failure occurs it will always be reported by the snippet, even when the window.onerror support is disabled.
+Reporting of this failure as an exception to the portal doesn't use the configuration option ```disableExceptionTracking``` from the Application Insights configuration. For this reason, if this failure occurs, it will always be reported by the snippet, even when `window.onerror` support is disabled.
-Reporting of SDK load failures isn't supported on Internet Explorer 8 or earlier. This behavior reduces the minified size of the snippet by assuming that most environments aren't exclusively IE 8 or less. If you have this requirement and you wish to receive these exceptions, you'll need to either include a fetch poly fill or create your own snippet version that uses ```XDomainRequest``` instead of ```XMLHttpRequest```, it's recommended that you use the [provided snippet source code](https://github.com/microsoft/ApplicationInsights-JS/blob/master/AISKU/snippet/snippet.js) as a starting point.
+Reporting of SDK load failures isn't supported on Internet Explorer 8 or earlier. This behavior reduces the minified size of the snippet by assuming that most environments aren't exclusively Internet Explorer 8 or less. If you have this requirement and you want to receive these exceptions, you'll need to either include a fetch polyfill or create your own snippet version that uses ```XDomainRequest``` instead of ```XMLHttpRequest```. Use the [provided snippet source code](https://github.com/microsoft/ApplicationInsights-JS/blob/master/AISKU/snippet/snippet.js) as a starting point.
> [!NOTE]
-> If you are using a previous version of the snippet, it is highly recommended that you update to the latest version so that you will receive these previously unreported issues.
+> If you're using a previous version of the snippet, update to the latest version so that you'll receive these previously unreported issues.
#### Snippet configuration options
-All configuration options have been moved towards the end of the script. This placement avoids accidentally introducing JavaScript errors that wouldn't just cause the SDK to fail to load, but also it would disable the reporting of the failure.
+All configuration options have been moved toward the end of the script. This placement helps avoid accidentally introducing JavaScript errors that would not only cause the SDK to fail to load but would also disable the reporting of the failure.
-Each configuration option is shown above on a new line, if you don't wish to override the default value of an item listed as [optional] you can remove that line to minimize the resulting size of your returned page.
+Each configuration option is shown above on a new line. If you don't want to override the default value of an item listed as [optional], you can remove that line to minimize the resulting size of your returned page.
+
+The available configuration options are listed in this table.
-The available configuration options are
-
| Name | Type | Description |
|------|------|-------------|
-| src | string **[required]** | The full URL for where to load the SDK from. This value is used for the "src" attribute of a dynamically added &lt;script /&gt; tag. You can use the public CDN location or your own privately hosted one.
-| name | string *[optional]* | The global name for the initialized SDK, defaults to `appInsights`. So ```window.appInsights``` will be a reference to the initialized instance. Note: if you provide a name value or a previous instance appears to be assigned (via the global name appInsightsSDK) then this name value will also be defined in the global namespace as ```window.appInsightsSDK=<name value>```. The SDK initialization code uses this reference to ensure it's initializing and updating the correct snippet skeleton and proxy methods.
-| ld | number in ms *[optional]* | Defines the load delay to wait before attempting to load the SDK. Default value is 0ms and any negative value will immediately add a script tag to the &lt;head&gt; region of the page, which will then block the page load event until to script is loaded (or fails).
-| useXhr | boolean *[optional]* | This setting is used only for reporting SDK load failures. Reporting will first attempt to use fetch() if available and then fallback to XHR, setting this value to true just bypasses the fetch check. Use of this value is only be required if your application is being used in an environment where fetch would fail to send the failure events.
-| crossOrigin | string *[optional]* | By including this setting, the script tag added to download the SDK will include the crossOrigin attribute with this string value. When not defined (the default) no crossOrigin attribute is added. Recommended values aren't defined (the default); ""; or "anonymous" (For all valid values see [HTML attribute: `crossorigin`](https://developer.mozilla.org/en-US/docs/Web/HTML/Attributes/crossorigin) documentation)
-| cfg | object **[required]** | The configuration passed to the Application Insights SDK during initialization.
+| src | string *[required]* | The full URL for where to load the SDK from. This value is used for the "src" attribute of a dynamically added &lt;script /&gt; tag. You can use the public CDN location or your own privately hosted one.
+| name | string *[optional]* | The global name for the initialized SDK, defaults to `appInsights`. So ```window.appInsights``` will be a reference to the initialized instance. If you provide a name value or a previous instance appears to be assigned (via the global name appInsightsSDK), this name value will also be defined in the global namespace as ```window.appInsightsSDK=<name value>```. The SDK initialization code uses this reference to ensure it's initializing and updating the correct snippet skeleton and proxy methods.
+| ld | number in ms *[optional]* | Defines the load delay to wait before attempting to load the SDK. Default value is 0ms. Any negative value will immediately add a script tag to the &lt;head&gt; region of the page. The page load event is then blocked until the script is loaded or fails.
+| useXhr | boolean *[optional]* | This setting is used only for reporting SDK load failures. Reporting will first attempt to use fetch() if available and then fall back to XHR. Setting this value to true just bypasses the fetch check. Use of this value is only required if your application is being used in an environment where fetch would fail to send the failure events.
+| crossOrigin | string *[optional]* | By including this setting, the script tag added to download the SDK will include the crossOrigin attribute with this string value. When not defined (the default), no crossOrigin attribute is added. Recommended values are not defined (the default), "", or "anonymous". For all valid values, see the [HTML attribute: `crossorigin`](https://developer.mozilla.org/en-US/docs/Web/HTML/Attributes/crossorigin) documentation.
+| cfg | object *[required]* | The configuration passed to the Application Insights SDK during initialization.
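
To make these loader options concrete, here's a sketch of the configuration object that the snippet accepts. The CDN URL matches the public location the snippet references; the connection string and the other values are placeholder assumptions, and any option left at its default can simply be omitted.

```js
// Sketch of the object handed to the snippet loader (not the SDK itself).
// In the real snippet, this object is the final argument of the loader function.
const snippetConfig = {
  src: "https://js.monitor.azure.com/scripts/b/ai.2.min.js", // public CDN location of the SDK
  name: "appInsights",      // global name, so window.appInsights references the instance
  ld: 0,                    // load delay in ms; a negative value loads the script synchronously in <head>
  useXhr: false,            // only used when reporting SDK load failures
  crossOrigin: "anonymous", // adds crossorigin="anonymous" to the injected <script> tag
  cfg: {                    // configuration handed to the Application Insights SDK at initialization
    connectionString: "YOUR_CONNECTION_STRING" // placeholder
  }
};
```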
-### Connection String Setup
+### Connection string setup
[!INCLUDE [azure-monitor-log-analytics-rebrand](../../../includes/azure-monitor-instrumentation-key-deprecation.md)]
appInsights.loadAppInsights();
appInsights.trackPageView();
```
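
For reference, a complete npm-based initialization with a connection string might look like the following sketch; `YOUR_CONNECTION_STRING` is a placeholder for the value copied from your Application Insights resource.

```js
import { ApplicationInsights } from '@microsoft/applicationinsights-web';

const appInsights = new ApplicationInsights({
  config: {
    connectionString: 'YOUR_CONNECTION_STRING' // placeholder: copy the value from your resource
  }
});

appInsights.loadAppInsights();
appInsights.trackPageView(); // send an initial page view
```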
-### Sending telemetry to the Azure portal
+### Send telemetry to the Azure portal
-By default, the Application Insights JavaScript SDK auto-collects many telemetry items that are helpful in determining the health of your application and the underlying user experience.
+By default, the Application Insights JavaScript SDK autocollects many telemetry items that are helpful in determining the health of your application and the underlying user experience.
This telemetry includes:

-- **Uncaught exceptions** in your app, including information on
- - Stack trace
- - Exception details and message accompanying the error
- - Line & column number of error
- - URL where error was raised
-- **Network Dependency Requests** made by your app **XHR** and **Fetch** (fetch collection is disabled by default) requests, include information on
- - Url of dependency source
- - Command & Method used to request the dependency
- - Duration of the request
- - Result code and success status of the request
- - ID (if any) of user making the request
- - Correlation context (if any) where request is made
-- **User information** (for example, Location, network, IP)-- **Device information** (for example, Browser, OS, version, language, model)
+- **Uncaught exceptions** in your app, including information on the:
+ - Stack trace.
+ - Exception details and message accompanying the error.
+ - Line and column number of the error.
+ - URL where the error was raised.
+- **Network dependency requests** made by your app via **XHR** and **Fetch** (fetch collection is disabled by default), including information on the:
+ - URL of dependency source.
+ - Command and method used to request the dependency.
+ - Duration of the request.
+ - Result code and success status of the request.
+ - ID (if any) of the user making the request.
+ - Correlation context (if any) where the request is made.
+- **User information** (for example, location, network, IP)
+- **Device information** (for example, browser, OS, version, language, model)
- **Session information**

### Telemetry initializers
-Telemetry initializers are used to modify the contents of collected telemetry before being sent from the user's browser. They can also be used to prevent certain telemetry from being sent, by returning `false`. Multiple telemetry initializers can be added to your Application Insights instance, and they're executed in order of adding them.
-The input argument to `addTelemetryInitializer` is a callback that takes a [`ITelemetryItem`](https://github.com/microsoft/ApplicationInsights-JS/blob/master/API-reference.md#addTelemetryInitializer) as an argument and returns a `boolean` or `void`. If returning `false`, the telemetry item isn't sent, else it proceeds to the next telemetry initializer, if any, or is sent to the telemetry collection endpoint.
+Telemetry initializers are used to modify the contents of collected telemetry before being sent from the user's browser. They can also be used to prevent certain telemetry from being sent by returning `false`. Multiple telemetry initializers can be added to your Application Insights instance. They're executed in the order in which they're added.
+
+The input argument to `addTelemetryInitializer` is a callback that takes a [`ITelemetryItem`](https://github.com/microsoft/ApplicationInsights-JS/blob/master/API-reference.md#addTelemetryInitializer) as an argument and returns `boolean` or `void`. If `false` is returned, the telemetry item isn't sent. Otherwise, it proceeds to the next telemetry initializer, if any, or is sent to the telemetry collection endpoint.
An example of using telemetry initializers:

```ts
var telemetryInitializer = (envelope) => {
  envelope.data.someField = 'This item passed through my telemetry initializer';
appInsights.trackTrace({message: 'this message will not be sent'}); // Not sent
```

## Configuration
-Most configuration fields are named such that they can be defaulted to false. All fields are optional except for `connectionString`.
+
+Most configuration fields are named so that they can default to false. All fields are optional except for `connectionString`.
| Name | Description | Default |
|------|-------------|---------|
-| connectionString | **Required**<br>Connection string that you obtained from the Azure portal. | string<br/>null |
-| accountId | An optional account ID, if your app groups users into accounts. No spaces, commas, semicolons, equals, or vertical bars | string<br/>null |
+| connectionString | *Required*<br>Connection string that you obtained from the Azure portal. | string<br/>null |
+| accountId | An optional account ID if your app groups users into accounts. No spaces, commas, semicolons, equal signs, or vertical bars. | string<br/>null |
| sessionRenewalMs | A session is logged if the user is inactive for this amount of time in milliseconds. | numeric<br/>1800000<br/>(30 mins) |
| sessionExpirationMs | A session is logged if it has continued for this amount of time in milliseconds. | numeric<br/>86400000<br/>(24 hours) |
-| maxBatchSizeInBytes | Max size of telemetry batch. If a batch exceeds this limit, it's immediately sent and a new batch is started | numeric<br/>10000 |
-| maxBatchInterval | How long to batch telemetry for before sending (milliseconds) | numeric<br/>15000 |
+| maxBatchSizeInBytes | Maximum size of telemetry batch. If a batch exceeds this limit, it's immediately sent and a new batch is started. | numeric<br/>10000 |
+| maxBatchInterval | How long to batch telemetry before sending (milliseconds). | numeric<br/>15000 |
| disable&#8203;ExceptionTracking | If true, exceptions aren't autocollected. | boolean<br/> false |
| disableTelemetry | If true, telemetry isn't collected or sent. | boolean<br/>false |
-| enableDebug | If true, **internal** debugging data is thrown as an exception **instead** of being logged, regardless of SDK logging settings. Default is false. <br>***Note:*** Enabling this setting will result in dropped telemetry whenever an internal error occurs. This setting can be useful for quickly identifying issues with your configuration or usage of the SDK. If you don't want to lose telemetry while debugging, consider using `loggingLevelConsole` or `loggingLevelTelemetry` instead of `enableDebug`. | boolean<br/>false |
-| loggingLevelConsole | Logs **internal** Application Insights errors to console. <br>0: off, <br>1: Critical errors only, <br>2: Everything (errors & warnings) | numeric<br/> 0 |
-| loggingLevelTelemetry | Sends **internal** Application Insights errors as telemetry. <br>0: off, <br>1: Critical errors only, <br>2: Everything (errors & warnings) | numeric<br/> 1 |
-| diagnosticLogInterval | (internal) Polling interval (in ms) for internal logging queue | numeric<br/> 10000 |
-| samplingPercentage | Percentage of events that will be sent. Default is 100, meaning all events are sent. Set this option if you wish to preserve your data cap for large-scale applications. | numeric<br/>100 |
-| autoTrackPageVisitTime | If true, on a pageview, the _previous_ instrumented page's view time is tracked and sent as telemetry and a new timer is started for the current pageview. It's sent as a custom metric named `PageVisitTime` in `milliseconds` and is calculated via the Date [now()](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Date/now) function (if available) and falls back to (new Date()).[getTime()](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Date/getTime) if now() is unavailable (IE8 or less). Default is false. | boolean<br/>false |
+| enableDebug | If true, *internal* debugging data is thrown as an exception *instead* of being logged, regardless of SDK logging settings. Default is false. <br>*Note:* Enabling this setting will result in dropped telemetry whenever an internal error occurs. This setting can be useful for quickly identifying issues with your configuration or usage of the SDK. If you don't want to lose telemetry while debugging, consider using `loggingLevelConsole` or `loggingLevelTelemetry` instead of `enableDebug`. | boolean<br/>false |
+| loggingLevelConsole | Logs *internal* Application Insights errors to console. <br>0: off, <br>1: Critical errors only, <br>2: Everything (errors & warnings) | numeric<br/> 0 |
+| loggingLevelTelemetry | Sends *internal* Application Insights errors as telemetry. <br>0: off, <br>1: Critical errors only, <br>2: Everything (errors & warnings) | numeric<br/> 1 |
+| diagnosticLogInterval | (internal) Polling interval (in ms) for internal logging queue. | numeric<br/> 10000 |
+| samplingPercentage | Percentage of events that will be sent. Default is 100, meaning all events are sent. Set this option if you want to preserve your data cap for large-scale applications. | numeric<br/>100 |
+| autoTrackPageVisitTime | If true, on a pageview, the _previous_ instrumented page's view time is tracked and sent as telemetry and a new timer is started for the current pageview. It's sent as a custom metric named `PageVisitTime` in `milliseconds` and is calculated via the Date [now()](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Date/now) function (if available) and falls back to (new Date()).[getTime()](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Date/getTime) if now() is unavailable (Internet Explorer 8 or less). Default is false. | boolean<br/>false |
| disableAjaxTracking | If true, Ajax calls aren't autocollected. | boolean<br/> false |
| disableFetchTracking | If true, Fetch requests aren't autocollected. | boolean<br/>true |
-| overridePageViewDuration | If true, default behavior of trackPageView is changed to record end of page view duration interval when trackPageView is called. If false and no custom duration is provided to trackPageView, the page view performance is calculated using the navigation timing API. |boolean<br/>
-| maxAjaxCallsPerView | Default 500 - controls how many Ajax calls will be monitored per page view. Set to -1 to monitor all (unlimited) Ajax calls on the page. | numeric<br/> 500 |
+| overridePageViewDuration | If true, default behavior of trackPageView is changed to record end of page view duration interval when trackPageView is called. If false and no custom duration is provided to trackPageView, the page view performance is calculated by using the navigation timing API. |boolean<br/>
+| maxAjaxCallsPerView | Controls how many Ajax calls will be monitored per page view. Default is 500. Set to -1 to monitor all (unlimited) Ajax calls on the page. | numeric<br/> 500 |
| disableDataLossAnalysis | If false, internal telemetry sender buffers will be checked at startup for items not yet sent. | boolean<br/> true |
-| disable&#8203;CorrelationHeaders | If false, the SDK will add two headers ('Request-Id' and 'Request-Context') to all dependency requests to correlate them with corresponding requests on the server side. | boolean<br/> false |
-| correlationHeader&#8203;ExcludedDomains | Disable correlation headers for specific domains | string[]<br/>undefined |
-| correlationHeader&#8203;ExcludePatterns | Disable correlation headers using regular expressions | regex[]<br/>undefined |
-| correlationHeader&#8203;Domains | Enable correlation headers for specific domains | string[]<br/>undefined |
-| disableFlush&#8203;OnBeforeUnload | If true, flush method won't be called when onBeforeUnload event triggers | boolean<br/> false |
-| enableSessionStorageBuffer | If true, the buffer with all unsent telemetry is stored in session storage. The buffer is restored on page load | boolean<br />true |
-| cookieCfg | Defaults to cookie usage enabled see [ICookieCfgConfig](#icookiemgrconfig) settings for full defaults. | [ICookieCfgConfig](#icookiemgrconfig)<br>(Since 2.6.0)<br/>undefined |
-| ~~isCookieUseDisabled~~<br>disableCookiesUsage | If true, the SDK won't store or read any data from cookies. Disables the User and Session cookies and renders the usage blades and experiences useless. isCookieUseDisable is deprecated in favor of disableCookiesUsage, when both are provided disableCookiesUsage takes precedence.<br>(Since v2.6.0) And if `cookieCfg.enabled` is also defined it will take precedence over these values, Cookie usage can be re-enabled after initialization via the core.getCookieMgr().setEnabled(true). | alias for [`cookieCfg.enabled`](#icookiemgrconfig)<br>false |
-| cookieDomain | Custom cookie domain. This option is helpful if you want to share Application Insights cookies across subdomains.<br>(Since v2.6.0) If `cookieCfg.domain` is defined it will take precedence over this value. | alias for [`cookieCfg.domain`](#icookiemgrconfig)<br>null |
-| cookiePath | Custom cookie path. This option is helpful if you want to share Application Insights cookies behind an application gateway.<br>If `cookieCfg.path` is defined it will take precedence over this value. | alias for [`cookieCfg.path`](#icookiemgrconfig)<br>(Since 2.6.0)<br/>null |
-| isRetryDisabled | If false, retry on 206 (partial success), 408 (timeout), 429 (too many requests), 500 (internal server error), 503 (service unavailable), and 0 (offline, only if detected) | boolean<br/>false |
+| disable&#8203;CorrelationHeaders | If false, the SDK will add two headers (`Request-Id` and `Request-Context`) to all dependency requests to correlate them with corresponding requests on the server side. | boolean<br/> false |
+| correlationHeader&#8203;ExcludedDomains | Disable correlation headers for specific domains. | string[]<br/>undefined |
+| correlationHeader&#8203;ExcludePatterns | Disable correlation headers by using regular expressions. | regex[]<br/>undefined |
+| correlationHeader&#8203;Domains | Enable correlation headers for specific domains. | string[]<br/>undefined |
+| disableFlush&#8203;OnBeforeUnload | If true, flush method won't be called when `onBeforeUnload` event triggers. | boolean<br/> false |
+| enableSessionStorageBuffer | If true, the buffer with all unsent telemetry is stored in session storage. The buffer is restored on page load. | boolean<br />true |
+| cookieCfg | Defaults to cookie usage enabled. For full defaults, see [ICookieCfgConfig](#icookiemgrconfig) settings. | [ICookieCfgConfig](#icookiemgrconfig)<br>(Since 2.6.0)<br/>undefined |
+| ~~isCookieUseDisabled~~<br>disableCookiesUsage | If true, the SDK won't store or read any data from cookies. Disables the User and Session cookies and renders the usage blades and experiences useless. `isCookieUseDisable` is deprecated in favor of `disableCookiesUsage`. When both are provided, `disableCookiesUsage` takes precedence.<br>(Since v2.6.0) And if `cookieCfg.enabled` is also defined, it will take precedence over these values. Cookie usage can be re-enabled after initialization via `core.getCookieMgr().setEnabled(true)`. | Alias for [`cookieCfg.enabled`](#icookiemgrconfig)<br>false |
+| cookieDomain | Custom cookie domain. This option is helpful if you want to share Application Insights cookies across subdomains.<br>(Since v2.6.0) If `cookieCfg.domain` is defined, it will take precedence over this value. | Alias for [`cookieCfg.domain`](#icookiemgrconfig)<br>null |
+| cookiePath | Custom cookie path. This option is helpful if you want to share Application Insights cookies behind an application gateway.<br>If `cookieCfg.path` is defined, it will take precedence over this value. | Alias for [`cookieCfg.path`](#icookiemgrconfig)<br>(Since 2.6.0)<br/>null |
+| isRetryDisabled | If false, retry on 206 (partial success), 408 (timeout), 429 (too many requests), 500 (internal server error), 503 (service unavailable), and 0 (offline, only if detected). | boolean<br/>false |
| isStorageUseDisabled | If true, the SDK won't store or read any data from local and session storage. | boolean<br/> false |
-| isBeaconApiDisabled | If false, the SDK will send all telemetry using the [Beacon API](https://www.w3.org/TR/beacon) | boolean<br/>true |
-| onunloadDisableBeacon | When tab is closed, the SDK will send all remaining telemetry using the [Beacon API](https://www.w3.org/TR/beacon) | boolean<br/> false |
-| sdkExtension | Sets the sdk extension name. Only alphabetic characters are allowed. The extension name is added as a prefix to the 'ai.internal.sdkVersion' tag (for example, 'ext_javascript:2.0.0'). | string<br/> null |
-| isBrowserLink&#8203;TrackingEnabled | If true, the SDK will track all [Browser Link](/aspnet/core/client-side/using-browserlink) requests. | boolean<br/>false |
-| appId | AppId is used for the correlation between AJAX dependencies happening on the client-side with the server-side requests. When Beacon API is enabled, it canΓÇÖt be used automatically, but can be set manually in the configuration. |string<br/> null |
-| enable&#8203;CorsCorrelation | If true, the SDK will add two headers ('Request-Id' and 'Request-Context') to all CORS requests to correlate outgoing AJAX dependencies with corresponding requests on the server side. | boolean<br/>false |
+| isBeaconApiDisabled | If false, the SDK will send all telemetry by using the [Beacon API](https://www.w3.org/TR/beacon). | boolean<br/>true |
+| onunloadDisableBeacon | When tab is closed, the SDK will send all remaining telemetry by using the [Beacon API](https://www.w3.org/TR/beacon). | boolean<br/> false |
+| sdkExtension | Sets the SDK extension name. Only alphabetic characters are allowed. The extension name is added as a prefix to the `ai.internal.sdkVersion` tag (for example, `ext_javascript:2.0.0`). | string<br/> null |
+| isBrowserLink&#8203;TrackingEnabled | If true, the SDK will track all [browser link](/aspnet/core/client-side/using-browserlink) requests. | boolean<br/>false |
+| appId | AppId is used for the correlation between AJAX dependencies happening on the client side and the server-side requests. When the Beacon API is enabled, it can't be used automatically but can be set manually in the configuration. |string<br/> null |
+| enable&#8203;CorsCorrelation | If true, the SDK will add two headers (`Request-Id` and `Request-Context`) to all CORS requests to correlate outgoing AJAX dependencies with corresponding requests on the server side. | boolean<br/>false |
| namePrefix | An optional value that will be used as name postfix for localStorage and cookie name. | string<br/>undefined |
-| enable&#8203;AutoRoute&#8203;Tracking | Automatically track route changes in Single Page Applications (SPA). If true, each route change will send a new Pageview to Application Insights. Hash route changes (`example.com/foo#bar`) are also recorded as new page views.| boolean<br/>false |
-| enableRequest&#8203;HeaderTracking | If true, AJAX & Fetch request headers is tracked. | boolean<br/> false |
-| enableResponse&#8203;HeaderTracking | If true, AJAX & Fetch request's response headers is tracked. | boolean<br/> false |
-| distributedTracingMode | Sets the distributed tracing mode. If AI_AND_W3C mode or W3C mode is set, W3C trace context headers (traceparent/tracestate) will be generated and included in all outgoing requests. AI_AND_W3C is provided for back-compatibility with any legacy Application Insights instrumented services. See example [here](./correlation.md#enable-w3c-distributed-tracing-support-for-web-apps).| `DistributedTracingModes`or<br/>numeric<br/>(Since v2.6.0) `DistributedTracingModes.AI_AND_W3C`<br />(v2.5.11 or earlier) `DistributedTracingModes.AI` |
+| enable&#8203;AutoRoute&#8203;Tracking | Automatically track route changes in single-page applications. If true, each route change will send a new page view to Application Insights. Hash route changes (`example.com/foo#bar`) are also recorded as new page views.| boolean<br/>false |
+| enableRequest&#8203;HeaderTracking | If true, AJAX and Fetch request headers are tracked. | boolean<br/> false |
+| enableResponse&#8203;HeaderTracking | If true, AJAX and Fetch request response headers are tracked. | boolean<br/> false |
+| distributedTracingMode | Sets the distributed tracing mode. If AI_AND_W3C mode or W3C mode is set, W3C trace context headers (traceparent/tracestate) will be generated and included in all outgoing requests. AI_AND_W3C is provided for backward compatibility with any legacy Application Insights instrumented services. For an example, see [Enable W3C distributed tracing support for web apps](./correlation.md#enable-w3c-distributed-tracing-support-for-web-apps).| `DistributedTracingModes`or<br/>numeric<br/>(Since v2.6.0) `DistributedTracingModes.AI_AND_W3C`<br />(v2.5.11 or earlier) `DistributedTracingModes.AI` |
| enable&#8203;AjaxErrorStatusText | If true, include response error data text in dependency event on failed AJAX requests. | boolean<br/> false |
| enable&#8203;AjaxPerfTracking | Flag to enable looking up and including more browser window.performance timings in the reported `ajax` (XHR and fetch) metrics. | boolean<br/> false |
-| maxAjaxPerf&#8203;LookupAttempts | The maximum number of times to look for the window.performance timings (if available). This option is sometimes required as not all browsers populate the window.performance before reporting the end of the XHR request and for fetch requests this is added after its complete.| numeric<br/> 3 |
-| ajaxPerfLookupDelay | The amount of time to wait before reattempting to find the window.performance timings for an `ajax` request, time is in milliseconds and is passed directly to setTimeout(). | numeric<br/> 25 ms |
-| enableUnhandled&#8203;PromiseRejection&#8203;Tracking | If true, unhandled promise rejections will be autocollected and reported as a JavaScript error. When disableExceptionTracking is true (don't track exceptions), the config value will be ignored and unhandled promise rejections won't be reported. | boolean<br/> false |
-| enablePerfMgr | When enabled (true) this will create local perfEvents for code that has been instrumented to emit perfEvents (via the doPerf() helper). This option can be used to identify performance issues within the SDK based on your usage or optionally within your own instrumented code. [More details are available by the basic documentation](https://github.com/microsoft/ApplicationInsights-JS/blob/master/docs/PerformanceMonitoring.md). Since v2.5.7 | boolean<br/>false |
-| perfEvtsSendAll | When _enablePerfMgr_ is enabled and the [IPerfManager](https://github.com/microsoft/ApplicationInsights-JS/blob/master/shared/AppInsightsCore/src/JavaScriptSDK.Interfaces/IPerfManager.ts) fires a [INotificationManager](https://github.com/microsoft/ApplicationInsights-JS/blob/master/shared/AppInsightsCore/src/JavaScriptSDK.Interfaces/INotificationManager.ts).perfEvent() this flag determines whether an event is fired (and sent to all listeners) for all events (true) or only for 'parent' events (false &lt;default&gt;).<br />A parent [IPerfEvent](https://github.com/microsoft/ApplicationInsights-JS/blob/master/shared/AppInsightsCore/src/JavaScriptSDK.Interfaces/IPerfEvent.ts) is an event where no other IPerfEvent is still running at the point of this event being created and its _parent_ property isn't null or undefined. Since v2.5.7 | boolean<br />false |
-| idLength | The default length used to generate new random session and user ID values. Defaults to 22, previous default value was 5 (v2.5.8 or less), if you need to keep the previous maximum length you should set this value to 5. | numeric<br />22 |
+| maxAjaxPerf&#8203;LookupAttempts | The maximum number of times to look for the window.performance timings, if available. This option is sometimes required because not all browsers populate the window.performance before reporting the end of the XHR request. For fetch requests, this is added after it's complete.| numeric<br/> 3 |
+| ajaxPerfLookupDelay | The amount of time to wait before reattempting to find the window.performance timings for an `ajax` request. Time is in milliseconds and is passed directly to setTimeout(). | numeric<br/> 25 ms |
+| enableUnhandled&#8203;PromiseRejection&#8203;Tracking | If true, unhandled promise rejections will be autocollected and reported as a JavaScript error. When `disableExceptionTracking` is true (don't track exceptions), the config value will be ignored, and unhandled promise rejections won't be reported. | boolean<br/> false |
+| enablePerfMgr | When enabled (true), this will create local perfEvents for code that has been instrumented to emit perfEvents (via the doPerf() helper). This option can be used to identify performance issues within the SDK based on your usage or optionally within your own instrumented code. [More information is available in the basic documentation](https://github.com/microsoft/ApplicationInsights-JS/blob/master/docs/PerformanceMonitoring.md). Since v2.5.7 | boolean<br/>false |
+| perfEvtsSendAll | When _enablePerfMgr_ is enabled and the [IPerfManager](https://github.com/microsoft/ApplicationInsights-JS/blob/master/shared/AppInsightsCore/src/JavaScriptSDK.Interfaces/IPerfManager.ts) fires a [INotificationManager](https://github.com/microsoft/ApplicationInsights-JS/blob/master/shared/AppInsightsCore/src/JavaScriptSDK.Interfaces/INotificationManager.ts).perfEvent(), this flag determines whether an event is fired (and sent to all listeners) for all events (true) or only for parent events (false &lt;default&gt;).<br />A parent [IPerfEvent](https://github.com/microsoft/ApplicationInsights-JS/blob/master/shared/AppInsightsCore/src/JavaScriptSDK.Interfaces/IPerfEvent.ts) is an event where no other IPerfEvent is still running at the point of this event being created, and its _parent_ property isn't null or undefined. Since v2.5.7 | boolean<br />false |
+| idLength | The default length used to generate new random session and user ID values. Defaults to 22. The previous default value was 5 (v2.5.8 or less). If you need to keep the previous maximum length, you should set this value to 5. | numeric<br />22 |
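
To show how a few of these fields fit together, the following sketch passes them at initialization. The specific values are arbitrary examples for illustration, not recommendations, and the connection string is a placeholder.

```js
import { ApplicationInsights } from '@microsoft/applicationinsights-web';

const appInsights = new ApplicationInsights({
  config: {
    connectionString: 'YOUR_CONNECTION_STRING', // placeholder
    samplingPercentage: 50,       // send roughly half of the events to preserve a data cap
    disableFetchTracking: false,  // also autocollect Fetch dependencies
    loggingLevelConsole: 1,       // log critical internal SDK errors to the browser console
    maxBatchInterval: 5000        // flush batched telemetry every 5 seconds
  }
});
appInsights.loadAppInsights();
```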
-## Cookie Handling
+## Cookie handling
From version 2.6.0, cookie management is now available directly from the instance and can be disabled and re-enabled after initialization. If disabled during initialization via the `disableCookiesUsage` or `cookieCfg.enabled` configurations, you can now re-enable via the [ICookieMgr](https://github.com/microsoft/ApplicationInsights-JS/blob/master/shared/AppInsightsCore/src/JavaScriptSDK.Interfaces/ICookieMgr.ts) `setEnabled` function.
-The instance based cookie management also replaces the previous CoreUtils global functions of `disableCookies()`, `setCookie(...)`, `getCookie(...)` and `deleteCookie(...)`. And to benefit from the tree-shaking enhancements also introduced as part of version 2.6.0 you should no longer uses the global functions.
+The instance-based cookie management also replaces the previous CoreUtils global functions of `disableCookies()`, `setCookie(...)`, `getCookie(...)` and `deleteCookie(...)`. To benefit from the tree-shaking enhancements also introduced as part of version 2.6.0, you should no longer use the global functions.
### ICookieMgrConfig
-Cookie Configuration for instance-based cookie management added in version 2.6.0.
+Cookie configuration for instance-based cookie management added in version 2.6.0.
-| Name | Description | Type and Default |
+| Name | Description | Type and default |
|------|-------------|------------------|
-| enabled | A boolean that indicates whether the use of cookies by the SDK is enabled by the current instance. If false, the instance of the SDK initialized by this configuration won't store or read any data from cookies | boolean<br/> true |
-| domain | Custom cookie domain, which is helpful if you want to share Application Insights cookies across subdomains. If not provided uses the value from root `cookieDomain` value. | string<br/>null |
-| path | Specifies the path to use for the cookie, if not provided it will use any value from the root `cookiePath` value. | string <br/> / |
-| getCookie | Function to fetch the named cookie value, if not provided it will use the internal cookie parsing / caching. | `(name: string) => string` <br/> null |
-| setCookie | Function to set the named cookie with the specified value, only called when adding or updating a cookie. | `(name: string, value: string) => void` <br/> null |
-| delCookie | Function to delete the named cookie with the specified value, separated from setCookie to avoid the need to parse the value to determine whether the cookie is being added or removed. If not provided it will use the internal cookie parsing / caching. | `(name: string, value: string) => void` <br/> null |
+| enabled | A boolean that indicates whether the use of cookies by the SDK is enabled by the current instance. If false, the instance of the SDK initialized by this configuration won't store or read any data from cookies. | boolean<br/> true |
+| domain | Custom cookie domain, which is helpful if you want to share Application Insights cookies across subdomains. If not provided, uses the value from root `cookieDomain` value. | string<br/>null |
+| path | Specifies the path to use for the cookie. If not provided, it will use any value from the root `cookiePath` value. | string <br/> / |
+| getCookie | Function to fetch the named cookie value. If not provided, it will use the internal cookie parsing/caching. | `(name: string) => string` <br/> null |
+| setCookie | Function to set the named cookie with the specified value. Only called when adding or updating a cookie. | `(name: string, value: string) => void` <br/> null |
+| delCookie | Function to delete the named cookie with the specified value, separated from setCookie to avoid the need to parse the value to determine whether the cookie is being added or removed. If not provided, it will use the internal cookie parsing/caching. | `(name: string, value: string) => void` <br/> null |
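
As an illustration of these settings, the following sketch scopes Application Insights cookies to a shared parent domain. The domain and path values, and the connection string, are placeholders.

```js
import { ApplicationInsights } from '@microsoft/applicationinsights-web';

const appInsights = new ApplicationInsights({
  config: {
    connectionString: 'YOUR_CONNECTION_STRING', // placeholder
    cookieCfg: {
      enabled: true,          // allow the SDK to use cookies
      domain: 'example.com',  // placeholder: share cookies across subdomains of example.com
      path: '/'               // cookie path
    }
  }
});
appInsights.loadAppInsights();
```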
-### Simplified Usage of new instance Cookie Manager
+### Simplified usage of new instance Cookie Manager
- appInsights.[getCookieMgr()](https://github.com/microsoft/ApplicationInsights-JS/blob/master/shared/AppInsightsCore/src/JavaScriptSDK.Interfaces/ICookieMgr.ts).setEnabled(true/false);
- appInsights.[getCookieMgr()](https://github.com/microsoft/ApplicationInsights-JS/blob/master/shared/AppInsightsCore/src/JavaScriptSDK.Interfaces/ICookieMgr.ts).set("MyCookie", "the%20encoded%20value");
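Putting those calls together, a minimal sketch of runtime cookie management looks like the following. It assumes `appInsights` is an already initialized instance; the cookie name and value are arbitrary examples, and the `get` and `del` calls follow the `ICookieMgr` interface linked above.

```js
// Assumes appInsights is an initialized ApplicationInsights instance (see the setup sections earlier).
const cookieMgr = appInsights.getCookieMgr();

cookieMgr.setEnabled(true);                              // re-enable cookie usage after initialization
cookieMgr.set('exampleCookie', 'the%20encoded%20value'); // write a cookie (example name and value)
const stored = cookieMgr.get('exampleCookie');           // read it back
cookieMgr.del('exampleCookie');                          // remove it again
```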
## Enable time-on-page tracking
-By setting `autoTrackPageVisitTime: true`, the time in milliseconds a user spends on each page is tracked. On each new PageView, the duration the user spent on the *previous* page is sent as a [custom metric](../essentials/metrics-custom-overview.md) named `PageVisitTime`. This custom metric is viewable in the [Metrics Explorer](../essentials/metrics-getting-started.md) as a "log-based metric".
+By setting `autoTrackPageVisitTime: true`, the time in milliseconds a user spends on each page is tracked. On each new page view, the duration the user spent on the *previous* page is sent as a [custom metric](../essentials/metrics-custom-overview.md) named `PageVisitTime`. This custom metric is viewable in the [Metrics Explorer](../essentials/metrics-getting-started.md) as a log-based metric.
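
For example, enabling this behavior is a single configuration flag. In this sketch, the connection string is a placeholder.

```js
import { ApplicationInsights } from '@microsoft/applicationinsights-web';

const appInsights = new ApplicationInsights({
  config: {
    connectionString: 'YOUR_CONNECTION_STRING', // placeholder
    autoTrackPageVisitTime: true // send the PageVisitTime custom metric for the previous page on each new page view
  }
});
appInsights.loadAppInsights();
```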
-## Enable Distributed Tracing
+## Enable distributed tracing
Correlation generates and sends data that enables distributed tracing and powers the [application map](../app/app-map.md), [end-to-end transaction view](../app/app-map.md#go-to-details), and other diagnostic tools.
-In JavaScript correlation is turned off by default in order to minimize the telemetry we send by default. The following examples show standard configuration options for enabling correlation.
+In JavaScript, correlation is turned off by default to minimize the telemetry we send by default. The following examples show standard configuration options for enabling correlation.
-The following sample code shows the configurations required to enable correlation:
+The following sample code shows the configurations required to enable correlation.
# [Snippet](#tab/snippet)
const appInsights = new ApplicationInsights({ config: { // Application Insights
> [!NOTE]
-> There are two distributed tracing modes/protocols - AI (Classic) and [W3C TraceContext](https://www.w3.org/TR/trace-context/) (New). In version 2.6.0 and later, they are _both_ enabled by default. For older versions, users need to [explicitly opt-in to WC3 mode](../app/correlation.md#enable-w3c-distributed-tracing-support-for-web-apps).
+> There are two distributed tracing modes/protocols: AI (Classic) and [W3C TraceContext](https://www.w3.org/TR/trace-context/) (New). In version 2.6.0 and later, they are _both_ enabled by default. For older versions, users need to [explicitly opt in to W3C mode](../app/correlation.md#enable-w3c-distributed-tracing-support-for-web-apps).
### Route tracking
-By default, this SDK will **not** handle state-based route changing that occurs in single page applications. To enable automatic route change tracking for your single page application, you can add `enableAutoRouteTracking: true` to your setup configuration.
+By default, this SDK will *not* handle state-based route changing that occurs in single-page applications. To enable automatic route change tracking for your single-page application, you can add `enableAutoRouteTracking: true` to your setup configuration.
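
A sketch of that setup configuration, with a placeholder connection string:

```js
import { ApplicationInsights } from '@microsoft/applicationinsights-web';

const appInsights = new ApplicationInsights({
  config: {
    connectionString: 'YOUR_CONNECTION_STRING', // placeholder
    enableAutoRouteTracking: true // send a new page view on each route change, including hash changes
  }
});
appInsights.loadAppInsights();
```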
-### Single Page Applications
+### Single-page applications
-For Single Page Applications, reference plugin documentation for plugin specific guidance.
+For single-page applications, reference plug-in documentation for guidance specific to plug-ins.
-| Plugins |
+| Plug-ins |
|---|
| [React](javascript-react-plugin.md#enable-correlation) |
| [React Native](javascript-react-native-plugin.md#enable-correlation) |
| [Angular](javascript-angular-plugin.md#enable-correlation) |
| [Click Analytics Auto-collection](javascript-click-analytics-plugin.md#enable-correlation) |
-### Advanced Correlation
+### Advanced correlation
-When a page is first loading and the SDK hasn't fully initialized, we're unable to generate the Operation ID for the first request. As a result, distributed tracing is incomplete until the SDK fully initializes.
-To remedy this problem, you can include dynamic JavaScript on the returned HTML page. The SDK will use a callback function during initialization to retroactively pull the Operation ID from the `serverside` and populate the `clientside` with it.
+When a page is first loading and the SDK hasn't fully initialized, we're unable to generate the operation ID for the first request. As a result, distributed tracing is incomplete until the SDK fully initializes.
+To remedy this problem, you can include dynamic JavaScript on the returned HTML page. The SDK will use a callback function during initialization to retroactively pull the operation ID from the `serverside` and populate the `clientside` with it.
# [Snippet](#tab/snippet)
-Here's a sample of how to create a dynamic JS using Razor:
+Here's a sample of how to create dynamic JavaScript by using Razor:
```C# <script>
}});
</script>
```

# [npm](#tab/npm)

```js
appInsights.context.telemetryContext.parentID = serverId;
appInsights.loadAppInsights(); ```
-When using an npm based configuration, a location must be determined to store the Operation ID to enable access for the SDK initialization bundle to `appInsights.context.telemetryContext.parentID` so it can populate it before the first page view event is sent.
+When you use an npm-based configuration, a location must be determined to store the operation ID to enable access for the SDK initialization bundle to `appInsights.context.telemetryContext.parentID` so it can populate it before the first page view event is sent.
> [!CAUTION]
->The application UX is not yet optimized to show these "first hop" advanced distributed tracing scenarios. However, the data will be available in the requests table for query and diagnostics.
+>The application UX is not yet optimized to show these "first hop" advanced distributed tracing scenarios. The data will be available in the requests table for query and diagnostics.
## Extensions
## Explore browser/client-side data
-Browser/client-side data can be viewed by going to **Metrics** and adding individual metrics you're interested in:
+Browser/client-side data can be viewed by going to **Metrics** and adding individual metrics you're interested in.
-![Screenshot of the Metrics page in Application Insights showing graphic displays of metrics data for a web application.](./media/javascript/page-view-load-time.png)
+![Screenshot that shows the Metrics page in Application Insights showing graphic displays of metrics data for a web application.](./media/javascript/page-view-load-time.png)
-You can also view your data from the JavaScript SDK via the Browser experience in the portal.
+You can also view your data from the JavaScript SDK via the browser experience in the portal.
-Select **Browser** and then choose **Failures** or **Performance**.
+Select **Browser**, and then select **Failures** or **Performance**.
-![Screenshot of the Browser page in Application Insights showing how to add Browser Failures or Browser Performance to the metrics that you can view for your web application.](./media/javascript/browser.png)
+![Screenshot that shows the Browser page in Application Insights showing how to add Browser Failures or Browser Performance to the metrics that you can view for your web application.](./media/javascript/browser.png)
### Performance
-![Screenshot of the Performance page in Application Insights showing graphic displays of Operations metrics for a web application.](./media/javascript/performance-operations.png)
+![Screenshot that shows the Performance page in Application Insights showing graphic displays of Operations metrics for a web application.](./media/javascript/performance-operations.png)
### Dependencies
-![Screenshot of the Performance page in Application Insights showing graphic displays of Dependency metrics for a web application.](./media/javascript/performance-dependencies.png)
+![Screenshot that shows the Performance page in Application Insights showing graphic displays of Dependency metrics for a web application.](./media/javascript/performance-dependencies.png)
### Analytics
-To query your telemetry collected by the JavaScript SDK, select the **View in Logs (Analytics)** button. By adding a `where` statement of `client_Type == "Browser"`, you'll only see data from the JavaScript SDK and any server-side telemetry collected by other SDKs will be excluded.
-
+To query your telemetry collected by the JavaScript SDK, select the **View in Logs (Analytics)** button. By adding a `where` statement of `client_Type == "Browser"`, you'll only see data from the JavaScript SDK. Any server-side telemetry collected by other SDKs will be excluded.
```kusto
// average pageView duration by name
let timeGrain=5m;
dataset
| render timechart
```
-### Source Map Support
+### Source map support
The minified callstack of your exception telemetry can be unminified in the Azure portal. All existing integrations on the Exception Details panel will work with the newly unminified callstack.
-#### Link to Blob storage account
+#### Link to Blob Storage account
-You can link your Application Insights resource to your own Azure Blob Storage container to automatically unminify call stacks. To get started, see [automatic source map support](./source-map-support.md).
+You can link your Application Insights resource to your own Azure Blob Storage container to automatically unminify call stacks. To get started, see [Automatic source map support](./source-map-support.md).
### Drag and drop
-1. Select an Exception Telemetry item in the Azure portal to view its "End-to-end transaction details"
-2. Identify which source maps correspond to this call stack. The source map must match a stack frame's source file, but suffixed with `.map`
-3. Drag and drop the source maps onto the call stack in the Azure portal
-![An animated image showing how to drag and drop source map files from a build folder into the Call Stack window in the Azure portal.](https://i.imgur.com/Efue9nU.gif)
+1. Select an Exception Telemetry item in the Azure portal to view its "end-to-end transaction details."
+1. Identify which source maps correspond to this call stack. The source map must match a stack frame's source file but be suffixed with `.map`.
+1. Drag the source maps onto the call stack in the Azure portal.
+
+ ![An animated image showing how to drag source map files from a build folder into the Call Stack window in the Azure portal.](https://i.imgur.com/Efue9nU.gif)
-### Application Insights Web Basic
+### Application Insights web basic
+
+For a lightweight experience, you can instead install the basic version of Application Insights:
-For a lightweight experience, you can instead install the basic version of Application Insights
```
npm i --save @microsoft/applicationinsights-web-basic
```
-This version comes with the bare minimum number of features and functionalities and relies on you to build it up as you see fit. For example, it performs no autocollection (uncaught exceptions, AJAX, etc.). The APIs to send certain telemetry types, like `trackTrace`, `trackException`, etc., aren't included in this version, so you'll need to provide your own wrapper. The only API that is available is `track`. A [sample](https://github.com/Azure-Samples/applicationinsights-web-sample1/blob/master/testlightsku.html) is located here.
+
+This version comes with the bare minimum number of features and functionalities and relies on you to build it up as you see fit. For example, it performs no autocollection like uncaught exceptions and AJAX. The APIs to send certain telemetry types, like `trackTrace` and `trackException`, aren't included in this version. For this reason, you'll need to provide your own wrapper. The only API that's available is `track`. A [sample](https://github.com/Azure-Samples/applicationinsights-web-sample1/blob/master/testlightsku.html) is located here.
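
The shape of a `track` call with the basic version might look roughly like this sketch. The envelope fields and the way the configuration is passed follow the linked sample rather than a guaranteed contract, so treat the field names and values as assumptions.

```js
import { ApplicationInsights } from '@microsoft/applicationinsights-web-basic';

// Assumption based on the linked sample: the basic SDK takes its configuration directly
// (no wrapping config object) and exposes only track().
const appInsights = new ApplicationInsights({
  connectionString: 'YOUR_CONNECTION_STRING' // placeholder
});

// Hypothetical custom event expressed as a raw telemetry item.
appInsights.track({
  name: 'ExampleEvent',
  baseType: 'EventData',
  baseData: { name: 'ExampleEvent', properties: { page: 'home' } }
});
```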
## Examples
-For runnable examples, see [Application Insights JavaScript SDK Samples](https://github.com/Azure-Samples?q=applicationinsights-js-demo).
+For runnable examples, see [Application Insights JavaScript SDK samples](https://github.com/Azure-Samples?q=applicationinsights-js-demo).
-## Upgrading from the old version of Application Insights
+## Upgrade from the old version of Application Insights
Breaking changes in the SDK V2 version:

- To allow for better API signatures, some of the API calls, such as trackPageView and trackException, have been updated. Running in Internet Explorer 8 and earlier versions of the browser isn't supported.
- The telemetry envelope has field name and structure changes due to data schema updates.
- Moved `context.operation` to `context.telemetryTrace`. Some fields were also changed (`operation.id` --> `telemetryTrace.traceID`).
- - To manually refresh the current pageview ID (for example, in SPA apps), use `appInsights.properties.context.telemetryTrace.traceID = Microsoft.ApplicationInsights.Telemetry.Util.generateW3CId()`.
- > [!NOTE]
- > To keep the trace ID unique, where you previously used `Util.newId()`, now use `Util.generateW3CId()`. Both ultimately end up being the operation ID.
+
+ To manually refresh the current pageview ID, for example, in single-page applications, use `appInsights.properties.context.telemetryTrace.traceID = Microsoft.ApplicationInsights.Telemetry.Util.generateW3CId()`.
+
+ > [!NOTE]
+ > To keep the trace ID unique, where you previously used `Util.newId()`, now use `Util.generateW3CId()`. Both ultimately end up being the operation ID.
If you're using the current Application Insights PRODUCTION SDK (1.0.20) and want to see if the new SDK works in runtime, update the URL depending on your current SDK loading scenario.
}); ```
-Test in internal environment to verify monitoring telemetry is working as expected. If all works, update your API signatures appropriately to SDK V2 version and deploy in your production environments.
+Test in an internal environment to verify the monitoring telemetry is working as expected. If all works, update your API signatures appropriately to SDK v2 and deploy in your production environments.
## SDK performance/overhead
-At just 36 KB gzipped, and taking only ~15 ms to initialize, Application Insights adds a negligible amount of loadtime to your website. Minimal components of the library are quickly loaded when using this snippet. In the meantime, the full script is downloaded in the background.
+At just 36 KB gzipped, and taking only ~15 ms to initialize, Application Insights adds a negligible amount of load time to your website. Minimal components of the library are quickly loaded when you use this snippet. In the meantime, the full script is downloaded in the background.
-While the script is downloading from the CDN, all tracking of your page is queued. Once the downloaded script finishes asynchronously initializing, all events that were queued are tracked. As a result, you won't lose any telemetry during the entire life cycle of your page. This setup process provides your page with a seamless analytics system, invisible to your users.
+While the script is downloading from the CDN, all tracking of your page is queued. After the downloaded script finishes asynchronously initializing, all events that were queued are tracked. As a result, you won't lose any telemetry during the entire life cycle of your page. This setup process provides your page with a seamless analytics system that's invisible to your users.
> Summary:
>
> - ![npm version](https://badge.fury.io/js/%40microsoft%2Fapplicationinsights-web.svg)
| | | | | |
|---|---|---|---|---|
| Chrome Latest ✔ | Firefox Latest ✔ | IE 9+ & Microsoft Edge ✔<br>IE 8- Compatible | Opera Latest ✔ | Safari Latest ✔ |
-## ES3/IE8 Compatibility
+## ES3/Internet Explorer 8 compatibility
-As such we need to ensure that this SDK continues to "work" and doesn't break the JS execution when loaded by an older browser. It would be ideal to not support older browsers, but numerous large customers canΓÇÖt control which browser their end users choose to use.
+We need to ensure that this SDK continues to "work" and doesn't break the JavaScript execution when it's loaded by an older browser. It would be ideal to not support older browsers, but numerous large customers can't control which browser their users choose to use.
-This statement does NOT mean that we'll only support the lowest common set of features. We need to maintain ES3 code compatibility and when adding new features, they'll need to be added in a manner that wouldn't break ES3 JavaScript parsing and added as an optional feature.
+This statement does *not* mean that we'll only support the lowest common set of features. We need to maintain ES3 code compatibility. New features will need to be added in a manner that wouldn't break ES3 JavaScript parsing and added as an optional feature.
-[See GitHub for full details on IE8 support](https://github.com/Microsoft/ApplicationInsights-JS#es3ie8-compatibility)
+See GitHub for full details on [Internet Explorer 8 support](https://github.com/Microsoft/ApplicationInsights-JS#es3ie8-compatibility).
## Open-source SDK
-The Application Insights JavaScript SDK is open-source to view the source code or to contribute to the project visit the [official GitHub repository](https://github.com/Microsoft/ApplicationInsights-JS).
+The Application Insights JavaScript SDK is open source. To view the source code or to contribute to the project, see the [official GitHub repository](https://github.com/Microsoft/ApplicationInsights-JS).
For the latest updates and bug fixes, [consult the release notes](./release-notes.md).

## Troubleshooting
+This section helps you troubleshoot common issues.
+ ### I'm getting an error message of Failed to get Request-Context correlation header as it may be not included in the response or not accessible
-The `correlationHeaderExcludedDomains` configuration property is an exclude list that disables correlation headers for specific domains. This option is useful when including those headers would cause the request to fail or not be sent due to third-party server configuration. This property supports wildcards.
-An example would be `*.queue.core.windows.net`, as seen in the code sample above.
-Adding the application domain to this property should be avoided as it stops the SDK from including the required distributed tracing `Request-Id`, `Request-Context` and `traceparent` headers as part of the request.
+The `correlationHeaderExcludedDomains` configuration property is an exclude list that disables correlation headers for specific domains. This option is useful when including those headers would cause the request to fail or not be sent because of third-party server configuration. This property supports wildcards.
+An example would be `*.queue.core.windows.net`, as seen in the preceding code sample.
+Adding the application domain to this property should be avoided because it stops the SDK from including the required distributed tracing `Request-Id`, `Request-Context`, and `traceparent` headers as part of the request.
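The following minimal sketch shows where this property sits in the SDK configuration. It assumes the npm-based setup of `@microsoft/applicationinsights-web`; the connection string and domain patterns are placeholders, so substitute your own values.

```javascript
import { ApplicationInsights } from "@microsoft/applicationinsights-web";

const appInsights = new ApplicationInsights({
  config: {
    // Placeholder connection string; use the value from your own resource.
    connectionString: "InstrumentationKey=00000000-0000-0000-0000-000000000000",
    // Correlation headers aren't attached to requests sent to domains that match these patterns.
    correlationHeaderExcludedDomains: ["myapp.azurewebsites.net", "*.queue.core.windows.net"]
  }
});

appInsights.loadAppInsights();
```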
### I'm not sure how to update my third-party server configuration
-The server-side needs to be able to accept connections with those headers present. Depending on the `Access-Control-Allow-Headers` configuration on the server-side it's often necessary to extend the server-side list by manually adding `Request-Id`, `Request-Context` and `traceparent` (W3C distributed header).
+The server side needs to be able to accept connections with those headers present. Depending on the `Access-Control-Allow-Headers` configuration on the server side, it's often necessary to extend the server-side list by manually adding `Request-Id`, `Request-Context`, and `traceparent` (W3C distributed header).
Access-Control-Allow-Headers: `Request-Id`, `traceparent`, `Request-Context`, `<your header>` ### I'm receiving duplicate telemetry data from the Application Insights JavaScript SDK
-If the SDK reports correlation recursively, enable the configuration setting of `excludeRequestFromAutoTrackingPatterns` to exclude the duplicate data. This scenario can occur when using connection strings. The syntax for the configuration setting is `excludeRequestFromAutoTrackingPatterns: [<endpointUrl>]`.
+If the SDK reports correlation recursively, enable the configuration setting of `excludeRequestFromAutoTrackingPatterns` to exclude the duplicate data. This scenario can occur when you use connection strings. The syntax for the configuration setting is `excludeRequestFromAutoTrackingPatterns: [<endpointUrl>]`.
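As a hedged sketch (the endpoint URL is purely illustrative), the setting might look like this in the SDK configuration:

```javascript
import { ApplicationInsights } from "@microsoft/applicationinsights-web";

const appInsights = new ApplicationInsights({
  config: {
    // Placeholder connection string; use the value from your own resource.
    connectionString: "InstrumentationKey=00000000-0000-0000-0000-000000000000",
    // Requests whose URL matches one of these patterns are excluded from automatic tracking.
    excludeRequestFromAutoTrackingPatterns: ["https://example-endpoint.contoso.com"]
  }
});

appInsights.loadAppInsights();
```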
## <a name="next"></a> Next steps+ * [Source map for JavaScript](source-map-support.md) * [Track usage](usage-overview.md) * [Custom events and metrics](api-custom-events-metrics.md)
azure-monitor Monitor Web App Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/monitor-web-app-availability.md
# Monitor availability with URL ping tests
-The name *URL ping test* is a bit of a misnomer. These tests don't use Internet Control Message Protocol (ICMP) to check your site's availability. Instead, they use more advanced HTTP request functionality to validate whether an endpoint is responding. They measure the performance associated with that response. They also add the ability to set custom success criteria, coupled with more advanced features like parsing dependent requests and allowing for retries.
+The name *URL ping test* is a bit of a misnomer. These tests don't use the Internet Control Message Protocol (ICMP) to check your site's availability. Instead, they use more advanced HTTP request functionality to validate whether an endpoint is responding. They measure the performance associated with that response. They also add the ability to set custom success criteria, coupled with more advanced features like parsing dependent requests and allowing for retries.
-To create an availability test, you need use an existing Application Insights resource or [create an Application Insights resource](create-new-resource.md).
+To create an availability test, you need to use an existing Application Insights resource or [create an Application Insights resource](create-new-resource.md).
> [!NOTE] > URL ping tests are categorized as classic tests. You can find them under **Add Classic Test** on the **Availability** pane. For more advanced features, see [Standard tests](availability-standard-tests.md).
-
+ ## Create a test To create your first availability request:
-1. In your Application Insights resource, open the **Availability** pane and select **Add Classic Test**.
+
+1. In your Application Insights resource, open the **Availability** pane and select **Add Classic Test**.
:::image type="content" source="./media/monitor-web-app-availability/create-test.png" alt-text="Screenshot that shows the Availability pane and the button for adding a classic test." lightbox ="./media/monitor-web-app-availability/create-test.png":::+ 1. Name your test and select **URL ping** for **SKU**. 1. Enter the URL that you want to test.
-1. Adjust the settings (described in the following table) to your needs and select **Create**.
+1. Adjust the settings to your needs by using the following table. Select **Create**.
- |Setting| Explanation |
+ |Setting| Description |
|-|-|
- |**URL** | The URL can be any webpage that you want to test, but it must be visible from the public internet. The URL can include a query string. For example, you can exercise your database a little. If the URL resolves to a redirect, you can follow it up to 10 redirects.|
- |**Parse dependent requests**| The test requests images, scripts, style files, and other files that are part of the webpage under test. The recorded response time includes the time taken to get these files. The test fails if any of these resources can't be successfully downloaded within the timeout for the whole test. If the option is not enabled, the test only requests the file at the URL that you specified. Enabling this option results in a stricter check. The test might fail for cases that aren't noticeable from manually browsing through the site.
- |**Enable retries**|When the test fails, it's retried after a short interval. A failure is reported only if three successive attempts fail. Subsequent tests are then performed at the usual test frequency. Retry is temporarily suspended until the next success. This rule is applied independently at each test location. *We recommend this option*. On average, about 80 percent of failures disappear on retry.|
- |**Test frequency**| This setting determines how often the test is run from each test location. With a default frequency of five minutes and five test locations, your site is tested every minute on average.|
- |**Test locations**| The values for this setting are the places from which servers send web requests to your URL. *We recommend a minimum of five test locations*, to ensure that you can distinguish problems in your website from network issues. You can select up to 16 locations.
+ |URL |The URL can be any webpage that you want to test, but it must be visible from the public internet. The URL can include a query string. For example, you can exercise your database a little. If the URL resolves to a redirect, you can follow it up to 10 redirects.|
+ |Parse dependent requests| The test requests images, scripts, style files, and other files that are part of the webpage under test. The recorded response time includes the time taken to get these files. The test fails if any of these resources can't be successfully downloaded within the timeout for the whole test. If the option isn't enabled, the test only requests the file at the URL that you specified. Enabling this option results in a stricter check. The test might fail for cases that aren't noticeable from manually browsing through the site.
+ |Enable retries|When the test fails, it's retried after a short interval. A failure is reported only if three successive attempts fail. Subsequent tests are then performed at the usual test frequency. Retry is temporarily suspended until the next success. This rule is applied independently at each test location. *We recommend this option*. On average, about 80 percent of failures disappear on retry.|
+ |Test frequency| This setting determines how often the test is run from each test location. With a default frequency of five minutes and five test locations, your site is tested every minute on average.|
+ |Test locations| The values for this setting are the places from which servers send web requests to your URL. *We recommend a minimum of five test locations* to ensure that you can distinguish problems in your website from network issues. You can select up to 16 locations.
If your URL isn't visible from the public internet, you can choose to selectively open your firewall to allow only the test transactions through. To learn more about the firewall exceptions for availability test agents, consult the [IP address guide](./ip-addresses.md#availability-tests).
If your URL isn't visible from the public internet, you can choose to selectivel
## Success criteria
-|Setting| Explanation |
+|Setting| Description |
|-|-|
-| **Test timeout** |Decrease this value to be alerted about slow responses. The test is counted as a failure if the responses from your site have not been received within this period. If you selected **Parse dependent requests**, then all the images, style files, scripts, and other dependent resources must have been received within this period.|
-| **HTTP response** | The returned status code that's counted as a success. The code that indicates that a normal webpage has been returned is 200.|
-| **Content match** | We test that an exact case-sensitive match for a string occurs in every response. It must be a plain string, without wildcards (like "Welcome!"). Don't forget that if your page content changes, you might have to update it. *Content match supports only English characters.* |
+| Test timeout |Decrease this value to be alerted about slow responses. The test is counted as a failure if the responses from your site haven't been received within this period. If you selected **Parse dependent requests**, all the images, style files, scripts, and other dependent resources must have been received within this period.|
+| HTTP response | The returned status code that's counted as a success. The code that indicates that a normal webpage has been returned is 200.|
+| Content match | We test that an exact case-sensitive match for a string occurs in every response. It must be a plain string, without wildcards (like "Welcome!"). Don't forget that if your page content changes, you might have to update it. *Content match supports only English characters.* |
## Alerts
-|Setting| Explanation |
+|Setting| Description |
|-|-|
-|**Near-realtime (Preview)** | We recommend using alerts that work in near real time. You configure this type of alert after you create your availability test. |
-|**Alert location threshold**| The optimal relationship between alert location threshold and the number of test locations is *alert location threshold = number of test locations - 2*, with a minimum of five test locations.|
+|Near real time (preview) | We recommend using alerts that work in near real time. You configure this type of alert after you create your availability test. |
+|Alert location threshold| The optimal relationship between alert location threshold and the number of test locations is *alert location threshold = number of test locations - 2*, with a minimum of five test locations. For example, with five test locations, set the alert location threshold to 3.|
## Location population tags
You might want to disable availability tests or the alert rules associated with
Select a red dot. From an availability test result, you can see the transaction details across all components. You can then:
In addition to the raw results, you can view two key availability metrics in [Me
* [Use PowerShell scripts to set up an availability test](./powershell.md#add-an-availability-test) automatically. * Set up a [webhook](../alerts/alerts-webhooks.md) that's called when an alert is raised. - ## Next steps * [Availability alerts](availability-alerts.md)
azure-monitor Autoscale Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/autoscale/autoscale-overview.md
# Overview of autoscale in Microsoft Azure+ This article describes Microsoft Azure autoscale and its benefits. Azure autoscale supports many resource types. For more information about supported resources, see [autoscale supported resources](#supported-services-for-autoscale).
Azure autoscale supports many resource types. For more information about support
> [!NOTE] > [Availability sets](/archive/blogs/kaevans/autoscaling-azurevirtual-machines) are an older scaling feature for virtual machines with limited support. We recommend migrating to [virtual machine scale sets](/azure/virtual-machine-scale-sets/overview) for faster and more reliable autoscale support. - ## What is autoscale
-Autoscale is a service that allows you to automatically add and remove resources according to the load on your application.
+
+Autoscale is a service that allows you to automatically add and remove resources according to the load on your application.
When your application experiences higher load, autoscale adds resources to handle the increased load. When load is low, autoscale reduces the number of resources, lowering your costs. You can scale your application based on metrics like CPU usage, queue length, and available memory, or based on a schedule. Metrics and schedules are set up in rules. The rules include a minimum level of resources that you need to run your application, and a maximum level of resources that won't be exceeded. For example, scale out your application by adding VMs when the average CPU usage per VM is above 70%. Scale it back in by removing VMs when CPU usage drops to 40%.
- ![Autoscale explained. Add and remove VMs](./media/autoscale-overview/AutoscaleConcept.png)
+
+When the conditions in the rules are met, one or more autoscale actions are triggered, adding or removing VMs. In addition, you can perform other actions, like sending email notifications or calling webhooks to trigger processes in other systems.
+
+## Scaling out and scaling up
+
+Autoscale scales in and out, which is an increase or decrease in the number of resource instances. Scaling in and out is also called horizontal scaling. For example, for a virtual machine scale set, scaling out means adding more virtual machines. Scaling in means removing virtual machines. Horizontal scaling is flexible in a cloud situation because it allows you to run a large number of VMs to handle load.
+
+In contrast, scaling up and down, or vertical scaling, keeps the number of resources constant, but gives those resources more capacity in terms of memory, CPU speed, disk space and network. Vertical scaling is limited by the availability of larger hardware, which eventually reaches an upper limit. Hardware size availability varies in Azure by region. Vertical scaling may also require a restart of the virtual machine during the scaling process.
+
-When the conditions in the rules are met, one or more autoscale actions are triggered, adding or removing VMs. In addition, you can perform other actions like sending email notifications, or webhooks to trigger processes in other systems..
### Predictive autoscale (preview)+ [Predictive autoscale](/azure/azure-monitor/autoscale/autoscale-predictive) uses machine learning to help manage and scale Azure virtual machine scale sets with cyclical workload patterns. It forecasts the overall CPU load on your virtual machine scale set, based on historical CPU usage patterns. The scale set can then be scaled out in time to meet the predicted demand.+ ## Autoscale setup+ You can set up autoscale via:
-* [Azure portal](autoscale-get-started.md)
-* [PowerShell](../powershell-samples.md#create-and-manage-autoscale-settings)
-* [Cross-platform Command Line Interface (CLI)](../cli-samples.md#autoscale)
-* [Azure Monitor REST API](/rest/api/monitor/autoscalesettings)
+++ [Azure portal](autoscale-get-started.md)++ [PowerShell](../powershell-samples.md#create-and-manage-autoscale-settings)++ [Cross-platform Command Line Interface (CLI)](../cli-samples.md#autoscale)++ [Azure Monitor REST API](/rest/api/monitor/autoscalesettings) ## Architecture+ The following diagram shows the autoscale architecture. ![Autoscale Flow Diagram](./media/autoscale-overview/Autoscale_Overview_v4.png) ### Resource metrics
-Resources generate metrics that are used in autoscale rules to trigger scale events. Virtual machine scale sets use telemetry data from Azure diagnostics agents to generate metrics. Telemetry for Web apps and Cloud services comes directly from the Azure Infrastructure.
+
+Resources generate metrics that are used in autoscale rules to trigger scale events. Virtual machine scale sets use telemetry data from Azure diagnostics agents to generate metrics. Telemetry for web apps and cloud services comes directly from the Azure infrastructure.
Some commonly used metrics include CPU usage, memory usage, thread counts, queue length, and disk usage. See [Autoscale Common Metrics](autoscale-common-metrics.md) for a list of available metrics. ### Custom metrics+ Use your own custom metrics that your application generates. Configure your application to send metrics to [Application Insights](/azure/azure-monitor/app/app-insights-overview) so you can use those metrics to decide when to scale. ### Time
-Set up schedule-based rules to trigger scale events. Use schedule-based rules when you see time patterns in your load, and want to scale before an anticipated change in load occurs.
-
+
+Set up schedule-based rules to trigger scale events. Use schedule-based rules when you see time patterns in your load, and want to scale before an anticipated change in load occurs.
### Rules+ Rules define the conditions needed to trigger a scale event, the direction of the scaling, and the amount to scale by. Rules can be:
-* Metric-based
-Trigger based on a metric value, for example when CPU usage is above 50%.
-* Time-based
-Trigger based on a schedule, for example, every Saturday at 8am.
++ Metric-based
+ Trigger based on a metric value, for example when CPU usage is above 50%.
++ Time-based
+ Trigger based on a schedule, for example, every Saturday at 8am.
You can combine multiple rules using different metrics, for example, CPU usage and queue length. A CLI sketch that combines a scale-out rule and a scale-in rule appears after the list of actions below.
-* The OR operator is used when scaling out with multiple rules.
-* The AND operator is used when scaling in with multiple rules.
+++ The OR operator is used when scaling out with multiple rules.++ The AND operator is used when scaling in with multiple rules. ### Actions and automation+ Rules can trigger one or more actions. Actions include: -- Scale - Scale resources in or out.-- Email - Send an email to the subscription admins, co-admins, and/or any other email address.-- Webhooks - Call webhooks to trigger multiple complex actions inside or outside Azure. In Azure, you can:
- + Start an [Azure Automation runbook](/azure/automation/overview).
- + Call an [Azure Function](/azure/azure-functions/functions-overview).
- + Trigger an [Azure Logic App](/azure/logic-apps/logic-apps-overview).
++ Scale - Scale resources in or out.++ Email - Send an email to the subscription admins, co-admins, and/or any other email address.++ Webhooks - Call webhooks to trigger multiple complex actions inside or outside Azure. In Azure, you can:
+ + Start an [Azure Automation runbook](/azure/automation/overview).
+ + Call an [Azure Function](/azure/azure-functions/functions-overview).
+ + Trigger an [Azure Logic App](/azure/logic-apps/logic-apps-overview).
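As an illustrative sketch of how metric rules and scale actions fit together, the following Azure CLI commands create an autoscale setting for an existing virtual machine scale set with one scale-out rule and one scale-in rule. The resource group and scale set names are placeholders; replace them with your own.

```azurecli-interactive
# Create an autoscale setting for an existing scale set (placeholder names).
az monitor autoscale create \
  --resource-group myResourceGroup \
  --resource myScaleSet \
  --resource-type Microsoft.Compute/virtualMachineScaleSets \
  --name myAutoscaleSetting \
  --min-count 2 --max-count 10 --count 2

# Scale out by one VM when average CPU usage is above 70%.
az monitor autoscale rule create \
  --resource-group myResourceGroup \
  --autoscale-name myAutoscaleSetting \
  --condition "Percentage CPU > 70 avg 5m" \
  --scale out 1

# Scale back in by one VM when average CPU usage drops below 40%.
az monitor autoscale rule create \
  --resource-group myResourceGroup \
  --autoscale-name myAutoscaleSetting \
  --condition "Percentage CPU < 40 avg 5m" \
  --scale in 1
```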
+ ## Autoscale settings Autoscale settings contain the autoscale configuration. The setting includes scale conditions that define rules, limits, and schedules, and it includes notifications. Define one or more scale conditions in the settings, and one notification setup.
Autoscale uses the following terminology and structure. The UI and JSON
| UI | JSON/CLI | Description | ||--|-| | Scale conditions | profiles | A collection of rules, instance limits and schedules, based on a metric or time. You can define one or more scale conditions or profiles. |
-| Rules | rules | A set of time or metric-based conditions that trigger a scale action. You can define one or more rules for both scale in and scale out actions. |
+| Rules | rules | A set of time or metric-based conditions that trigger a scale action. You can define one or more rules for both scale-in and scale-out actions. |
| Instance limits | capacity | Each scale condition or profile defines the default, max, and min number of instances that can run under that profile. | | Schedule | recurrence | Indicates when autoscale should put this scale condition or profile into effect. You can have multiple scale conditions, which allow you to handle different and overlapping requirements. For example, you can have different scale conditions for different times of day, or days of the week. | | Notify | notification | Defines the notifications to send when an autoscale event occurs. Autoscale can notify one or more email addresses or call one or more webhooks. You can configure multiple webhooks in the JSON but only one in the UI. |
The full list of configurable fields and descriptions is available in the [Autos
For code examples, see
-* [Advanced Autoscale configuration using Resource Manager templates for virtual machine scale sets](autoscale-virtual-machine-scale-sets.md)
-* [Autoscale REST API](/rest/api/monitor/autoscalesettings)
++ [Advanced Autoscale configuration using Resource Manager templates for virtual machine scale sets](autoscale-virtual-machine-scale-sets.md) ++ [Autoscale REST API](/rest/api/monitor/autoscalesettings) ## Horizontal vs vertical scaling
-Autoscale scales horizontally, which is an increase, or decrease of the number of resource instances. For example, in a virtual machine scale set, scaling out means adding more virtual machines Scaling in means removing virtual machines. Horizontal scaling is flexible in a cloud situation as it allows you to run a large number of VMs to handle load.
-
-In contrast, vertical scaling, keeps the same number of resources constant, but gives them more capacity in terms of memory, CPU speed, disk space and network. Adding or removing capacity in vertical scaling is known as scaling or down. Vertical scaling is limited by the availability of larger hardware, which eventually reaches an upper limit. Hardware size availability varies in Azure by region. Vertical scaling may also require a restart of the virtual machine during the scaling process.
+Autoscale scales horizontally, which is an increase or decrease in the number of resource instances. For example, in a virtual machine scale set, scaling out means adding more virtual machines. Scaling in means removing virtual machines. Horizontal scaling is flexible in a cloud situation because it allows you to run a large number of VMs to handle load.
+In contrast, vertical scaling keeps the number of resources constant but gives them more capacity in terms of memory, CPU speed, disk space, and network. Adding or removing capacity in vertical scaling is known as scaling up or down. Vertical scaling is limited by the availability of larger hardware, which eventually reaches an upper limit. Hardware size availability varies in Azure by region. Vertical scaling may also require a restart of the virtual machine during the scaling process.
## Supported services for autoscale+ The following services are supported by autoscale: | Service | Schema & Documentation | | | |
-| Web Apps |[Scaling Web Apps](autoscale-get-started.md) |
-| Cloud Services |[Autoscale a Cloud Service](../../cloud-services/cloud-services-how-to-scale-portal.md) |
-| Virtual Machines: Windows scale sets |[Scaling virtual machine scale sets in Windows](../../virtual-machine-scale-sets/tutorial-autoscale-powershell.md) |
-| Virtual Machines: Linux scale sets |[Scaling virtual machine scale sets in Linux](../../virtual-machine-scale-sets/tutorial-autoscale-cli.md) |
-| Virtual Machines: Windows Example |[Advanced Autoscale configuration using Resource Manager templates for virtual machine scale sets](autoscale-virtual-machine-scale-sets.md) |
-| Azure App Service |[Scale up an app in Azure App service](../../app-service/manage-scale-up.md)|
-| API Management service|[Automatically scale an Azure API Management instance](../../api-management/api-management-howto-autoscale.md)
+| Azure Virtual machines scale sets |[Overview of autoscale with Azure virtual machine scale sets](/azure/virtual-machine-scale-sets/virtual-machine-scale-sets-autoscale-overview) |
+| Web apps |[Scaling Web Apps](autoscale-get-started.md) |
+| Azure API Management service|[Automatically scale an Azure API Management instance](../../api-management/api-management-howto-autoscale.md)
| Azure Data Explorer Clusters|[Manage Azure Data Explorer clusters scaling to accommodate changing demand](/azure/data-explorer/manage-cluster-horizontal-scaling)|
-| Logic Apps |[Adding integration service environment (ISE) capacity](../../logic-apps/ise-manage-integration-service-environment.md#add-ise-capacity)|
-| Spring Cloud |[Set up autoscale for microservice applications](../../spring-apps/how-to-setup-autoscale.md)|
-| Service Bus |[Automatically update messaging units of an Azure Service Bus namespace](../../service-bus-messaging/automate-update-messaging-units.md)|
+| Azure Stream Analytics | [Autoscale streaming units (Preview)](../../stream-analytics/stream-analytics-autoscale.md) |
+| Azure Machine Learning Workspace | [Autoscale an online endpoint](../../machine-learning/how-to-autoscale-endpoints.md) |
| Azure SignalR Service | [Automatically scale units of an Azure SignalR service](../../azure-signalr/signalr-howto-scale-autoscale.md) |
+| Logic apps |[Adding integration service environment (ISE) capacity](../../logic-apps/ise-manage-integration-service-environment.md#add-ise-capacity)|
| Media Services | [Autoscaling in Media Services](/azure/media-services/latest/release-notes#autoscaling) |
-| Logic Apps - Integration Service Environment(ISE) | [Add ISE Environment](../../logic-apps/ise-manage-integration-service-environment.md#add-ise-capacity) |
-| Azure App Service Environment | [Autoscaling and App Service Environment v1](../../app-service/environment/app-service-environment-auto-scale.md) |
+| Service Bus |[Automatically update messaging units of an Azure Service Bus namespace](../../service-bus-messaging/automate-update-messaging-units.md)|
+| Spring Cloud |[Set up autoscale for microservice applications](../../spring-apps/how-to-setup-autoscale.md)|
| Service Fabric Managed Clusters | [Introduction to Autoscaling on Service Fabric managed clusters](../../service-fabric/how-to-managed-cluster-autoscale.md) |
-| Azure Stream Analytics | [Autoscale streaming units (Preview)](../../stream-analytics/stream-analytics-autoscale.md) |
-| Azure Machine Learning Workspace | [Autoscale an online endpoint](../../machine-learning/how-to-autoscale-endpoints.md) |
- ## Next steps+ To learn more about autoscale, see the following resources:
-* [Azure Monitor autoscale common metrics](autoscale-common-metrics.md)
-* [Scale virtual machine scale sets](/azure/virtual-machine-scale-sets/tutorial-autoscale-powershell?toc=/azure/azure-monitor/toc.json)
-* [Autoscale using Resource Manager templates for virtual machine scale sets](/azure/virtual-machine-scale-sets/tutorial-autoscale-powershell?toc=/azure/azure-monitor/toc.json)
-* [Best practices for Azure Monitor autoscale](autoscale-best-practices.md)
-* [Use autoscale actions to send email and webhook alert notifications](autoscale-webhook-email.md)
-* [Autoscale REST API](/rest/api/monitor/autoscalesettings)
-* [Troubleshooting virtual machine scale sets and autoscale](../../virtual-machine-scale-sets/virtual-machine-scale-sets-troubleshoot.md)
-* [Troubleshooting Azure Monitor autoscale](/azure/azure-monitor/autoscale/autoscale-troubleshoot)
++ [Azure Monitor autoscale common metrics](autoscale-common-metrics.md)++ [Scale virtual machine scale sets](/azure/virtual-machine-scale-sets/tutorial-autoscale-powershell?toc=/azure/azure-monitor/toc.json)++ [Autoscale using Resource Manager templates for virtual machine scale sets](/azure/virtual-machine-scale-sets/tutorial-autoscale-powershell?toc=/azure/azure-monitor/toc.json)++ [Best practices for Azure Monitor autoscale](autoscale-best-practices.md)++ [Use autoscale actions to send email and webhook alert notifications](autoscale-webhook-email.md)++ [Autoscale REST API](/rest/api/monitor/autoscalesettings)++ [Troubleshooting virtual machine scale sets and autoscale](../../virtual-machine-scale-sets/virtual-machine-scale-sets-troubleshoot.md)++ [Troubleshooting Azure Monitor autoscale](/azure/azure-monitor/autoscale/autoscale-troubleshoot)
azure-monitor Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/policy-reference.md
Title: Built-in policy definitions for Azure Monitor description: Lists Azure Policy built-in policy definitions for Azure Monitor. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/08/2022 Last updated : 08/16/2022
azure-netapp-files Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/whats-new.md
na Previously updated : 08/12/2022 Last updated : 08/16/2022 # What's new in Azure NetApp Files
Azure NetApp Files is updated regularly. This article provides a summary about t
## July 2022
+* [Azure Application Consistent Snapshot Tool (AzAcSnap) 6](azacsnap-release-notes.md)
+
+ [Azure Application Consistent Snapshot Tool](azacsnap-introduction.md) (AzAcSnap) is a command-line tool that enables customers to simplify data protection for third-party databases (SAP HANA) in Linux environments. With AzAcSnap 6, there is a new [release model](azacsnap-release-notes.md). AzAcSnap 6 also introduces the following new capabilities:
+
+ Now generally available:
+ * Oracle Database support
+ * Backint integration to work with Azure Backup
+ * [RunBefore and RunAfter](azacsnap-cmd-ref-runbefore-runafter.md) CLI options to execute custom shell scripts and commands before or after taking storage snapshots
+
+ In preview:
+ * Azure Key Vault to store Service Principal content
+ * Azure Managed Disk as an alternate storage back end
+ * [Azure NetApp Files datastores for Azure VMware Solution](../azure-vmware/attach-azure-netapp-files-to-azure-vmware-solution-hosts.md) is now in public preview. You can [Back up Azure NetApp Files datastores and VMs using Cloud Backup](../azure-vmware/backup-azure-netapp-files-datastores-vms.md). This virtual appliance installs in the Azure VMware Solution cluster and provides policy-based automated backup of VMs integrated with Azure NetApp Files snapshot technology for fast backups and restores of VMs, groups of VMs (organized in resource groups), or complete datastores. * [Active Directory connection enhancement: Reset Active Directory computer account password](create-active-directory-connections.md#reset-active-directory) (Preview)
azure-portal Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-portal/policy-reference.md
Title: Built-in policy definitions for Azure portal description: Lists Azure Policy built-in policy definitions for Azure portal. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/08/2022 Last updated : 08/16/2022
azure-resource-manager Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/custom-providers/policy-reference.md
Title: Built-in policy definitions for Azure Custom Resource Providers description: Lists Azure Policy built-in policy definitions for Azure Custom Resource Providers. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/08/2022 Last updated : 08/16/2022
azure-resource-manager Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/managed-applications/policy-reference.md
Title: Built-in policy definitions for Azure Managed Applications description: Lists Azure Policy built-in policy definitions for Azure Managed Applications. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/08/2022 Last updated : 08/16/2022
azure-resource-manager Publish Service Catalog App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/managed-applications/publish-service-catalog-app.md
Title: Publish service catalog managed app
-description: Shows how to create an Azure managed application that is intended for members of your organization.
+ Title: Publish Azure Managed Application in service catalog
+description: Describes how to publish an Azure Managed Application in your service catalog that's intended for members of your organization.
+ Previously updated : 07/08/2022- Last updated : 08/16/2022
-# Quickstart: Create and publish a managed application definition
+# Quickstart: Create and publish an Azure Managed Application definition
-This quickstart provides an introduction to working with [Azure Managed Applications](overview.md). You can create and publish a managed application that's intended for members of your organization.
+This quickstart provides an introduction to working with [Azure Managed Applications](overview.md). You create and publish a managed application that's stored in your service catalog and is intended for members of your organization.
-To publish a managed application to your service catalog, you must:
+To publish a managed application to your service catalog, do the following tasks:
- Create an Azure Resource Manager template (ARM template) that defines the resources to deploy with the managed application. - Define the user interface elements for the portal when deploying the managed application.-- Create a _.zip_ package that contains the required template files.
+- Create a _.zip_ package that contains the required template files. The _.zip_ package file has a 120-MB limit for a service catalog's managed application definition.
- Decide which user, group, or application needs access to the resource group in the user's subscription. - Create the managed application definition that points to the _.zip_ package and requests access for the identity.
+**Optional**: If you want to deploy your managed application definition with an ARM template in your own storage account, see [bring your own storage](#bring-your-own-storage-for-the-managed-application-definition).
+ > [!NOTE] > Bicep files can't be used in a managed application. You must convert a Bicep file to ARM template JSON with the Bicep [build](../bicep/bicep-cli.md#build) command.
+## Prerequisites
+
+To complete this quickstart, you need the following items:
+
+- If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/) before you begin.
+- [Visual Studio Code](https://code.visualstudio.com/) with the latest [Azure Resource Manager Tools extension](https://marketplace.visualstudio.com/items?itemName=msazurermtools.azurerm-vscode-tools).
+- Install the latest version of [Azure PowerShell](/powershell/azure/install-az-ps) or [Azure CLI](/cli/azure/install-azure-cli).
+ ## Create the ARM template
-Every managed application definition includes a file named _mainTemplate.json_. In it, you define the Azure resources to deploy. The template is no different than a regular ARM template.
+Every managed application definition includes a file named _mainTemplate.json_. The template defines the Azure resources to deploy and is no different than a regular ARM template.
-Create a file named _mainTemplate.json_. The name is case-sensitive.
+Open Visual Studio Code, create a file with the case-sensitive name _mainTemplate.json_ and save it.
Add the following JSON and save the file. It defines the parameters for creating a storage account, and specifies the properties for the storage account.
Add the following JSON and save the file. It defines the parameters for creating
"contentVersion": "1.0.0.0", "parameters": { "storageAccountNamePrefix": {
- "type": "string"
+ "type": "string",
+ "maxLength": 11,
+ "metadata": {
+ "description": "Storage prefix must be maximum of 11 characters with only lowercase letters or numbers."
+ }
}, "storageAccountType": { "type": "string"
Add the following JSON and save the file. It defines the parameters for creating
As a publisher, you define the portal experience for creating the managed application. The _createUiDefinition.json_ file generates the portal interface. You define how users provide input for each parameter using [control elements](create-uidefinition-elements.md) including drop-downs, text boxes, and password boxes.
-Create a file named _createUiDefinition.json_ (This name is case-sensitive)
+Open Visual Studio Code, create a file with the case-sensitive name _createUiDefinition.json_ and save it.
-Add the following starter JSON to the file and save it.
+Add the following JSON to the file and save it.
```json {
To learn more, see [Get started with CreateUiDefinition](create-uidefinition-ove
## Package the files
-Add the two files to a _.zip_ file named _app.zip_. The two files must be at the root level of the _.zip_ file. If you put them in a folder, you receive an error when creating the managed application definition that states the required files aren't present.
+Add the two files to a _.zip_ file named _app.zip_. The two files must be at the root level of the _.zip_ file. If you put the files in a folder, you receive an error that states the required files aren't present when you create the managed application definition.
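For example, assuming a shell where the `zip` utility is available (for example, Azure Cloud Shell), you can create the package from the folder that contains both files:

```azurecli-interactive
# Both files are added at the root of the archive.
zip app.zip mainTemplate.json createUiDefinition.json
```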
-Upload the package to an accessible location from where it can be consumed. You'll need to provide a unique name for the storage account.
+Upload the package to an accessible location from where it can be consumed. The storage account name must be globally unique across Azure and the length must be 3-24 characters with only lowercase letters and numbers. In the `Name` parameter, replace the placeholder `demostorageaccount` with your unique storage account name.
# [PowerShell](#tab/azure-powershell)
New-AzResourceGroup -Name storageGroup -Location eastus
$storageAccount = New-AzStorageAccount ` -ResourceGroupName storageGroup `
- -Name "mystorageaccount" `
+ -Name "demostorageaccount" `
-Location eastus ` -SkuName Standard_LRS ` -Kind StorageV2
Set-AzStorageBlobContent `
az group create --name storageGroup --location eastus az storage account create \
- --name mystorageaccount \
+ --name demostorageaccount \
--resource-group storageGroup \ --location eastus \ --sku Standard_LRS \ --kind StorageV2 az storage container create \
- --account-name mystorageaccount \
+ --account-name demostorageaccount \
--name appcontainer \ --public-access blob az storage blob upload \
- --account-name mystorageaccount \
+ --account-name demostorageaccount \
--container-name appcontainer \ --name "app.zip" \
- --file "D:\myapplications\app.zip"
+ --file "./app.zip"
```
+When you run the Azure CLI command to create the container, you might see a warning message about credentials, but the command will be successful. The reason is that although you own the storage account, you also need a role like _Storage Blob Data Contributor_ assigned at the storage account scope to access blob data. For more information, see [Assign an Azure role for access to blob data](../../storage/blobs/assign-azure-role-data-access.md). After you add a role, it takes a few minutes to become active in Azure. You can then append the command with `--auth-mode login` and resolve the warning message.
+ ## Create the managed application definition
+In this section you'll get identity information from Azure Active Directory, create a resource group, and create the managed application definition.
+ ### Create an Azure Active Directory user group or application The next step is to select a user group, user, or application for managing the resources for the customer. This identity has permissions on the managed resource group according to the role that is assigned. The role can be any Azure built-in role like Owner or Contributor. To create a new Active Directory user group, see [Create a group and add members in Azure Active Directory](../../active-directory/fundamentals/active-directory-groups-create-azure-portal.md).
groupid=$(az ad group show --group mygroup --query id --output tsv)
### Get the role definition ID
-Next, you need the role definition ID of the Azure built-in role you want to grant access to the user, user group, or application. Typically, you use the Owner or Contributor or Reader role. The following command shows how to get the role definition ID for the Owner role:
+Next, you need the role definition ID of the Azure built-in role you want to grant access to the user, user group, or application. Typically, you use the Owner, Contributor, or Reader role. The following command shows how to get the role definition ID for the Owner role:
# [PowerShell](#tab/azure-powershell)
roleid=$(az role definition list --name Owner --query [].name --output tsv)
### Create the managed application definition
-If you don't already have a resource group for storing your managed application definition, create one now:
+If you don't already have a resource group for storing your managed application definition, create a new resource group.
+
+**Optional**: If you want to deploy your managed application definition with an ARM template in your own storage account, see [bring your own storage](#bring-your-own-storage-for-the-managed-application-definition).
# [PowerShell](#tab/azure-powershell)
az group create --name appDefinitionGroup --location westcentralus
-Now, create the managed application definition resource.
+Create the managed application definition resource. In the `Name` parameter, replace the placeholder `demostorageaccount` with your unique storage account name.
# [PowerShell](#tab/azure-powershell)
New-AzManagedApplicationDefinition `
# [Azure CLI](#tab/azure-cli) ```azurecli-interactive
-blob=$(az storage blob url --account-name mystorageaccount --container-name appcontainer --name app.zip --output tsv)
+blob=$(az storage blob url \
+ --account-name demostorageaccount \
+ --container-name appcontainer \
+ --name app.zip --output tsv)
az managedapp definition create \ --name "ManagedStorage" \
When the command completes, you have a managed application definition in your re
Some of the parameters used in the preceding example are: - **resource group**: The name of the resource group where the managed application definition is created.-- **lock level**: The type of lock placed on the managed resource group. It prevents the customer from performing undesirable operations on this resource group. Currently, ReadOnly is the only supported lock level. When ReadOnly is specified, the customer can only read the resources present in the managed resource group. The publisher identities that are granted access to the managed resource group are exempt from the lock.
+- **lock level**: The type of lock placed on the managed resource group. It prevents the customer from performing undesirable operations on this resource group. Currently, `ReadOnly` is the only supported lock level. When `ReadOnly` is specified, the customer can only read the resources present in the managed resource group. The publisher identities that are granted access to the managed resource group are exempt from the lock.
- **authorizations**: Describes the principal ID and the role definition ID that are used to grant permission to the managed resource group. - **Azure PowerShell**: `"${groupid}:$roleid"` or you can use curly braces for each variable `"${groupid}:${roleid}"`. Use a comma to separate multiple values: `"${groupid1}:$roleid1", "${groupid2}:$roleid2"`. - **Azure CLI**: `"$groupid:$roleid"` or you can use curly braces as shown in PowerShell. Use a space to separate multiple values: `"$groupid1:$roleid1" "$groupid2:$roleid2"`. -- **package file URI**: The location of a _.zip_ package that contains the required files.
+- **package file URI**: The location of a _.zip_ package file that contains the required files.
## Bring your own storage for the managed application definition
-As an alternative, you can choose to store your managed application definition within a storage account provided by you during creation so that its location and access can be fully managed by you for your regulatory needs.
+This section is optional. You can store your managed application definition in your own storage account so that its location and access can be managed by you for your regulatory needs. The _.zip_ package file has a 120-MB limit for a service catalog's managed application definition.
> [!NOTE] > Bring your own storage is only supported with ARM template or REST API deployments of the managed application definition.
-### Select your storage account
+### Create your storage account
-You must [create a storage account](../../storage/common/storage-account-create.md) to contain your managed application definition for use with Service Catalog.
+You must create a storage account that will contain your managed application definition for use with a service catalog. The storage account name must be globally unique across Azure and the length must be 3-24 characters with only lowercase letters and numbers.
-Copy the storage account's resource ID. It will be used later when deploying the definition.
+This example creates a new resource group named `byosStorageRG`. In the `Name` parameter, replace the placeholder `definitionstorage` with your unique storage account name.
-### Set the role assignment for "Appliance Resource Provider" in your storage account
+# [PowerShell](#tab/azure-powershell)
+
+```azurepowershell-interactive
+New-AzResourceGroup -Name byosStorageRG -Location eastus
+
+New-AzStorageAccount `
+ -ResourceGroupName byosStorageRG `
+ -Name "definitionstorage" `
+ -Location eastus `
+ -SkuName Standard_LRS `
+ -Kind StorageV2
+```
+
+Use the following command to store the storage account's resource ID in a variable named `storageId`. You'll use this variable when you deploy the managed application definition.
+
+```azurepowershell-interactive
+$storageId = (Get-AzStorageAccount -ResourceGroupName byosStorageRG -Name definitionstorage).Id
+```
+
+# [Azure CLI](#tab/azure-cli)
+
+```azurecli-interactive
+az group create --name byosStorageRG --location eastus
+
+az storage account create \
+ --name definitionstorage \
+ --resource-group byosStorageRG \
+ --location eastus \
+ --sku Standard_LRS \
+ --kind StorageV2
+```
+
+Use the following command to store the storage account's resource ID in a variable named `storageId`. You'll use the variable's value when you deploy the managed application definition.
+
+```azurecli-interactive
+storageId=$(az storage account show --resource-group byosStorageRG --name definitionstorage --query id --output tsv)
+```
+++
+### Set the role assignment for your storage account
Before your managed application definition can be deployed to your storage account, assign the **Contributor** role to the **Appliance Resource Provider** user at the storage account scope. This assignment lets the identity write definition files to your storage account's container.
-For detailed steps, see [Assign Azure roles using the Azure portal](../../role-based-access-control/role-assignments-portal.md).
+# [PowerShell](#tab/azure-powershell)
+
+In PowerShell, you can use variables for the role assignment. This example uses the `$storageId` you created in a previous step and creates the `$arpId` variable.
+
+```azurepowershell-interactive
+$arpId = (Get-AzADServicePrincipal -SearchString "Appliance Resource Provider").Id
+
+New-AzRoleAssignment -ObjectId $arpId `
+-RoleDefinitionName Contributor `
+-Scope $storageId
+```
+
+# [Azure CLI](#tab/azure-cli)
+
+In Azure CLI, you need to use the string values to create the role assignment. This example gets the string value from the `storageId` variable you created in a previous step and gets the object ID value for the Appliance Resource Provider. The role assignment command uses the placeholders `arpGuid` and `storageId` for those values. Replace the placeholders with the string values and keep the quotes as shown.
+
+```azurecli-interactive
+echo $storageId
+az ad sp list --display-name "Appliance Resource Provider" --query [].id --output tsv
+
+az role assignment create --assignee "arpGuid" \
+--role "Contributor" \
+--scope "storageId"
+```
+
+If you're running CLI commands with Git Bash for Windows, you might get an `InvalidSchema` error because of the `scope` parameter's string. To fix the error, run `export MSYS_NO_PATHCONV=1` and then rerun your command to create the role assignment.
+++
+The **Appliance Resource Provider** is an Azure Enterprise application (service principal). Go to **Azure Active Directory** > **Enterprise applications** and change the search filter to **All Applications**. Search for _Appliance Resource Provider_. If it's not found, [register](../troubleshooting/error-register-resource-provider.md) the `Microsoft.Solutions` resource provider.
### Deploy the managed application definition with an ARM template
-Use the following ARM template to deploy your packaged managed application as a new managed application definition in Service Catalog whose definition files are stored and maintained in your own storage account:
+Use the following ARM template to deploy your packaged managed application as a new managed application definition in your service catalog. The definition files are stored and maintained in your storage account.
+
+Open Visual Studio Code, create a file with the name _azuredeploy.json_ and save it.
+
+Add the following JSON and save the file.
```json {
Use the following ARM template to deploy your packaged managed application as a
"applicationName": { "type": "string", "metadata": {
- "description": "Managed Application name"
- }
- },
- "storageAccountType": {
- "type": "string",
- "defaultValue": "Standard_LRS",
- "allowedValues": [
- "Standard_LRS",
- "Standard_GRS",
- "Standard_ZRS",
- "Premium_LRS"
- ],
- "metadata": {
- "description": "Storage Account type"
+ "description": "Managed Application name."
} }, "definitionStorageResourceID": { "type": "string", "metadata": {
- "description": "Storage account resource ID for where you're storing your definition"
+ "description": "Storage account's resource ID where you're storing your managed application definition."
} },
- "_artifactsLocation": {
+ "packageFileUri": {
"type": "string", "metadata": {
- "description": "The base URI where artifacts required by this template are located."
+ "description": "The URI where the .zip package file is located."
} } },
Use the following ARM template to deploy your packaged managed application as a
"description": "Sample Managed application definition", "displayName": "Sample Managed application definition", "managedApplicationDefinitionName": "[parameters('applicationName')]",
- "packageFileUri": "[parameters('_artifactsLocation')]",
- "defLocation": "[parameters('definitionStorageResourceID')]",
- "managedResourceGroupId": "[concat(subscription().id,'/resourceGroups/', concat(parameters('applicationName'),'_managed'))]",
- "applicationDefinitionResourceId": "[resourceId('Microsoft.Solutions/applicationDefinitions',variables('managedApplicationDefinitionName'))]"
+ "packageFileUri": "[parameters('packageFileUri')]",
+ "defLocation": "[parameters('definitionStorageResourceID')]"
}, "resources": [ { "type": "Microsoft.Solutions/applicationDefinitions",
- "apiVersion": "2020-08-21-preview",
+ "apiVersion": "2021-07-01",
"name": "[variables('managedApplicationDefinitionName')]", "location": "[parameters('location')]", "properties": {
Use the following ARM template to deploy your packaged managed application as a
} ```
-The `applicationDefinitions` properties include `storageAccountId` that contains the storage account ID for your storage account. You can verify that the application definition files are saved in your provided storage account in a container titled `applicationDefinitions`.
+For more information about the ARM template's properties, see [Microsoft.Solutions](/azure/templates/microsoft.solutions/applicationdefinitions).
+
+### Deploy the definition
+
+Create a resource group named _byosDefinitionRG_ and deploy the managed application definition to your storage account.
+
+# [PowerShell](#tab/azure-powershell)
+
+```azurepowershell-interactive
+New-AzResourceGroup -Name byosDefinitionRG -Location eastus
+
+$storageId
+
+New-AzResourceGroupDeployment `
+ -ResourceGroupName byosDefinitionRG `
+ -TemplateFile .\azuredeploy.json
+```
+
+# [Azure CLI](#tab/azure-cli)
+
+```azurecli-interactive
+az group create --name byosDefinitionRG --location eastus
+
+echo $storageId
+
+az deployment group create \
+ --resource-group byosDefinitionRG \
+ --template-file ./azuredeploy.json
+```
+++
+You'll be prompted for three parameters to deploy the definition. You can also pass the parameters inline, as shown in the sketch after the following table.
+
+| Parameter | Value |
+| - | - |
+| `applicationName` | Choose a name for your managed application definition. For this example, use _sampleManagedAppDefinition_.|
+| `definitionStorageResourceID` | Enter your storage account's resource ID. You created the `storageId` variable with this value in an earlier step. Don't wrap the resource ID with quotes. |
+| `packageFileUri` | Enter the URI to your _.zip_ package file. Use the URI for the _.zip_ [package file](#package-the-files) you created in an earlier step. The format is `https://yourStorageAccountName.blob.core.windows.net/appcontainer/app.zip`. |
+
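If you prefer to skip the prompts, you can pass the parameters inline with the Azure CLI. This sketch assumes the example values from the preceding table, the `storageId` variable from earlier, and the `demostorageaccount` placeholder from the packaging step; replace them with your own values.

```azurecli-interactive
az deployment group create \
  --resource-group byosDefinitionRG \
  --template-file ./azuredeploy.json \
  --parameters applicationName=sampleManagedAppDefinition \
    definitionStorageResourceID=$storageId \
    packageFileUri=https://demostorageaccount.blob.core.windows.net/appcontainer/app.zip
```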
+### Verify definition files storage
+
+During deployment, the template's `storageAccountId` property uses your storage account's resource ID and creates a new container with the case-sensitive name `applicationdefinitions`. The files from the _.zip_ package you specified during the deployment are stored in the new container.
+
+You can use the following commands to verify that the managed application definition files are saved in your storage account's container. In the `Name` parameter, replace the placeholder `definitionstorage` with your unique storage account name.
+
+# [PowerShell](#tab/azure-powershell)
+
+```azurepowershell-interactive
+Get-AzStorageAccount -ResourceGroupName byosStorageRG -Name definitionstorage |
+Get-AzStorageContainer -Name applicationdefinitions |
+Get-AzStorageBlob | Select-Object -Property *
+```
+
+# [Azure CLI](#tab/azure-cli)
+
+```azurecli-interactive
+az storage blob list \
+ --container-name applicationdefinitions \
+ --account-name definitionstorage \
+ --query "[].{container:container, name:name}"
+```
+
+When you run the Azure CLI command, you might see a warning message similar to the CLI command in [package the files](#package-the-files).
++ > [!NOTE]
-> For added security, you can create a managed applications definition and store it in an [Azure storage account blob where encryption is enabled](../../storage/common/storage-service-encryption.md). The definition contents are encrypted through the storage account's encryption options. Only users with permissions to the file can see the definition in Service Catalog.
+> For added security, you can create a managed applications definition and store it in an [Azure storage account blob where encryption is enabled](../../storage/common/storage-service-encryption.md). The definition contents are encrypted through the storage account's encryption options. Only users with permissions to the file can see the definition in your service catalog.
## Make sure users can see your definition
azure-resource-manager Networking Move Limitations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/move-limitations/networking-move-limitations.md
Title: Move Azure Networking resources to new subscription or resource group description: Use Azure Resource Manager to move virtual networks and other networking resources to a new resource group or subscription. Previously updated : 08/15/2022 Last updated : 08/16/2022 # Move networking resources to new resource group or subscription
If you want to move networking resources to a new region, see [Tutorial: Move Az
## Dependent resources > [!NOTE]
-> Please note that any resource, including VPN Gateways, associated with Public IP Standard SKU addresses are not currently able to move across subscriptions.
+> Any resource, including a VPN gateway, that's associated with a Standard SKU public IP address must be disassociated from the public IP address before you move it across subscriptions.
When moving a resource, you must also move its dependent resources (for example - public IP addresses, virtual network gateways, all associated connection resources). Local network gateways can be in a different resource group.
azure-resource-manager Move Resource Group And Subscription https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/move-resource-group-and-subscription.md
Title: Move resources to a new subscription or resource group description: Use Azure Resource Manager to move resources to a new resource group or subscription. Previously updated : 11/30/2021 Last updated : 08/15/2022
There are some important steps to do before moving a resource. By verifying thes
* [Transfer ownership of an Azure subscription to another account](../../cost-management-billing/manage/billing-subscription-transfer.md) * [How to associate or add an Azure subscription to Azure Active Directory](../../active-directory/fundamentals/active-directory-how-subscriptions-associated-directory.md)
+1. If you're attempting to move resources to or from a Cloud Solution Provider (CSP) partner, see [Transfer Azure subscriptions between subscribers and CSPs](../../cost-management-billing/manage/transfer-subscriptions-subscribers-csp.md).
+ 1. The destination subscription must be registered for the resource provider of the resource being moved. If not, you receive an error stating that the **subscription is not registered for a resource type**. You might see this error when you move a resource to a new subscription that has never been used with that resource type. For PowerShell, use the following commands to get the registration status:
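   As a minimal sketch (assuming `Microsoft.Batch` as an example namespace and a placeholder subscription ID; substitute your own values):

   ```azurepowershell-interactive
   # Switch to the destination subscription (placeholder ID).
   Set-AzContext -Subscription "00000000-0000-0000-0000-000000000000"

   # Check the registration state of the resource provider in that subscription.
   Get-AzResourceProvider -ProviderNamespace Microsoft.Batch |
       Select-Object -Property ProviderNamespace, RegistrationState

   # Register the provider if the state isn't "Registered".
   Register-AzResourceProvider -ProviderNamespace Microsoft.Batch
   ```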
azure-resource-manager Move Support Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/move-support-resources.md
Last updated 08/15/2022
This article lists whether an Azure resource type supports the move operation. It also provides information about special conditions to consider when moving a resource.
+Before starting your move operation, review the [checklist](./move-resource-group-and-subscription.md#checklist-before-moving-resources) to make sure you have satisfied prerequisites.
+ > [!IMPORTANT] > In most cases, a child resource can't be moved independently from its parent resource. Child resources have a resource type in the format of `<resource-provider-namespace>/<parent-resource>/<child-resource>`. For example, `Microsoft.ServiceBus/namespaces/queues` is a child resource of `Microsoft.ServiceBus/namespaces`. When you move the parent resource, the child resource is automatically moved with it. If you don't see a child resource in this article, you can assume it is moved with the parent resource. If the parent resource doesn't support move, the child resource can't be moved.
Jump to a resource provider namespace:
## Microsoft.SaaS
+> [!IMPORTANT]
+> Marketplace offerings that are implemented through the Microsoft.SaaS resource provider support resource group and subscription moves. These offerings are represented by the `resources` type below. For example, **SendGrid** is implemented through Microsoft.SaaS and supports move operations. However, the requirements in the [move requirements checklist](./move-resource-group-and-subscription.md#checklist-before-moving-resources) might limit the supported move scenarios. For instance, you can't move the resources from a Cloud Solution Provider (CSP) partner.
+ > [!div class="mx-tableFixed"] > | Resource type | Resource group | Subscription | Region move | > | - | -- | - | -- |
azure-resource-manager Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/policy-reference.md
Title: Built-in policy definitions for Azure Resource Manager description: Lists Azure Policy built-in policy definitions for Azure Resource Manager. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/08/2022 Last updated : 08/16/2022
azure-resource-manager Tag Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/tag-support.md
To get the same data as a file of comma-separated values, download [tag-support.
> | managedInstances | Yes | Yes | > | managedInstances / administrators | No | No | > | managedInstances / advancedThreatProtectionSettings | No | No |
-> | managedInstances / databases | Yes | Yes |
+> | managedInstances / databases | Yes | No |
> | managedInstances / databases / advancedThreatProtectionSettings | No | No | > | managedInstances / databases / backupLongTermRetentionPolicies | No | No | > | managedInstances / databases / vulnerabilityAssessments | No | No |
azure-resource-manager Template Tutorial Add Outputs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/template-tutorial-add-outputs.md
Title: Tutorial - add outputs to template description: Add outputs to your Azure Resource Manager template (ARM template) to simplify the syntax. Previously updated : 03/27/2020 Last updated : 08/17/2022
# Tutorial: Add outputs to your ARM template
-In this tutorial, you learn how to return a value from your Azure Resource Manager template (ARM template). You use outputs when you need a value from a deployed resource. This tutorial takes **7 minutes** to complete.
+In this tutorial, you learn how to return a value from your Azure Resource Manager template (ARM template). You use outputs when you need a value for a resource you deploy. This tutorial takes **7 minutes** to complete.
## Prerequisites We recommend that you complete the [tutorial about variables](template-tutorial-add-variables.md), but it's not required.
-You must have Visual Studio Code with the Resource Manager Tools extension, and either Azure PowerShell or Azure CLI. For more information, see [template tools](template-tutorial-create-first-template.md#get-tools).
+You need to have Visual Studio Code with the Resource Manager Tools extension, and either Azure PowerShell or Azure Command-Line Interface (CLI). For more information, see [template tools](template-tutorial-create-first-template.md#get-tools).
## Review template
At the end of the previous tutorial, your template had the following JSON:
:::code language="json" source="~/resourcemanager-templates/get-started-with-templates/add-variable/azuredeploy.json":::
-It deploys a storage account, but it doesn't return any information about the storage account. You might need to capture properties from a new resource so they're available later for reference.
+It deploys a storage account, but it doesn't return any information about it. You might need to capture properties from your new resource so they're available later for reference.
## Add outputs
-You can use outputs to return values from the template. For example, it might be helpful to get the endpoints for your new storage account.
+You can use outputs to return values from the template. It might be helpful, for example, to get the endpoints for your new storage account.
The following example highlights the change to your template to add an output value. Copy the whole file and replace your template with its contents.
There are some important items to note about the output value you added.
The type of returned value is set to `object`, which means it returns a JSON object.
-It uses the [reference](template-functions-resource.md#reference) function to get the runtime state of the storage account. To get the runtime state of a resource, you pass in the name or ID of a resource. In this case, you use the same variable you used to create the name of the storage account.
+It uses the [reference](template-functions-resource.md#reference) function to get the runtime state of the storage account. To get the runtime state of a resource, pass the name or ID of a resource. In this case, you use the same variable you used to create the name of the storage account.
Finally, it returns the `primaryEndpoints` property from the storage account.
New-AzResourceGroupDeployment `
# [Azure CLI](#tab/azure-cli)
-To run this deployment command, you must have the [latest version](/cli/azure/install-azure-cli) of Azure CLI.
+To run this deployment command, you need to have the [latest version](/cli/azure/install-azure-cli) of Azure CLI.
```azurecli az deployment group create \
az deployment group create \
-In the output for the deployment command, you'll see an object similar to the following example only if the output is in JSON format:
+In the output for the deployment command, you see an object similar to the following example only if the output is in JSON format:
```json {
In the output for the deployment command, you'll see an object similar to the fo
``` > [!NOTE]
-> If the deployment failed, use the `verbose` switch to get information about the resources being created. Use the `debug` switch to get more information for debugging.
+> If the deployment fails, use the `verbose` switch to get information about the resources being created. Use the `debug` switch to get more information for debugging.
## Review your work
-You've done a lot in the last six tutorials. Let's take a moment to review what you have done. You created a template with parameters that are easy to provide. The template is reusable in different environments because it allows for customization and dynamically creates needed values. It also returns information about the storage account that you could use in your script.
+You've done a lot in the last six tutorials. Let's take a moment to review what you've done. You created a template with parameters that are easy to provide. The template is reusable in different environments because it allows for customization and dynamically creates needed values. It also returns information about the storage account that you could use in your script.
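For instance, here's a minimal sketch of reading that output from the deployment history in a script. The resource group name, deployment name, and output name (`storageEndpoint`) are assumptions based on this tutorial series; adjust them to match your own deployment:

```azurepowershell-interactive
# Assumed resource group and deployment names; replace them with your own.
$deployment = Get-AzResourceGroupDeployment `
  -ResourceGroupName "myResourceGroup" `
  -Name "addoutputstemplate"

# Read the object returned by the template's "storageEndpoint" output.
$deployment.Outputs.storageEndpoint.Value
```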
Now, let's look at the resource group and deployment history.
Now, let's look at the resource group and deployment history.
If you're moving on to the next tutorial, you don't need to delete the resource group.
-If you're stopping now, you might want to clean up the resources you deployed by deleting the resource group.
+If you're stopping now, you might want to delete the resource group.
-1. From the Azure portal, select **Resource group** from the left menu.
-2. Enter the resource group name in the **Filter by name** field.
-3. Select the resource group name.
+1. From the Azure portal, select **Resource groups** from the left menu.
+2. Type the resource group name in the **Filter for any field...** text field.
+3. Select the check box next to **myResourceGroup** (or your resource group name), and then select the resource group name.
4. Select **Delete resource group** from the top menu. ## Next steps
-In this tutorial, you added a return value to the template. In the next tutorial, you'll learn how to export a template and use parts of that exported template in your template.
+In this tutorial, you added a return value to the template. In the next tutorial, you learn how to export a template and use parts of that exported template in your template.
> [!div class="nextstepaction"] > [Use exported template](template-tutorial-export-template.md)
azure-signalr Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/policy-reference.md
Title: Built-in policy definitions for Azure SignalR description: Lists Azure Policy built-in policy definitions for Azure SignalR. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/08/2022 Last updated : 08/16/2022
azure-video-indexer Create Account Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/create-account-portal.md
Search for **Microsoft.Media** and **Microsoft.EventGrid**. If not in the regist
1. In the Create an Azure Video Indexer resource section, enter required values (the descriptions follow after the image). > [!div class="mx-imgBorder"]
- > :::image type="content" source="./media/create-account-portal/avi-create-blade.png" alt-text="Screenshot showing how to create an Azure Video Indexer resource." lightbox="./media/create-account-portal/avi-create-blade.png":::
+ > :::image type="content" source="./media/create-account-portal/avi-create-blade.png" alt-text="Screenshot showing how to create an Azure Video Indexer resource.":::
Here are the definitions:
azure-vmware Enable Managed Snat For Workloads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/enable-managed-snat-for-workloads.md
With this capability, you:
## Reference architecture The architecture shows Internet access to and from your Azure VMware Solution private cloud using a Public IP directly to the NSX Edge. ## Configure Outbound Internet access using Managed SNAT in the Azure portal
azure-vmware Enable Public Ip Nsx Edge https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/enable-public-ip-nsx-edge.md
With this capability, you have the following features:
## Reference architecture The architecture shows Internet access to and from your Azure VMware Solution private cloud using a Public IP directly to the NSX Edge. ## Configure a Public IP in the Azure portal 1. Log on to the Azure portal.
backup Backup Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-support-matrix.md
Title: Azure Backup support matrix description: Provides a summary of support settings and limitations for the Azure Backup service. Previously updated : 06/08/2022 Last updated : 08/16/2022
Disk deduplication support is as follows:
- Disk deduplication is supported on-premises when you use DPM or MABS to back up Hyper-V VMs that are running Windows. Windows Server performs data deduplication (at the host level) on virtual hard disks (VHDs) that are attached to the VM as backup storage. - Deduplication isn't supported in Azure for any Backup component. When DPM and MABS are deployed in Azure, the storage disks attached to the VM can't be deduplicated.
+>[!Note]
+>Azure VM backup doesn't support Azure VMs that use deduplication. That is, Azure Backup doesn't deduplicate backup data, except in MABS/MARS.
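As an illustration of the host-level deduplication described above (configured on the on-premises Windows Server host, not in Azure Backup), here's a minimal sketch; the drive letter is a placeholder for the volume that holds the DPM/MABS storage VHDs:

```powershell
# Illustration only: run on the on-premises Windows Server host, not in Azure.
Install-WindowsFeature -Name FS-Data-Deduplication

# "E:" is a placeholder for the volume that stores the backup VHDs.
Enable-DedupVolume -Volume "E:" -UsageType Backup
```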
+ ## Security and encryption support Azure Backup supports encryption for in-transit and at-rest data.
backup Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/policy-reference.md
Title: Built-in policy definitions for Azure Backup description: Lists Azure Policy built-in policy definitions for Azure Backup. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/08/2022 Last updated : 08/16/2022
bastion Tutorial Create Host Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/tutorial-create-host-portal.md
description: Learn how to deploy Bastion using settings that you specify - Azure
Previously updated : 08/03/2022 Last updated : 08/15/2022
batch Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/policy-reference.md
Title: Built-in policy definitions for Azure Batch description: Lists Azure Policy built-in policy definitions for Azure Batch. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/08/2022 Last updated : 08/16/2022
cdn Cdn Sas Storage Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-sas-storage-support.md
To use Azure CDN security token authentication, you must have an **Azure CDN Pre
``` $1&sv=2017-07-29&ss=b&srt=c&sp=r&se=2027-12-19T17:35:58Z&st=2017-12-19T09:35:58Z&spr=https&sig=kquaXsAuCLXomN7R00b8CYM13UpDbAHcsRfGOW3Du1M%3D ```
- ![CDN URL Rewrite rule - left](./media/cdn-sas-storage-support/cdn-url-rewrite-rule.png)
- ![CDN URL Rewrite rule - right](./media/cdn-sas-storage-support/cdn-url-rewrite-rule-option-4.png)
+ :::image type="content" source="./media/cdn-sas-storage-support/cdn-url-rewrite-rule.png" alt-text="Screenshot of CDN URL Rewrite rule - left.":::
+ :::image type="content" source="./media/cdn-sas-storage-support/cdn-url-rewrite-rule-option-3.png" alt-text="Screenshot of CDN URL Rewrite rule - right.":::
3. If you renew the SAS, ensure that you update the Url Rewrite rule with the new SAS token.
center-sap-solutions Install Software https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/center-sap-solutions/install-software.md
You also can [upload the components manually](#upload-components-manually) inste
Before you can download the software, set up an Azure Storage account for the downloads.
-1. [Create an Ubuntu 20.04 VM in Azure](/cli/azure/install-azure-cli-linux?pivots=apt).
+1. [Create an Azure Storage account through the Azure portal](../storage/common/storage-account-create.md). Make sure to create the storage account in the same subscription as your SAP system infrastructure.
+
+1. Create a container within the Azure Storage account named `sapbits`.
+
+ 1. On the storage account's sidebar menu, select **Containers** under **Data storage**.
+
+ 1. Select **+ Container**.
+
+ 1. On the **New container** pane, for **Name**, enter `sapbits`.
+
+ 1. Select **Create**.
+
+1. Create an Ubuntu 20.04 VM in Azure.
1. Sign in to the VM.
Before you can download the software, set up an Azure Storage account for the do
1. [Update the Azure CLI](/cli/azure/update-azure-cli) to version 2.30.0 or higher.
-1. Install the following packages:
-
- - `pip3` version `pip-21.3.1.tar.gz`
- - `wheel` version 0.37.1
- - `jq` version 1.6
- - `ansible` version 2.9.27
- - `netaddr` version 0.8.0
- - `zip`
- - `netaddr` version 0.8.0
1. Sign in to Azure:
Before you can download the software, set up an Azure Storage account for the do
az login ```
-1. [Create an Azure Storage account through the Azure portal](../storage/common/storage-account-create.md). Make sure to create the storage account in the same subscription as your SAP system infrastructure.
-
-1. Create a container within the Azure Storage account named `sapbits`.
-
- 1. On the storage account's sidebar menu, select **Containers** under **Data storage**.
-
- 1. Select **+ Container**.
-
- 1. On the **New container** pane, for **Name**, enter `sapbits`.
-
- 1. Select **Create**.
- 1. Download the following shell script for the deployer VM packages. ```azurecli
After setting up your Azure Storage account, you can download the SAP installati
1. Sign in to the Ubuntu VM that you created in the [previous section](#set-up-storage-account).
+1. Install Ansible 2.9.27 on the Ubuntu VM:
+
+ ```bash
+ sudo pip3 install ansible==2.9.27
+ ```
+
1. Clone the SAP automation repository from GitHub. ```azurecli git clone https://github.com/Azure/sap-automation.git ```
-1. Generate a shared access signature (SAS) token for the `sapbits` container.
-
- 1. In the Azure portal, open the Azure Storage account.
-
- 1. Open the `sapbits` container.
-
- 1. On the container's sidebar menu, select **Shared access signature** under **Security + networking**.
-
- 1. On the SAS page, under **Allowed resource types**, select **Container**.
-
- 1. Configure other settings as necessary.
-
- 1. Select **Generate SAS and connection string**.
-
- 1. Copy the **SAS token** value. Make sure to copy the `?` prefix with the token.
- 1. Run the Ansible script **playbook_bom_download** with your own information. - For `<username>`, use your SAP username.
cloud-shell Cloud Shell Windows Users https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-shell/cloud-shell-windows-users.md
Title: Azure Cloud Shell for Windows users | Microsoft Docs
description: Guide for users who are not familiar with Linux systems documentationcenter: ''-+ tags: azure-resource-manager
-
+
+ms.assetid:
vm-linux Previously updated : 08/03/2018- Last updated : 08/16/2022+ # PowerShell in Azure Cloud Shell for Windows users
PowerShell specific experiences, such as `tab-completing` cmdlet names, paramete
Some existing PowerShell aliases have the same names as built-in Linux commands, such as `cat`, `ls`, `sort`, `sleep`, etc. In PowerShell Core 6, aliases that collide with built-in Linux commands have been removed.
-Below are the common aliases that have been removed as well as their equivalent commands:
+Below are the common aliases that have been removed as well as their equivalent commands:
|Removed Alias |Equivalent Command | |||
mkdir (Split-Path $profile.CurrentUserAllHosts)
Under `$HOME/.config/PowerShell`, you can create your profile files - `profile.ps1` and/or `Microsoft.PowerShell_profile.ps1`.
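For example, a minimal sketch that creates and opens the all-hosts profile (the path comes from the built-in `$profile` variable, so nothing here is specific to this article):

```azurepowershell-interactive
# Create the CurrentUserAllHosts profile file if it doesn't already exist.
if (-not (Test-Path $profile.CurrentUserAllHosts)) {
    New-Item -ItemType File -Path $profile.CurrentUserAllHosts | Out-Null
}

# Open the profile in the Cloud Shell editor to add your customizations.
code $profile.CurrentUserAllHosts
```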
-## What's new in PowerShell Core 6
+## What's new in PowerShell
-For more information about what is new in PowerShell Core 6, reference the [PowerShell docs](/powershell/scripting/whats-new/what-s-new-in-powershell-70) and the [Getting Started with PowerShell Core](https://blogs.msdn.microsoft.com/powershell/2017/06/09/getting-started-with-powershell-core-on-windows-mac-and-linux/) blog post.
+For more information about what is new in PowerShell, reference the
+[PowerShell What's New](/powershell/scripting/whats-new/overview) and
+[Discover PowerShell](/powershell/scripting/discover-powershell).
cognitive-services How To Speech Synthesis Viseme https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-speech-synthesis-viseme.md
Title: How to get facial pose events for lip-sync
+ Title: Get facial position with viseme
description: Speech SDK supports viseme events during speech synthesis, which represent key poses in observed speech, such as the position of the lips, jaw, and tongue when producing a particular phoneme.
The overall workflow of viseme is depicted in the following flowchart:
![Diagram of the overall workflow of viseme.](media/text-to-speech/viseme-structure.png)
-You can request viseme output in SSML. For details, see [how to use viseme element in SSML](speech-synthesis-markup.md#viseme-element).
- ## Viseme ID Viseme ID refers to an integer number that specifies a viseme. We offer 22 different visemes, each depicting the mouth shape for a specific set of phonemes. There's no one-to-one correspondence between visemes and phonemes. Often, several phonemes correspond to a single viseme, because they look the same on the speaker's face when they're produced, such as `s` and `z`. For more specific information, see the table for [mapping phonemes to viseme IDs](#map-phonemes-to-visemes).
The blend shapes JSON string is represented as a 2-dimensional matrix. Each row
To get viseme with your synthesized speech, subscribe to the `VisemeReceived` event in the Speech SDK.
+> [!NOTE]
+> To request SVG or blend shapes output, you should use the `mstts:viseme` element in SSML. For details, see [how to use viseme element in SSML](speech-synthesis-markup.md#viseme-element).
+ The following snippet shows how to subscribe to the viseme event: ::: zone pivot="programming-language-csharp"
cognitive-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/language-support.md
The following neural voices are in public preview.
| Chinese (Mandarin, Simplified) | `zh-CN-sichuan` | Male | `zh-CN-sichuan-YunxiSichuanNeural` <sup>New</sup> | General, Sichuan accent | | English (United States) | `en-US` | Female | `en-US-JaneNeural` <sup>New</sup> | General, multiple voice styles available [using SSML](speech-synthesis-markup.md#adjust-speaking-styles) | | English (United States) | `en-US` | Female | `en-US-NancyNeural` <sup>New</sup> | General, multiple voice styles available [using SSML](speech-synthesis-markup.md#adjust-speaking-styles) |
+| English (United States) | `en-US` | Male | `en-US-AIGenerate1Neural` <sup>New</sup> | General|
| English (United States) | `en-US` | Male | `en-US-DavisNeural` <sup>New</sup> | General, multiple voice styles available [using SSML](speech-synthesis-markup.md#adjust-speaking-styles) | | English (United States) | `en-US` | Male | `en-US-JasonNeural` <sup>New</sup> | General, multiple voice styles available [using SSML](speech-synthesis-markup.md#adjust-speaking-styles) | | English (United States) | `en-US` | Male | `en-US-RogerNeural` <sup>New</sup> | General|
cognitive-services Sovereign Clouds https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/sovereign-clouds.md
Previously updated : 02/24/2022 Last updated : 08/15/2022
The following table lists the base URLs for Azure sovereign cloud endpoints:
| Azure portal for US Government | `https://portal.azure.us` | | Azure portal China operated by 21 Vianet | `https://portal.azure.cn` |
+<!-- markdownlint-disable MD033 -->
+ ## Translator: sovereign clouds ### [Azure US Government](#tab/us)
The following table lists the base URLs for Azure sovereign cloud endpoints:
|Azure portal | <ul><li>[Azure Government Portal](https://portal.azure.us/)</li></ul>| | Available regions</br></br>The region-identifier is a required header when using Translator for the government cloud. | <ul><li>`usgovarizona` </li><li> `usgovvirginia`</li></ul>| |Available pricing tiers|<ul><li>Free (F0) and Standard (S0). See [Translator pricing](https://azure.microsoft.com/pricing/details/cognitive-services/translator/)</li></ul>|
-|Supported Features | <ul><li>Text Translation</li><li>Document Translation</li><li>Custom Translation</li></ul>|
+|Supported Features | <ul><li>[Text Translation](https://docs.azure.cn/cognitive-services/translator/reference/v3-0-reference)</li><li>[Document Translation](document-translation/overview.md)</li><li>[Custom Translator](custom-translator/overview.md)</li></ul>|
|Supported Languages| <ul><li>[Translator language support](language-support.md)</li></ul>| <!-- markdownlint-disable MD036 -->
https://api.cognitive.microsofttranslator.us/
#### Document Translation custom endpoint
-Replace the `<your-custom-domain>` parameter with your [custom domain endpoint](document-translation/get-started-with-document-translation.md#what-is-the-custom-domain-endpoint).
- ```http
-https://<your-custom-domain>.cognitiveservices.azure.us/
+https://<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.us/translator/text/batch/v1.0
``` #### Custom Translator portal
The Azure China cloud is a physical and logical network-isolated instance of clo
||| |Azure portal |<ul><li>[Azure China 21 Vianet Portal](https://portal.azure.cn/)</li></ul>| |Regions <br></br>The region-identifier is a required header when using a multi-service resource. | <ul><li>`chinanorth` </li><li> `chinaeast2`</li></ul>|
-|Supported Feature|<ul><li>[Text Translation](https://docs.azure.cn/cognitive-services/translator/reference/v3-0-reference)</li></ul>|
+|Supported Feature|<ul><li>[Text Translation](https://docs.azure.cn/cognitive-services/translator/reference/v3-0-reference)</li><li>[Document Translation](document-translation/overview.md)</li></ul>|
|Supported Languages|<ul><li>[Translator language support.](https://docs.azure.cn/cognitive-services/translator/language-support)</li></ul>| <!-- markdownlint-disable MD036 -->
https://<region-identifier>.api.cognitive.azure.cn/sts/v1.0/issueToken
https://api.translator.azure.cn/translate ```
-### Example API translation request
+### Example text translation request
Translate a single sentence from English to Simplified Chinese.
curl -X POST "https://api.translator.azure.cn/translate?api-version=3.0&from=en&
] ```
-> [!div class="nextstepaction"]
-> [Azure China: Translator Text reference](https://docs.azure.cn/cognitive-services/translator/reference/v3-0-reference)
+#### Document Translation custom endpoint
+
+
+```http
+https://<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.cn/translator/text/batch/v1.0
+```
+
+### Example batch translation request
+
+```json
+{
+ "inputs": [
+ {
+ "source": {
+ "sourceUrl": "https://my.blob.core.windows.net/source-en?sv=2019-12-12&st=2021-03-05T17%3A45%3A25Z&se=2021-03-13T17%3A45%3A00Z&sr=c&sp=rl&sig=SDRPMjE4nfrH3csmKLILkT%2Fv3e0Q6SWpssuuQl1NmfM%3D"
+ },
+ "targets": [
+ {
+ "targetUrl": "https://my.blob.core.windows.net/target-zh-Hans?sv=2019-12-12&st=2021-03-05T17%3A49%3A02Z&se=2021-03-13T17%3A49%3A00Z&sr=c&sp=wdl&sig=Sq%2BYdNbhgbq4hLT0o1UUOsTnQJFU590sWYo4BOhhQhs%3D",
+ "language": "zh-Hans"
+ }
+ ]
+ }
+ ]
+}
+```
-## Next step
+## Next steps
> [!div class="nextstepaction"] > [Learn more about Translator](index.yml)
cognitive-services Cognitive Services Limited Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/cognitive-services-limited-access.md
# Limited Access features for Cognitive Services
-Our vision is to empower developers and organizations to leverage AI to transform society in positive ways. We encourage responsible AI practices to protect the rights and safety of individuals. To achieve this, Microsoft has implemented a Limited Access policy grounded in our [AI Principles](https://www.microsoft.com/ai/responsible-ai) to support responsible deployment of Azure services.
+Our vision is to empower developers and organizations to use AI to transform society in positive ways. We encourage responsible AI practices to protect the rights and safety of individuals. To achieve this, Microsoft has implemented a Limited Access policy grounded in our [AI Principles](https://www.microsoft.com/ai/responsible-ai) to support responsible deployment of Azure services.
## What is Limited Access?
-Limited Access services require registration, and only customers managed by Microsoft, meaning those who are working directly with Microsoft account teams, are eligible for access. The use of these services is limited to the use case selected at the time of registration. Customers must acknowledge that they have reviewed and agree to the terms of service. Microsoft may require customers to re-verify this information.
+Limited Access services require registration, and only customers managed by Microsoft, meaning those who are working directly with Microsoft account teams, are eligible for access. The use of these services is limited to the use case selected at the time of registration. Customers must acknowledge that they've reviewed and agree to the terms of service. Microsoft may require customers to reverify this information.
-Limited Access services are made available to customers under the terms governing their subscription to Microsoft Azure Services (including the [Service Specific Terms](https://go.microsoft.com/fwlink/?linkid=2018760)). Please review these terms carefully as they contain important conditions and obligations governing your use of Limited Access services.
+Limited Access services are made available to customers under the terms governing their subscription to Microsoft Azure Services (including the [Service Specific Terms](https://go.microsoft.com/fwlink/?linkid=2018760)). Review these terms carefully as they contain important conditions and obligations governing your use of Limited Access services.
## List of Limited Access services
The following services are Limited Access:
- [Computer Vision](/legal/cognitive-services/computer-vision/limited-access?context=/azure/cognitive-services/computer-vision/context/context): Celebrity Recognition feature - [Azure Video Indexer](../azure-video-indexer/limited-access-features.md): Celebrity Recognition and Face Identify features
-Features of these services that are not listed above are available without registration.
+Features of these services that aren't listed above are available without registration.
## FAQ about Limited Access
-### How do I apply for access?
+### How do I register for access?
-Please submit an intake form for each Limited Access service you would like to use:
+Submit a registration form for each Limited Access service you would like to use:
- [Custom Neural Voice](https://aka.ms/customneural): Pro features - [Speaker Recognition](https://aka.ms/azure-speaker-recognition): All features
Please submit an intake form for each Limited Access service you would like to u
- [Computer Vision](https://aka.ms/facerecognition): Celebrity Recognition feature - [Azure Video Indexer](https://aka.ms/facerecognition): Celebrity Recognition and Face Identify features
-### How long will the application process take?
+### How long will the registration process take?
-Review may take 5-10 business days. You will receive an email as soon as your application is reviewed.
+Review may take 5-10 business days. You'll receive an email as soon as your registration form is reviewed.
### Who is eligible to use Limited Access services?
-Limited Access services are available only to customers managed by Microsoft. Additionally, Limited Access services are only available for certain use cases, and customers must select their intended use case in their application.
+Limited Access services are available only to customers managed by Microsoft. Additionally, Limited Access services are only available for certain use cases, and customers must select their intended use case in their registration.
-Please use an email address affiliated with your organization in your application. Applications submitted with personal email addresses will be denied.
+Use an email address affiliated with your organization in your registration. Registrations submitted with personal email addresses will be denied.
-If you are not a managed customer, we invite you to submit an application using the same forms and we will reach out to you about any opportunities to join an eligibility program.
+If you aren't a managed customer, we invite you to submit a registration using the same forms and we'll reach out to you about any opportunities to join an eligibility program.
-### What if I don't know whether I'm a managed customer? What if I don't know my Microsoft contact or don't know if my organization has one?
+### What is a managed customer? What if I don't know whether I'm a managed customer?
-We invite you to submit an intake form for the features you'd like to use, and we'll verify your eligibility for access.
+Managed customers work with Microsoft account teams. We invite you to submit a registration form for the features you'd like to use, and we'll verify your eligibility for access. We are not able to accept requests to become a managed customer at this time.
-### What happens if I'm an existing customer and I don't apply?
+### What happens if I'm an existing customer and I don't register?
-Existing customers have until June 30, 2023 to submit an intake form and be approved to continue using Limited Access services after June 30, 2023. We recommend allowing 10 business days for review. Without an approved application, you will be denied access after June 30, 2023.
+Existing customers have until June 30, 2023 to submit a registration form and be approved to continue using Limited Access services after June 30, 2023. We recommend allowing 10 business days for review. Without approved registration, you'll be denied access after June 30, 2023.
-The intake forms can be found here:
+The registration forms can be found here:
- [Custom Neural Voice](https://aka.ms/customneural): Pro features - [Speaker Recognition](https://aka.ms/azure-speaker-recognition): All features
The intake forms can be found here:
- [Computer Vision](https://aka.ms/facerecognition): Celebrity Recognition feature - [Azure Video Indexer](https://aka.ms/facerecognition): Celebrity Recognition and Face Identify features
-### I'm an existing customer who applied for access to Custom Neural Voice or Speaker Recognition, do I have to apply to keep using these services?
+### I'm an existing customer who applied for access to Custom Neural Voice or Speaker Recognition, do I have to register to keep using these services?
-We're always looking for opportunities to improve our Responsible AI program, and Limited Access is an update to our service gating processes. If you have previously applied for and been granted access to Custom Neural Voice or Speaker Recognition, we request that you submit a new intake form to continue using these services beyond June 30, 2023.
+We're always looking for opportunities to improve our Responsible AI program, and Limited Access is an update to our service gating processes. If you've previously applied for and been granted access to Custom Neural Voice or Speaker Recognition, we request that you submit a new registration form to continue using these services beyond June 30, 2023.
-If you're an existing customer using Custom Neural Voice or Speaker Recognition on June 21, 2022, you have until June 30, 2023 to submit an intake form with your selected use case and receive approval to continue using these services after June 30, 2023. We recommend allowing 10 days for application processing. Existing customers can continue using the service until June 30, 2023, after which they must be approved for access. The intake forms can be found here:
+If you were an existing customer using Custom Neural Voice or Speaker Recognition on June 21, 2022, you have until June 30, 2023 to submit a registration form with your selected use case and receive approval to continue using these services after June 30, 2023. We recommend allowing 10 days for registration processing. Existing customers can continue using the service until June 30, 2023, after which they must be approved for access. The registration forms can be found here:
- [Custom Neural Voice](https://aka.ms/customneural): Pro features - [Speaker Recognition](https://aka.ms/azure-speaker-recognition): All features
-### What if my use case is not on the intake form?
+### What if my use case isn't on the registration form?
-Limited Access features are only available for the use cases listed on the intake forms. If your desired use case is not listed, please let us know in this [feedback form](https://aka.ms/CogSvcsLimitedAccessFeedback) so we can improve our service offerings.
+Limited Access features are only available for the use cases listed on the registration forms. If your desired use case isn't listed, let us know in this [feedback form](https://aka.ms/CogSvcsLimitedAccessFeedback) so we can improve our service offerings.
### Where can I use Limited Access services?
Search [here](https://azure.microsoft.com/global-infrastructure/services/) for a
Detailed information about supported regions for Custom Neural Voice and Speaker Recognition operations can be found [here](./speech-service/regions.md).
-### What happens to my data if my application is denied?
+### What happens to my data if my registration is denied?
-If you are an existing customer and your application for access is denied, you will no longer be able to use Limited Access features after June 30, 2023. Your data is subject to MicrosoftΓÇÖs data retention [policies](https://www.microsoft.com/trust-center/privacy/data-management#:~:text=If%20you%20terminate%20a%20cloud,data%20or%20renew%20your%20subscription.).
+If you're an existing customer and your registration for access is denied, you'll no longer be able to use Limited Access features after June 30, 2023. Your data is subject to Microsoft's data retention [policies](https://www.microsoft.com/trust-center/privacy/data-management#:~:text=If%20you%20terminate%20a%20cloud,data%20or%20renew%20your%20subscription.).
## Help and support
cognitive-services Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/entity-linking/quickstart.md
Previously updated : 07/11/2022 Last updated : 08/15/2022 ms.devlang: csharp, java, javascript, python
cognitive-services Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/language-detection/quickstart.md
Previously updated : 07/11/2022 Last updated : 08/15/2022 ms.devlang: csharp, java, javascript, python
cognitive-services Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/named-entity-recognition/quickstart.md
Previously updated : 07/11/2022 Last updated : 08/15/2022 ms.devlang: csharp, java, javascript, python
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/summarization/overview.md
# What is document and conversation summarization (preview)? + Summarization is one of the features offered by [Azure Cognitive Service for Language](../overview.md), a collection of machine learning and AI algorithms in the cloud for developing intelligent applications that involve written language. Use this article to learn more about this feature, and how to use it in your applications. # [Document summarization](#tab/document-summarization)
cognitive-services Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/summarization/quickstart.md
Previously updated : 07/11/2022 Last updated : 08/15/2022 ms.devlang: csharp, java, javascript, python
If you want to clean up and remove a Cognitive Services subscription, you can de
> [!div class="nextstepaction"] > <a href="https://microsoft.qualtrics.com/jfe/form/SV_0Cl5zkG3CnDjq6O?PLanguage=REST API&Pillar=Language&Product=Summarization&Page=quickstart&Section=Clean-up-resources" target="_target">I ran into an issue</a> - ## Next steps * [How to call document summarization](./how-to/document-summarization.md)
-* [How to call conversation summarization](./how-to/conversation-summarization.md)
+* [How to call conversation summarization](./how-to/conversation-summarization.md)
cognitive-services Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/policy-reference.md
Title: Built-in policy definitions for Azure Cognitive Services description: Lists Azure Policy built-in policy definitions for Azure Cognitive Services. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/08/2022 Last updated : 08/16/2022
communication-services Capabilities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/interop/guest/capabilities.md
In this article, you will learn which capabilities are supported for Teams external users using Azure Communication Services SDKs. ## Client capabilities
-The following table shows supported client-side capabilities available in Azure Communication Services SDKs:
-
-| Capability | Supported |
-| | |
-| Send and receive chat messages | ✔️ |
-| Use typing indicators | ✔️ |
-| Read receipt | ❌ |
-| File sharing | ❌ |
-| Reply to chat message | ❌ |
-| React to chat message | ❌ |
-| Audio and video calling | ✔️ |
-| Share screen and see shared screen | ✔️ |
-| Manage Teams convenient recording | ❌ |
-| Manage Teams transcription | ❌ |
-| Receive closed captions | ❌ |
-| Add and remove meeting participants | ❌ |
-| Raise and lower hand | ❌ |
-| See raised and lowered hand | ❌ |
-| See and set reactions | ❌ |
-| Control Teams third-party applications | ❌ |
-| Interact with a poll or Q&A | ❌ |
-| Set and unset spotlight | ❌ |
-| See PowerPoint Live | ❌ |
-| See Whiteboard | ❌ |
-| Participation in breakout rooms | ❌ |
-| Apply background effects | ❌ |
-| See together mode video stream | ❌ |
-
-When Teams external users leave the meeting, or the meeting ends, they can no longer send or receive new chat messages and no longer have access to messages sent and received during the meeting.
+The following table shows supported client-side capabilities available in Azure Communication Services SDKs. You can find per-platform availability in [voice and video calling capabilities](../../voice-video-calling/calling-sdk-features.md).
+
+| Category | Capability | Supported |
+| | | |
+|Chat | Send and receive chat messages | ✔️ |
+| | Send and receive Giphy | ❌ |
+| | Send messages with high priority | ❌ |
+| | Receive messages with high priority | ✔️ |
+| | Send and receive Loop components | ❌ |
+| | Send and receive Emojis | ❌ |
+| | Send and receive Stickers | ❌ |
+| | Send and receive Teams messaging extensions | ❌ |
+| | Use typing indicators | ✔️ |
+| | Read receipt | ❌ |
+| | File sharing | ❌ |
+| | Reply to chat message | ❌ |
+| | React to chat message | ❌ |
+|Calling - core | Audio send and receive | ✔️ |
+| | Send and receive video | ✔️ |
+| | Share screen and see shared screen | ✔️ |
+| | Manage Teams convenient recording | ❌ |
+| | Manage Teams transcription | ❌ |
+| | Manage breakout rooms | ❌ |
+| | Participation in breakout rooms | ❌ |
+| | Leave meeting | ✔️ |
+| | End meeting | ❌ |
+| | Change meeting options | ❌ |
+| | Lock meeting | ❌ |
+| Calling - participants| See roster | ✔️ |
+| | Add and remove meeting participants | ❌ |
+| | Dial out to phone number | ❌ |
+| | Disable mic or camera of others | ❌ |
+| | Make a participant an attendee or presenter | ❌ |
+| | Admit or reject participants in the lobby | ❌ |
+| Calling - engagement | Raise and lower hand | ❌ |
+| | See raised and lowered hand | ❌ |
+| | See and set reactions | ❌ |
+| Calling - video streams | Send and receive video | ✔️ |
+| | See together mode video stream | ❌ |
+| | See Large gallery view | ❌ |
+| | See Video stream from Teams media bot | ❌ |
+| | See adjusted content from Camera | ❌ |
+| | Set and unset spotlight | ❌ |
+| | Apply background effects | ❌ |
+| Calling - integrations | Control Teams third-party applications | ❌ |
+| | See PowerPoint Live stream | ❌ |
+| | See Whiteboard stream | ❌ |
+| | Interact with a poll | ❌ |
+| | Interact with a Q&A | ❌ |
+| | Interact with a OneNote | ❌ |
+| | Manage Speaker Coach | ❌ |
+| Accessibility | Receive closed captions | ❌ |
+| | Communication access real-time translation (CART) | ❌ |
+| | Language interpretation | ❌ |
+
+When Teams external users leave the meeting, or the meeting ends, they can no longer send or receive new chat messages and no longer have access to messages sent and received during the meeting.
## Server capabilities
The following table shows supported Teams capabilities:
| | | | [Teams Call Analytics](/MicrosoftTeams/use-call-analytics-to-troubleshoot-poor-call-quality) | ✔️ | | [Teams real-time Analytics](/microsoftteams/use-real-time-telemetry-to-troubleshoot-poor-meeting-quality) | ❌ |-
+| [Teams meeting attendance report](/office/view-and-download-meeting-attendance-reports-in-teams-ae7cf170-530c-47d3-84c1-3aedac74d310) | ✔️ |
## Next steps
The following table shows supported Teams capabilities:
- [Join Teams meeting chat as Teams external user](../../../quickstarts/chat/meeting-interop.md) - [Join meeting options](../../../how-tos/calling-sdk/teams-interoperability.md) - [Communicate as Teams user](../../teams-endpoint.md).-
communication-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/interop/guest/overview.md
You can create an identity and access token for Teams external users on Azure po
With a valid identity, access token, and Teams meeting URL, you can use [Azure Communication Services UI Library](https://azure.github.io/communication-ui-library/?path=/story/composites-call-with-chat-jointeamsmeeting--join-teams-meeting) to join Teams meeting without any code.
+>[!VIDEO https://www.youtube.com/embed/chMHVHLFcao]
+ ### Single-click deployment The [Azure Communication Services Calling Hero Sample](../../../samples/calling-hero-sample.md) demonstrates how developers can use Azure Communication Services Calling Web SDK to join a Teams meeting from a web application as a Teams external user. You can experiment with the capability with single-click deployment to Azure.
communication-services Teams Administration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/interop/guest/teams-administration.md
-# Teams administrator controls
+# Teams controls
+Teams administrators control organization-wide policies and manage and assign user policies. Teams meeting policies are tied to the organizer of the Teams meeting. Teams meetings also have options to customize specific Teams meetings further.
+
+## Teams policies
Teams administrators have the following policies to control the experience for Teams external users in Teams meetings. |Setting name|Policy scope|Description| Supported |
Teams administrators have the following policies to control the experience for T
Your custom application should consider user authentication and other security measures to protect Teams meetings. Be mindful of the security implications of enabling anonymous users to join meetings. Use the [Teams security guide](/microsoftteams/teams-security-guide#addressing-threats-to-teams-meetings) to configure capabilities available to anonymous users.
+## Teams meeting options
+
+Teams meeting organizers can also configure the Teams meeting options to adjust the experience for participants. The following options are supported in Azure Communication Services for external users:
+
+|Option name|Description| Supported |
+| | | |
+| [Automatically admit people](/microsoftteams/meeting-policies-participants-and-guests#automatically-admit-people) | If set to "Everyone", Teams external users can bypass the lobby. Otherwise, Teams external users have to wait in the lobby until an authenticated user admits them. | ✔️ |
+| [Always let callers bypass the lobby](/microsoftteams/meeting-policies-participants-and-guests#allow-dial-in-users-to-bypass-the-lobby)| Participants joining by phone can bypass the lobby. | Not applicable |
+| Announce when callers join or leave| Participants hear announcement sounds when phone participants join and leave the meeting | ✔️ |
+| [Choose co-organizers](/office/add-co-organizers-to-a-meeting-in-teams-0de2c31c-8207-47ff-ae2a-fc1792d466e2)| Not applicable to external users | ✔️ |
+| [Who can present in meetings](/microsoftteams/meeting-policies-in-teams-general#designated-presenter-role-mode) | Controls who in the Teams meeting can share screen. | ❌ |
+|[Manage what attendees see](/office/spotlight-someone-s-video-in-a-teams-meeting-58be74a4-efac-4e89-a212-8d198182081e)|Teams organizer, co-organizer and presenter can spotlight videos for everyone. Azure Communication Services does not receive the spotlight signals. |❌|
+|[Allow mic for attendees](/office/manage-attendee-audio-and-video-permissions-in-teams-meetings-f9db15e1-f46f-46da-95c6-34f9f39e671a)|If the external user is an attendee, this option controls whether they can send local audio. |✔️|
+|[Allow camera for attendees](/office/manage-attendee-audio-and-video-permissions-in-teams-meetings-f9db15e1-f46f-46da-95c6-34f9f39e671a)|If the external user is an attendee, this option controls whether they can send local video. |✔️|
+|[Record automatically](/graph/api/resources/onlinemeeting)|Records the meeting when anyone starts it. A user waiting in the lobby doesn't start the recording.|✔️|
+|Allow meeting chat|If enabled, external users can use the chat associated with the Teams meeting.|✔️|
+|[Allow reactions](/microsoftteams/meeting-policies-in-teams-general#meeting-reactions)|If enabled, external users can use reactions in the Teams meeting |❌|
+|[RTMP-IN](/microsoftteams/stream-teams-meetings)|If enabled, organizers can stream meetings and webinars to external endpoints by providing a Real-Time Messaging Protocol (RTMP) URL and key to the built-in Custom Streaming app in Teams. |Not applicable|
+|[Provide CART Captions](/office/use-cart-captions-in-a-microsoft-teams-meeting-human-generated-captions-2dd889e8-32a8-4582-98b8-6c96cf14eb47)|Communication access real-time translation (CART) is a service in which a trained CART captioner listens to the speech and instantaneously translates all speech to text. As a meeting organizer, you can set up and offer CART captioning to your audience instead of the Microsoft Teams built-in live captions that are automatically generated.|❌|
++ ## Next steps - [Authenticate as Teams external user](../../../quickstarts/access-tokens.md)
communication-services Teams Client Experience https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/interop/guest/teams-client-experience.md
# Experience for users in Teams client Teams external users who join a Teams meeting with Azure Communication Services SDKs are represented in the Teams client like any other Teams anonymous user. Teams external users are marked as "external" in the participant list in Teams clients. As Teams anonymous users, their capabilities in the Teams meeting are limited regardless of the assigned Teams meeting role. +
+![A diagram that shows how external user on Azure Communication Services connects to Teams meeting.](../media/desktop-client-external-user-joins-teams-meeting.png)
+ ## Next steps - [Authenticate as Teams external user](../../../quickstarts/access-tokens.md)
communication-services Teams User Calling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/interop/teams-user-calling.md
Title: Azure Communication Services Teams identity overview
description: Provides an overview of the support for Teams identity in Azure Communication Services Calling SDK. -+
Key features of the Calling SDK:
- **Addressing** - Azure Communication Services is using [Azure Active Directory user identifier](/powershell/module/azuread/get-azureaduser) to address communication endpoints. Clients use Azure Active Directory identities to authenticate to the service and communicate with each other. These identities are used in Calling APIs that provide clients visibility into who is connected to a call (the roster). And are also used in [Microsoft Graph API](/graph/api/user-get). - **Encryption** - The Calling SDK encrypts traffic and prevents tampering on the wire. - **Device Management and Media** - The Calling SDK provides facilities for binding to audio and video devices, encodes content for efficient transmission over the communications data plane, and renders content to output devices and views that you specify. APIs are also provided for screen and application sharing.-- **PSTN** - The Calling SDK can receive and initiate voice calls with the traditional publicly switched telephony system, [using phone numbers you acquire in the Teams Admin Portal](/microsoftteams/pstn-connectivity).
+- **PSTN** - The Calling SDK can receive and initiate voice calls with the traditional publicly switched telephony system [using phone numbers you acquire in the Teams Admin Portal](/microsoftteams/pstn-connectivity).
- **Teams Meetings** - The Calling SDK can [join Teams meetings](../../quickstarts/voice-video-calling/get-started-teams-interop.md) and interact with the Teams voice and video data plane. -- **Notifications** - The Calling SDK provides APIs allowing clients to be notified of an incoming call. In situations where your app is not running in the foreground, patterns are available to [fire pop-up notifications](../notifications.md) ("toasts") to inform users of an incoming call.
+- **Notifications** - The Calling SDK provides APIs that allow clients to be notified of an incoming call. In situations where your app is not running in the foreground, patterns are available to [fire pop-up notifications](../notifications.md) ("toasts") to inform users of an incoming call.
## Detailed Azure Communication Services capabilities
-The following list presents the set of features, which are currently available in the Azure Communication Services Calling SDK for JavaScript.
+The following list presents the set of features that are currently available in the Azure Communication Services Calling SDK for JavaScript.
| Group of features | Capability | JavaScript | | -- | - | - |
The following list presents the set of features, which are currently available i
| | Show state of a call<br/>*Early Media, Incoming, Connecting, Ringing, Connected, Hold, Disconnecting, Disconnected* | ✔️ | | | Show if a participant is muted | ✔️ | | | Show the reason why a participant left a call | ✔️ |
-| | Admit participant in the Lobby into the Teams meeting | ❌ |
+| | Admit participant in the lobby into the Teams meeting | ❌ |
| Screen sharing | Share the entire screen from within the application | ✔️ | | | Share a specific application (from the list of running applications) | ✔️ | | | Share a web browser tab from the list of open tabs | ✔️ |
The following list presents the set of features, which are currently available i
| | Place a group call with PSTN participants | ✔️ | | | Promote a one-to-one call with a PSTN participant into a group call | ✔️ | | | Dial-out from a group call as a PSTN participant | ✔️ |
-| | Suppport for early media | ❌ |
+| | Support for early media | ❌ |
| General | Test your mic, speaker, and camera with an audio testing service (available by calling 8:echo123) | ✔️ | | Device Management | Ask for permission to use audio and/or video | ✔️ | | | Get camera list | ✔️ |
The following list presents the set of Teams capabilities, which are currently a
| | Transfer a call to a call | ✔️ | | | Transfer a call to Voicemail | ❌ | | | Merge ongoing calls | ❌ |
-| | Place a call on behalf of user | ❌ |
+| | Place a call on behalf of the user | ❌ |
| | Start call recording | ❌ | | | Start call transcription | ❌ | | | Start live captions | ❌ | | | Receive information of call being recorded | ✔️ |
-| PSTN | Make an Emergency call | ❌ |
+| PSTN | Make an Emergency call | ✔️ |
| | Place a call honors location-based routing | ❌ | | | Support for survivable branch appliance | ❌ | | Phone system | Receive a call from Teams auto attendant | ✔️ |
The following list presents the set of Teams capabilities, which are currently a
| | Transfer a call from Teams call queue (only conference mode) | ✔️ | | Compliance | Place a call honors information barriers | ✔️ | | | Support for compliance recording | ✔️ |
+| Meeting | [Include participant in Teams meeting attendance report](/office/view-and-download-meeting-attendance-reports-in-teams-ae7cf170-530c-47d3-84c1-3aedac74d310) | ❌ |
++
+## Teams meeting options
+
+Teams meeting organizers can configure the Teams meeting options to adjust the experience for participants. The following options are supported in Azure Communication Services for Teams users:
+
+|Option name|Description| Supported |
+| | | |
+| [Automatically admit people](/microsoftteams/meeting-policies-participants-and-guests#automatically-admit-people) | Teams users can bypass the lobby if the Teams meeting organizer sets the value to include "people in my organization" for single-tenant meetings and "people in trusted organizations" for cross-tenant meetings. Otherwise, Teams users have to wait in the lobby until an authenticated user admits them.| ✔️ |
+| [Always let callers bypass the lobby](/microsoftteams/meeting-policies-participants-and-guests#allow-dial-in-users-to-bypass-the-lobby)| Participants joining by phone can bypass the lobby | Not applicable |
+| Announce when callers join or leave| Participants hear announcement sounds when phone participants join and leave the meeting | ✔️ |
+| [Choose co-organizers](/office/add-co-organizers-to-a-meeting-in-teams-0de2c31c-8207-47ff-ae2a-fc1792d466e2)| Teams user can be selected as co-organizer. It affects the availability of actions in Teams meetings. | ✔️ |
+| [Who can present in meetings](/microsoftteams/meeting-policies-in-teams-general#designated-presenter-role-mode) | Controls who in the Teams meeting can share screen. | ❌ |
+|[Manage what attendees see](/office/spotlight-someone-s-video-in-a-teams-meeting-58be74a4-efac-4e89-a212-8d198182081e)|Teams organizer, co-organizer and presenter can spotlight videos for everyone. Azure Communication Services does not receive the spotlight signals. |❌|
+|[Allow mic for attendees](/office/manage-attendee-audio-and-video-permissions-in-teams-meetings-f9db15e1-f46f-46da-95c6-34f9f39e671a)|If the Teams user is an attendee, this option controls whether they can send local audio |✔️|
+|[Allow camera for attendees](/office/manage-attendee-audio-and-video-permissions-in-teams-meetings-f9db15e1-f46f-46da-95c6-34f9f39e671a)|If the Teams user is an attendee, this option controls whether they can send local video |✔️|
+|[Record automatically](/graph/api/resources/onlinemeeting)|Records meeting when anyone starts the meeting. The user in the lobby does not start a recording.|✔️|
+|Allow meeting chat|If enabled, Teams users can use the chat associated with the Teams meeting.|✔️|
+|[Allow reactions](/microsoftteams/meeting-policies-in-teams-general#meeting-reactions)|If enabled, Teams users can use reactions in the Teams meeting. Azure Communication Services doesn't support reactions. |❌|
+|[RTMP-IN](/microsoftteams/stream-teams-meetings)|If enabled, organizers can stream meetings and webinars to external endpoints by providing a Real-Time Messaging Protocol (RTMP) URL and key to the built-in Custom Streaming app in Teams. |Not applicable|
+|[Provide CART Captions](/office/use-cart-captions-in-a-microsoft-teams-meeting-human-generated-captions-2dd889e8-32a8-4582-98b8-6c96cf14eb47)|Communication access real-time translation (CART) is a service in which a trained CART captioner listens to the speech and instantaneously translates all speech to text. As a meeting organizer, you can set up and offer CART captioning to your audience instead of the Microsoft Teams built-in live captions that are automatically generated.|❌|
+ ## Next steps
communication-services Teams Client Experience https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/interop/teams-user/teams-client-experience.md
+
+ Title: Teams client experience for Teams user
+
+description: Teams client experience of Azure Communication Services support for Teams users
++ Last updated : 7/9/2022+++++
+# Experience for users in Teams client interacting with Teams users
+Teams users calling users in the same organization, or joining Teams meetings organized in the same organization, will be represented in the Teams client like any other Teams user. Teams users calling users in trusted organizations, or joining Teams meetings organized in trusted organizations, will be represented in Teams clients as Teams users from a different organization. Teams users from other organizations will be marked as "external" in the participant lists of Teams clients. Because they come from a trusted organization, their capabilities in the Teams meeting will be limited regardless of the assigned Teams meeting role.
+
+## Joining meetings within the organization
+The following image illustrates the experience of a Teams user in the Teams client interacting with another Teams user from the same organization who joined the Teams meeting by using the Azure Communication Services SDK.
+![A diagram that shows how a Teams user on Azure Communication Services connects to a Teams meeting organized by the same organization.](../media/desktop-client-teams-user-joins-teams-meeting.png)
+
+## Joining meetings outside of the organization
+The following image illustrates the experience of a Teams user in the Teams client interacting with another Teams user from a different organization who joined the Teams meeting by using the Azure Communication Services SDK.
+![A diagram that shows how a Teams user on Azure Communication Services connects to a Teams meeting organized by a different organization.](../media/desktop-client-external-user-joins-teams-meeting.png)
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Get started with calling](../../../quickstarts/voice-video-calling/get-started-with-voice-video-calling-custom-teams-client.md)
connectors Connectors Create Api Crmonline https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-create-api-crmonline.md
Title: Connect to Dynamics 365
-description: Create and manage Dynamics 365 in workflows using Azure Logic Apps.
+ Title: Connect to Dynamics 365 (Deprecated)
+description: Connect to your Dynamics 365 database from workflows in Azure Logic Apps.
ms.suite: integration
Last updated 08/05/2022
tags: connectors
-# Connect to Dynamics 365 from workflows in Azure Logic Apps
+# Connect to Dynamics 365 from workflows in Azure Logic Apps (Deprecated)
> [!IMPORTANT] > The Dynamics 365 connector is officially deprecated and is no longer available. Instead, use the
connectors Connectors Create Api Ftp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-create-api-ftp.md
Title: Connect to FTP servers
-description: Connect to an FTP server from workflows in Azure Logic Apps.
+description: Connect to your FTP server from workflows in Azure Logic Apps.
ms.suite: integration Previously updated : 07/24/2022 Last updated : 08/15/2022 tags: connectors # Connect to an FTP server from workflows in Azure Logic Apps
-This article shows how to access your FTP server from a workflow in Azure Logic Apps with the FTP connector. You can then create automated workflows that run when triggered by events in your FTP server or in other systems and run actions to manage files on your FTP server.
+This article shows how to access your File Transfer Protocol (FTP) server from a workflow in Azure Logic Apps with the FTP connector. You can then create automated workflows that run when triggered by events in your FTP server or in other systems and run actions to manage files on your FTP server.
For example, your workflow can start with an FTP trigger that monitors and responds to events on your FTP server. The trigger makes the outputs available to subsequent actions in your workflow. Your workflow can run FTP actions that create, send, receive, and manage files through your FTP server account using the following specific tasks:
The FTP connector has different versions, based on [logic app type and host envi
By default, FTP actions can read or write files that are *200 MB or smaller*. Currently, the FTP built-in connector doesn't support chunking.
- * Managed connector for Consumption and Standard workflows
+ * Managed or Azure-hosted connector for Consumption and Standard workflows
By default, FTP actions can read or write files that are *50 MB or smaller*. To handle files larger than 50 MB, FTP actions support [message chunking](../logic-apps/logic-apps-handle-large-messages.md). The **Get file content** action implicitly uses chunking.
-* FTP managed connector triggers might experience missing, incomplete, or delayed results when the "last modified" timestamp is preserved. On the other hand, the FTP *built-in* connector trigger in Standard logic app workflows doesn't have this limitation. For more information, review the FTP connector's [Limitations](/connectors/ftp/#limitations) section.
+* Triggers for the FTP managed or Azure-hosted connector might experience missing, incomplete, or delayed results when the "last modified" timestamp is preserved. On the other hand, the FTP *built-in* connector trigger in Standard logic app workflows doesn't have this limitation. For more information, review the FTP connector's [Limitations](/connectors/ftp/#limitations) section.
+
+* The FTP managed or Azure-hosted connector can create a limited number of connections to the FTP server, based on the connection capacity in the Azure region where your logic app resource exists. If this limit poses a problem in a Consumption logic app workflow, consider creating a Standard logic app workflow and using the FTP built-in connector instead.
## Prerequisites
connectors Connectors Create Api Sftp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-create-api-sftp.md
Title: Connect to SFTP account (Deprecated)
-description: Automate tasks and processes that monitor, create, manage, send, and receive files for an SFTP server using Azure Logic Apps.
+ Title: Connect to SFTP (Deprecated)
+description: Connect to an SFTP server from workflows in Azure Logic Apps.
ms.suite: integration
tags: connectors
-# Monitor, create, and manage SFTP files in Azure Logic Apps
+# Connect to SFTP from workflows in Azure Logic Apps (Deprecated)
> [!IMPORTANT] > Please use the [SFTP-SSH connector](../connectors/connectors-sftp-ssh.md) as the SFTP connector is deprecated. You can no longer select SFTP
To automate tasks that monitor, create, send, and receive files on a [Secure Fil
You can use triggers that monitor events on your SFTP server and make output available to other actions. You can use actions that perform various tasks on your SFTP server. You can also have other actions in your logic app use the output from SFTP actions. For example, if you regularly retrieve files from your SFTP server, you can send email alerts about those files and their content by using the Office 365 Outlook connector or Outlook.com connector. If you're new to logic apps, review [What is Azure Logic Apps?](../logic-apps/logic-apps-overview.md)
-## Limits
+## Limitations
The SFTP connector handles only files that are *50 MB or smaller* and doesn't support [message chunking](../logic-apps/logic-apps-handle-large-messages.md). For larger files, use the [SFTP-SSH connector](../connectors/connectors-sftp-ssh.md). For differences between the SFTP connector and the SFTP-SSH connector, review [Compare SFTP-SSH versus SFTP](../connectors/connectors-sftp-ssh.md#comparison) in the SFTP-SSH article. +
+ * The SFTP managed or Azure-hosted connector for Consumption and Standard workflows handles only files that are *50 MB or smaller* and doesn't support [message chunking](../logic-apps/logic-apps-handle-large-messages.md). For larger files, use the [SFTP-SSH connector](../connectors/connectors-sftp-ssh.md). For differences between the SFTP connector and the SFTP-SSH connector, review [Compare SFTP-SSH versus SFTP](../connectors/connectors-sftp-ssh.md#comparison) in the SFTP-SSH article.
+
+ By default, SFTP-SSH actions can read or write files that are *50 MB or smaller*. To handle files larger than 50 MB, SFTP-SSH actions support [message chunking](../logic-apps/logic-apps-handle-large-messages.md). The **Get file content** action implicitly uses chunking.
+
+* The SFTP-SSH managed or Azure-hosted connector can create a limited number of connections to the SFTP server, based on the connection capacity in the Azure region where your logic app resource exists. If this limit poses a problem in a Consumption logic app workflow, consider creating a Standard logic app workflow and using the SFTP-SSH built-in connector instead.
+ ## Prerequisites * An Azure account and subscription. If you don't have an Azure subscription, [sign up for a free Azure account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
connectors Connectors Create Api Twilio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-create-api-twilio.md
- Title: Connect to Twilio with Azure Logic Apps
-description: Automate tasks and workflows that manage global SMS, MMS, and IP messages through your Twilio account using Azure Logic Apps.
--- Previously updated : 08/25/2018
-tags: connectors
--
-# Connect to Twilio from Azure Logic Apps
-
-With Azure Logic Apps and the Twilio connector,
-you can create automated tasks and workflows
-that get, send, and list messages in Twilio,
-which include global SMS, MMS, and IP messages.
-You can use these actions to perform tasks with
-your Twilio account. You can also have other actions
-use the output from Twilio actions. For example,
-when a new message arrives, you can send the message
-content with the Slack connector. If you're new to logic apps,
-review [What is Azure Logic Apps?](../logic-apps/logic-apps-overview.md)
-
-## Prerequisites
-
-* An Azure account and subscription. If you don't have an Azure subscription,
-[sign up for a free Azure account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-
-* From [Twilio](https://www.twilio.com/):
-
- * Your Twilio account ID and
- [authentication token](https://support.twilio.com/hc/en-us/articles/223136027-Auth-Tokens-and-How-to-Change-Them),
- which you can find on your Twilio dashboard
-
- Your credentials authorize your logic app to create a
- connection and access your Twilio account from your logic app.
- If you're using a Twilio trial account,
- you can send SMS only to *verified* phone numbers.
-
- * A verified Twilio phone number that can send SMS
-
- * A verified Twilio phone number that can receive SMS
-
-* Basic knowledge about
-[how to create logic apps](../logic-apps/quickstart-create-first-logic-app-workflow.md)
-
-* The logic app where you want to access your Twilio account.
-To use a Twilio action, start your logic app with another trigger,
-for example, the **Recurrence** trigger.
-
-## Connect to Twilio
--
-1. Sign in to the [Azure portal](https://portal.azure.com),
-and open your logic app in Logic App Designer, if not open already.
-
-1. Choose a path:
-
- * Under the last step where you want to add an action,
- choose **New step**.
-
- -or-
-
- * Between the steps where you want to add an action,
- move your pointer over the arrow between steps.
- Choose the plus sign (**+**) that appears,
- and then select **Add an action**.
-
- In the search box, enter "twilio" as your filter.
- Under the actions list, select the action you want.
-
-1. Provide the necessary details for your connection,
-and then choose **Create**:
-
- * The name to use for your connection
- * Your Twilio account ID
- * Your Twilio access (authentication) token
-
-1. Provide the necessary details for your selected action
-and continue building your logic app's workflow.
-
-## Connector reference
-
-For technical details about triggers, actions, and limits, which are
-described by the connector's OpenAPI (formerly Swagger) description,
-review the connector's [reference page](/connectors/twilio/).
-
-## Get support
-
-* For questions, visit the [Microsoft Q&A question page for Azure Logic Apps](/answers/topics/azure-logic-apps.html).
-* To submit or vote on feature ideas, visit the [Logic Apps user feedback site](https://aka.ms/logicapps-wish).
-
-## Next steps
-
-* Learn about other [Logic Apps connectors](../connectors/apis-list.md)
connectors Connectors Schema Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-schema-migration.md
- Title: Migrate apps to latest schema
-description: How to migrate logic app workflow JSON definitions to the most recent Workflow Definition Language schema version
--- Previously updated : 08/25/2018--
-# Migrate logic apps to latest schema version
-
-To move your existing logic apps to the newest schema,
-follow these steps:
-
-1. In the [Azure portal](https://portal.azure.com),
-open your logic app in the Logic App Designer.
-
-2. On your logic app's menu, choose **Overview**.
-On the toolbar, choose **Update Schema**.
-
- > [!NOTE]
- > When you choose **Update Schema**, Azure Logic Apps
- > automatically runs the migration steps and provides
- > the code output for you. You can use this output for
- > updating your logic app definition. However, make
- > sure you follow best practices as described in the
- > following **Best practices** section.
-
- ![Update Schema](./media/connectors-schema-migration/update-schema.png)
-
- The Update Schema page appears and shows
- a link to a document that describes the
- improvements in the new schema.
-
-## Best practices
-
-Here are some best practices for migrating your
-logic apps to the latest schema version:
-
-* Copy the migrated script to a new logic app.
-Don't overwrite the old version until you complete
-your testing and confirm that your migrated app works as expected.
-
-* Test your logic app **before** putting in production.
-
-* After you finish migration, start updating your logic
-apps to use the [managed APIs](../connectors/apis-list.md)
-where possible. For example, start using Dropbox v2
-everywhere that you use DropBox v1.
-
-## Next steps
-
-* Learn how to [manually migrate your Logic apps](../logic-apps/logic-apps-schema-2016-04-01.md)
-
connectors Connectors Sftp Ssh https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-sftp-ssh.md
Title: Connect to SFTP server with SSH
-description: Automate tasks that monitor, create, manage, send, and receive files for an SFTP server by using SSH and Azure Logic Apps.
+ Title: Connect to SFTP using SSH from workflows
+description: Connect to your SFTP file server over SSH from workflows in Azure Logic Apps.
ms.suite: integration Previously updated : 05/06/2022 Last updated : 08/16/2022 tags: connectors
-# Create and manage SFTP files using SSH and Azure Logic Apps
+# Connect to an SFTP file server using SSH from workflows in Azure Logic Apps
To automate tasks that create and manage files on a [Secure File Transfer Protocol (SFTP)](https://www.ssh.com/ssh/sftp/) server using the [Secure Shell (SSH)](https://www.ssh.com/ssh/protocol/) protocol, you can create automated integration workflows by using Azure Logic Apps and the SFTP-SSH connector. SFTP is a network protocol that provides file access, file transfer, and file management over any reliable data stream.
In your workflow, you can use a trigger that monitors events on your SFTP server
For differences between the SFTP-SSH connector and the SFTP connector, review the [Compare SFTP-SSH versus SFTP](#comparison) section later in this topic.
-## Limits
+## Limitations
* The SFTP-SSH connector currently doesn't support these SFTP servers:
For differences between the SFTP-SSH connector and the SFTP connector, review th
1. Follow the trigger with the SFTP-SSH **Get file content** action. This action reads the complete file and implicitly uses message chunking.
+* The SFTP-SSH managed or Azure-hosted connector can create a limited number of connections to the SFTP server, based on the connection capacity in the Azure region where your logic app resource exists. If this limit poses a problem in a Consumption logic app workflow, consider creating a Standard logic app workflow and using the SFTP-SSH built-in connector instead.
+ <a name="comparison"></a> ## Compare SFTP-SSH versus SFTP
container-apps Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/policy-reference.md
Title: Built-in policy definitions for Azure Container Apps
description: Lists Azure Policy built-in policy definitions for Azure Container Apps. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/08/2022 Last updated : 08/16/2022
container-instances Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/policy-reference.md
Previously updated : 08/08/2022 Last updated : 08/16/2022 # Azure Policy built-in definitions for Azure Container Instances
container-registry Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/policy-reference.md
Title: Built-in policy definitions for Azure Container Registry description: Lists Azure Policy built-in policy definitions for Azure Container Registry. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/08/2022 Last updated : 08/16/2022
cosmos-db Hierarchical Partition Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/hierarchical-partition-keys.md
In a real world scenario, some tenants can grow large with thousands of users, w
Using a synthetic partition key that combines **TenantId** and **UserId** adds complexity to the application. Additionally, the synthetic partition key queries for a tenant will still be cross-partition, unless all users are known and specified in advance.
-With hierarchical partition keys, we can partition first on **TenantId**, and then **UserId**. We can even partition further down to another level, such as **SessionId**, as long as the overall depth doesn't exceed three levels. When a physical partition exceeds 50 GB of storage, Cosmos DB will automatically split the physical partition so that roughly half of the data on the will be on one physical partition, and half on the other. Effectively, subpartitioning means that a single TenantId can exceed 20 GB of data, and it's possible for a TenantId's data to span multiple physical partitions.
+With hierarchical partition keys, we can partition first on **TenantId**, and then **UserId**. We can even partition further down to another level, such as **SessionId**, as long as the overall depth doesn't exceed three levels. When a physical partition exceeds 50 GB of storage, Cosmos DB will automatically split the physical partition so that roughly half of the data will be on one physical partition, and half on the other. Effectively, subpartitioning means that a single TenantId can exceed 20 GB of data, and it's possible for a TenantId's data to span multiple physical partitions.
Queries that specify either the **TenantId**, or both **TenantId** and **UserId** will be efficiently routed to only the subset of physical partitions that contain the relevant data. Specifying the full or prefix subpartitioned partition key path effectively avoids a full fan-out query. For example, if the container had 1000 physical partitions, but a particular **TenantId** was only on five of them, the query would only be routed to the much smaller number of relevant physical partitions.
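As an illustration only, here's a minimal Azure PowerShell sketch of creating such a subpartitioned container. It assumes a recent Az.CosmosDB module version that accepts the **MultiHash** partition-key kind (hierarchical partition keys are a preview capability), and the account, database, and resource group names are placeholders:

```powershell
# Sketch: create a container subpartitioned by TenantId, then UserId, then SessionId.
# All resource names below are placeholders for your own environment.
New-AzCosmosDBSqlContainer `
    -ResourceGroupName "my-resource-group" `
    -AccountName "my-cosmos-account" `
    -DatabaseName "my-database" `
    -Name "user-sessions" `
    -PartitionKeyKind MultiHash `
    -PartitionKeyVersion 2 `
    -PartitionKeyPath @("/TenantId", "/UserId", "/SessionId")
```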
cosmos-db Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/policy-reference.md
Title: Built-in policy definitions for Azure Cosmos DB description: Lists Azure Policy built-in policy definitions for Azure Cosmos DB. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/08/2022 Last updated : 08/16/2022
cost-management-billing Manage Billing Across Tenants https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/manage-billing-across-tenants.md
tags: billing
Previously updated : 08/04/2022 Last updated : 08/15/2022
If the Provisioning access setting is turned on, a unique link is created for yo
Before assigning roles, make sure you [add a tenant as an associated billing tenant and enable billing management access setting](#add-an-associated-billing-tenant).
+> [!IMPORTANT]
+> Any user with a role in the billing account can see all users from all tenants who have access to that billing account. For example, Contoso.com is the primary billing tenant. A billing account owner adds Fabrikam.com as an associated billing tenant. Then, the billing account owner adds User1 as a billing account owner. As a result, User1 can see all users who have access to the billing account on both Contoso.com and Fabrikam.com.
+ ### To assign roles and send an email invitation 1. Sign in to the [Azure portal](https://portal.azure.com).
cost-management-billing Calculate Ea Reservations Savings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/reservations/calculate-ea-reservations-savings.md
+
+ Title: Calculate EA reservations cost savings
+
+description: Learn how Enterprise Agreement users manually calculate their reservations savings.
+++++ Last updated : 08/15/2022+++
+# Calculate EA reservations cost savings
+
+This article helps Enterprise Agreement users manually calculate their reservations savings. In this article, you download your amortized usage and charges file, prepare an Excel worksheet, and then do some calculations to determine your savings. There are several steps involved and we'll walk you through the process.
+
+> [!NOTE]
+> The prices shown in this article are for example purposes only.
+
+Although the example process shown in this article uses Excel, you can use the spreadsheet application of your choice.
+
+This article is specific to EA users. Microsoft Customer Agreement (MCA) users can use similar steps to calculate their reservation savings through invoices. However, the MCA amortized usage file doesn't contain UnitPrice (on-demand pricing) for reservations. Other resources in the file do. For more information, see [Download usage for your Microsoft Customer Agreement](understand-reserved-instance-usage-ea.md#download-usage-for-your-microsoft-customer-agreement).
+
+## Required permissions
+
+To view and download usage data as an EA customer, you must be an Enterprise Administrator, Account Owner, or Department Admin with the view charges policy enabled.
+
+## Download all usage amortized charges
+
+1. Sign in to the [Azure portal](https://portal.azure.com/).
+2. Search for _Cost Management + Billing_.
+ :::image type="content" source="./media/calculate-ea-reservations-savings/search-cost-management.png" alt-text="Screenshot showing search for cost management." lightbox="./media/calculate-ea-reservations-savings/search-cost-management.png" :::
+3. If you have access to multiple billing accounts, select the billing scope for your EA billing account.
+4. Select **Usage + charges**.
+5. For the month you want to download, select **Download**.
+ :::image type="content" source="./media/calculate-ea-reservations-savings/download-usage-ea.png" alt-text="Screenshot showing Usage + charges download." lightbox="./media/calculate-ea-reservations-savings/download-usage-ea.png" :::
+6. On the Download Usage + Charges page, under Usage Details, select **Amortized charges (usage and purchases)**.
+ :::image type="content" source="./media/calculate-ea-reservations-savings/select-usage-detail-charge-type-small.png" alt-text="Screenshot showing the Download usage + charges window." lightbox="./media/calculate-ea-reservations-savings/select-usage-detail-charge-type.png" :::
+7. Select **Prepare document**.
+8. It could take a while for Azure to prepare your download, depending on your monthly usage. When it's ready for download, select **Download csv**.
+
+## Prepare data and calculate savings
+
+Because Azure usage files are in CSV format, you need to prepare the data for use in Excel. Then you calculate your savings.
+
+1. Open the amortized cost file in Excel and save it as an Excel workbook.
+2. The data resembles the following example.
+ :::image type="content" source="./media/calculate-ea-reservations-savings/unformatted-data.png" alt-text="Example screenshot of the unformatted amortized usage file." lightbox="./media/calculate-ea-reservations-savings/unformatted-data.png" :::
+3. In the Home ribbon, select **Format as Table**.
+4. In the Create Table window, select **My table has headers**.
+5. In the ReservationName column, set a filter to clear **Blanks**.
+ :::image type="content" source="./media/calculate-ea-reservations-savings/reservation-name-clear-blanks-small.png" alt-text="Screenshot showing clear Blanks data." lightbox="./media/calculate-ea-reservations-savings/reservation-name-clear-blanks.png" :::
+6. Find the ChargeType column and then to the right of the column name, select the sort and filter symbol (the down arrow).
+7. For the **ChargeType** column, set a filter on it to select only **Usage**. Clear any other selections.
+ :::image type="content" source="./media/calculate-ea-reservations-savings/charge-type-selection-small.png" alt-text="Screenshot showing ChargeType selection." lightbox="./media/calculate-ea-reservations-savings/charge-type-selection.png" :::
+8. To the right of **UnitPrice**, insert a column and label it with a title like **TotalUsedSavings**.
+9. In the first cell under TotalUsedSavings, create a formula that calculates _(UnitPrice - EffectivePrice) \* Quantity_.
+ :::image type="content" source="./media/calculate-ea-reservations-savings/total-used-savings-formula.png" alt-text="Screenshot showing the TotalUsedSavings formula." lightbox="./media/calculate-ea-reservations-savings/total-used-savings-formula.png" :::
+10. Copy the formula to all the other empty TotalUsedSavings cells.
+11. At the bottom of the TotalUsedSavings column, sum the column's values.
+ :::image type="content" source="./media/calculate-ea-reservations-savings/total-used-savings-summed.png" alt-text="Screenshot showing the summed values." lightbox="./media/calculate-ea-reservations-savings/total-used-savings-summed.png" :::
+12. Somewhere under your data, create a cell named _TotalUsedSavingsValue_. Next to it, copy the summed TotalUsedSavings cell and paste it as **Values**. This step is important because the next step will change the applied filter and affect the summed total.
+ :::image type="content" source="./media/calculate-ea-reservations-savings/paste-value-used.png" alt-text="Screenshot showing pasting the TotalUsedSavings cell as Values." lightbox="./media/calculate-ea-reservations-savings/paste-value-used.png" :::
+13. For the **ChargeType** column, set a filter on it to select only **UnusedReservation**. Clear any other selections.
+14. To the right of the TotalUsedSavings column, insert a column and label it with a title like **TotalUnused**.
+15. In the first cell under TotalUnused, create a formula that calculates _EffectivePrice \* Quantity_.
+ :::image type="content" source="./media/calculate-ea-reservations-savings/total-unused-formula.png" alt-text="Screenshot showing the TotalUnused formula." lightbox="./media/calculate-ea-reservations-savings/total-unused-formula.png" :::
+16. At the bottom of the TotalUnused column, sum the column's values.
+17. Somewhere under your data, create a cell named _TotalUnusedValue_. Next to it, copy the TotalUnused cell and paste it as **Values**.
+18. Under the TotalUsedSavingsValue and TotalUnusedValue cells, create a cell named _ReservationSavings_. Next to it, subtract TotalUnusedValue from TotalUsedSavingsValue. The calculation result is your reservation savings.
+ :::image type="content" source="./media/calculate-ea-reservations-savings/reservation-savings.png" alt-text="Screenshot showing the ReservationSavings calculation and final savings." lightbox="./media/calculate-ea-reservations-savings/reservation-savings.png" :::
+
+If you see a negative savings value, then you likely have many unused reservations. You should review your reservation usage to maximize them. For more information, see [Optimize reservation use](manage-reserved-vm-instance.md#optimize-reservation-use).
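If you prefer to script the calculation rather than work through it in Excel, the following PowerShell sketch performs the same math against the downloaded amortized CSV. It assumes the file uses the column names shown earlier (ReservationName, ChargeType, UnitPrice, EffectivePrice, Quantity); the file path is a placeholder for your own download.

```powershell
# Sketch: compute reservation savings from the downloaded amortized usage CSV.
$rows = Import-Csv -Path 'C:\temp\amortized-usage.csv' |
    Where-Object { $_.ReservationName }   # keep only rows that belong to a reservation

# Savings on used reservation hours: (UnitPrice - EffectivePrice) * Quantity
$totalUsedSavings = ($rows | Where-Object { $_.ChargeType -eq 'Usage' } |
    ForEach-Object { ([double]$_.UnitPrice - [double]$_.EffectivePrice) * [double]$_.Quantity } |
    Measure-Object -Sum).Sum

# Cost of unused reservation hours: EffectivePrice * Quantity
$totalUnused = ($rows | Where-Object { $_.ChargeType -eq 'UnusedReservation' } |
    ForEach-Object { [double]$_.EffectivePrice * [double]$_.Quantity } |
    Measure-Object -Sum).Sum

# Net reservation savings for the downloaded month
'Reservation savings: {0:N2}' -f ($totalUsedSavings - $totalUnused)
```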
+
+## Other ways to get data and see savings
+
+Using the preceding steps, you can repeat the process for any number of months. Doing so allows you to see your savings over a longer period.
+
+Instead of manually calculating your savings, you can see the same savings by viewing the RI savings report in the [Cost Management Power BI App for Enterprise Agreements](../costs/analyze-cost-data-azure-cost-management-power-bi-template-app.md). The Power BI app connects to your Azure data and performs the savings calculations automatically. The report shows savings for the period you have set, so it can span multiple months.
+
+Instead of downloading usage files, one per month, you can get all your usage data for a specific date range using exports from Cost Management and output the data to Azure Storage. Doing so allows you to see your savings over a longer period. For more information about creating an export, see [Create and manage exported data](../costs/tutorial-export-acm-data.md).
+
+## Next steps
+
+- If you have any unused reservations, read [Optimize reservation use](manage-reserved-vm-instance.md#optimize-reservation-use).
+- Learn more about creating an export at [Create and manage exported data](../costs/tutorial-export-acm-data.md).
+- Read about the RI savings report in the [Cost Management Power BI App for Enterprise Agreements](../costs/analyze-cost-data-azure-cost-management-power-bi-template-app.md).
data-factory Connector Azure Data Lake Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-azure-data-lake-storage.md
Previously updated : 07/04/2022 Last updated : 08/15/2022 # Copy and transform data in Azure Data Lake Storage Gen2 using Azure Data Factory or Azure Synapse Analytics
The following properties are supported for Data Lake Storage Gen2 under `storeSe
| | | -- | | type | The type property under `storeSettings` must be set to **AzureBlobFSWriteSettings**. | Yes | | copyBehavior | Defines the copy behavior when the source is files from a file-based data store.<br/><br/>Allowed values are:<br/><b>- PreserveHierarchy (default)</b>: Preserves the file hierarchy in the target folder. The relative path of the source file to the source folder is identical to the relative path of the target file to the target folder.<br/><b>- FlattenHierarchy</b>: All files from the source folder are in the first level of the target folder. The target files have autogenerated names. <br/><b>- MergeFiles</b>: Merges all files from the source folder to one file. If the file name is specified, the merged file name is the specified name. Otherwise, it's an autogenerated file name. | No |
-| blockSizeInMB | Specify the block size in MB used to write data to ADLS Gen2. Learn more [about Block Blobs](/rest/api/storageservices/understanding-block-blobs--append-blobs--and-page-blobs#about-block-blobs). <br/>Allowed value is **between 4 MB and 100 MB**. <br/>By default, ADF automatically determines the block size based on your source store type and data. For non-binary copy into ADLS Gen2, the default block size is 100 MB so as to fit in at most 4.95-TB data. It may be not optimal when your data is not large, especially when you use Self-hosted Integration Runtime with poor network resulting in operation timeout or performance issue. You can explicitly specify a block size, while ensure blockSizeInMB*50000 is big enough to store the data, otherwise copy activity run will fail. | No |
+| blockSizeInMB | Specify the block size in MB used to write data to ADLS Gen2. Learn more [about Block Blobs](/rest/api/storageservices/understanding-block-blobs--append-blobs--and-page-blobs#about-block-blobs). <br/>Allowed value is **between 4 MB and 100 MB**. <br/>By default, ADF automatically determines the block size based on your source store type and data. For non-binary copy into ADLS Gen2, the default block size is 100 MB so as to fit at most approximately 4.75-TB data. This might not be optimal when your data isn't large, especially when you use a self-hosted integration runtime over a poor network, which can result in operation timeouts or performance issues. You can explicitly specify a block size, but make sure that blockSizeInMB*50000 is large enough to store the data; otherwise, the copy activity run will fail. | No |
| maxConcurrentConnections | The upper limit of concurrent connections established to the data store during the activity run. Specify a value only when you want to limit concurrent connections.| No | | metadata |Set custom metadata when copy to sink. Each object under the `metadata` array represents an extra column. The `name` defines the metadata key name, and the `value` indicates the data value of that key. If [preserve attributes feature](./copy-activity-preserve-metadata.md#preserve-metadata) is used, the specified metadata will union/overwrite with the source file metadata.<br/><br/>Allowed data values are:<br/>- `$$LASTMODIFIED`: a reserved variable indicates to store the source files' last modified time. Apply to file-based source with binary format only.<br/><b>- Expression<b><br/>- <b>Static value<b>| No |
data-factory Connector Troubleshoot Azure Cosmos Db https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-troubleshoot-azure-cosmos-db.md
Previously updated : 10/01/2021 Last updated : 07/29/2022
Azure Cosmos DB calculates RUs, see [Request units in Azure Cosmos DB](../cosmos
- **Recommendation**: To check the error details, see [Azure Cosmos DB help document](../cosmos-db/troubleshoot-dot-net-sdk.md). For further help, contact the Azure Cosmos DB team.
+## Error code: CosmosDbSqlApiPartitionKeyExceedStorage
+
+- **Message**: `The size of data each logical partition can store is limited, current partitioning design and workload failed to store more than the allowed amount of data for a given partition key value.`
+
+- **Cause**: The amount of data that each logical partition can store is limited, and the data for this partition key value reached the maximum size of a logical partition.
+
+- **Recommendation**: Check your Azure Cosmos DB partition design. For more information, see [Logical partitions](../cosmos-db/partitioning-overview.md#logical-partitions).
+ ## Next steps For more troubleshooting help, try these resources:
data-factory Continuous Integration Delivery Automate Azure Pipelines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/continuous-integration-delivery-automate-azure-pipelines.md
Deployment can fail if you try to update active triggers. To update active trigg
```powershell $triggersADF = Get-AzDataFactoryV2Trigger -DataFactoryName $DataFactoryName -ResourceGroupName $ResourceGroupName-
+
$triggersADF | ForEach-Object { Stop-AzDataFactoryV2Trigger -ResourceGroupName $ResourceGroupName -DataFactoryName $DataFactoryName -Name $_.name -Force } ```
You can complete similar steps (with the `Start-AzDataFactoryV2Trigger` function
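For reference, a minimal sketch of the corresponding post-deployment step, restarting the triggers with the same variables used in the stop snippet above, might look like this:

```powershell
# Sketch: restart the triggers after deployment, reusing the same variables as the stop step.
$triggersADF = Get-AzDataFactoryV2Trigger -DataFactoryName $DataFactoryName -ResourceGroupName $ResourceGroupName

$triggersADF | ForEach-Object { Start-AzDataFactoryV2Trigger -ResourceGroupName $ResourceGroupName -DataFactoryName $DataFactoryName -Name $_.name -Force }
```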
The data factory team has provided a [sample pre- and post-deployment script](continuous-integration-delivery-sample-script.md).
+> [!NOTE]
+> Use the [PrePostDeploymentScript.Ver2.ps1](https://github.com/Azure/Azure-DataFactory/blob/main/SamplesV2/ContinuousIntegrationAndDelivery/PrePostDeploymentScript.Ver2.ps1) if you would like to turn off/on only the triggers that have been modified, instead of turning all triggers off and on during CI/CD.
+
+>[!WARNING]
+>Make sure to use **PowerShell Core** in the ADO task to run the script.
+ ## Next steps - [Continuous integration and delivery overview](continuous-integration-delivery.md)
data-factory Continuous Integration Delivery Improvements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/continuous-integration-delivery-improvements.md
Previously updated : 06/08/2022 Last updated : 08/15/2022 # Automated publishing for continuous integration and delivery
npm run build export C:\DataFactories\DevDataFactory /subscriptions/xxxxxxxx-xxx
- `FactoryId` is a mandatory field that represents the Data Factory resource ID in the format `/subscriptions/<subId>/resourceGroups/<rgName>/providers/Microsoft.DataFactory/factories/<dfName>`. - `OutputFolder` is an optional parameter that specifies the relative path to save the generated ARM template.
+If you would like to stop/start only the updated triggers, use the following command instead (this capability is currently in preview, and the functionality will be transparently merged into the preceding command at general availability):
+```dos
+npm run build-preview export C:\DataFactories\DevDataFactory /subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/testResourceGroup/providers/Microsoft.DataFactory/factories/DevDataFactory ArmTemplateOutput
+```
+- `RootFolder` is a mandatory field that represents where the Data Factory resources are located.
+- `FactoryId` is a mandatory field that represents the Data Factory resource ID in the format `/subscriptions/<subId>/resourceGroups/<rgName>/providers/Microsoft.DataFactory/factories/<dfName>`.
+- `OutputFolder` is an optional parameter that specifies the relative path to save the generated ARM template.
> [!NOTE] > The ARM template generated isn't published to the live version of the factory. Deployment should be done by using a CI/CD pipeline. ++ ### Validate Run `npm run build validate <rootFolder> <factoryId>` to validate all the resources of a given folder. Here's an example:
Follow these steps to get started:
```json { "scripts":{
- "build":"node node_modules/@microsoft/azure-data-factory-utilities/lib/index"
+ "build":"node node_modules/@microsoft/azure-data-factory-utilities/lib/index",
+ "build-preview":"node node_modules/@microsoft/azure-data-factory-utilities/lib/index --preview"
}, "dependencies":{
- "@microsoft/azure-data-factory-utilities":"^0.1.5"
+ "@microsoft/azure-data-factory-utilities":"^1.0.0"
} } ```
Follow these steps to get started:
command: 'custom' workingDir: '$(Build.Repository.LocalPath)/<folder-of-the-package.json-file>' #replace with the package.json folder customCommand: 'run build export $(Build.Repository.LocalPath)/<Root-folder-from-Git-configuration-settings-in-ADF> /subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/<Your-ResourceGroup-Name>/providers/Microsoft.DataFactory/factories/<Your-Factory-Name> "ArmTemplate"'
+ #For using preview that allows you to only stop/ start triggers that are modified, please comment out the above line and uncomment the below line. Make sure the package.json contains the build-preview command.
+ #customCommand: 'run build-preview export $(Build.Repository.LocalPath) /subscriptions/222f1459-6ebd-4896-82ab-652d5f6883cf/resourceGroups/GartnerMQ2021/providers/Microsoft.DataFactory/factories/Dev-GartnerMQ2021-DataFactory "ArmTemplate"'
displayName: 'Validate and Generate ARM template' # Publish the artifact to be used as a source for a release pipeline.
data-factory Continuous Integration Delivery Sample Script https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/continuous-integration-delivery-sample-script.md
The following sample demonstrates how to use a pre- and post-deployment script w
Install the latest Azure PowerShell modules by following instructions in [How to install and configure Azure PowerShell](/powershell/azure/install-Az-ps). >[!WARNING]
->If you do not use latest versions of PowerShell and Data Factory module, you may run into deserialization errors while running the commands.
->
+>Make sure to use **PowerShell Core** in the ADO task to run the script.
## Pre- and post-deployment script The sample scripts to stop/ start triggers and update global parameters during release process (CICD) are located in the [Azure Data Factory Official GitHub page](https://github.com/Azure/Azure-DataFactory/tree/main/SamplesV2/ContinuousIntegrationAndDelivery).
+> [!NOTE]
+> Use the [PrePostDeploymentScript.Ver2.ps1](https://github.com/Azure/Azure-DataFactory/blob/main/SamplesV2/ContinuousIntegrationAndDelivery/PrePostDeploymentScript.Ver2.ps1) if you would like to turn off/on only the triggers that have been modified, instead of turning all triggers off and on during CI/CD.
+ ## Script execution and parameters
When running a pre-deployment script, you will need to specify a variation of th
When running a post-deployment script, you will need to specify a variation of the following parameters in the **Script Arguments** field. `-armTemplate "$(System.DefaultWorkingDirectory)/<your-arm-template-location>" -ResourceGroupName <your-resource-group-name> -DataFactoryName <your-data-factory-name> -predeployment $false -deleteDeployment $true`
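For the pre-deployment run, the **Script Arguments** value is typically the same set of parameters with the two flags reversed, for example (placeholders as above; treat this as a sketch rather than a definitive value):

```powershell
-armTemplate "$(System.DefaultWorkingDirectory)/<your-arm-template-location>" -ResourceGroupName <your-resource-group-name> -DataFactoryName <your-data-factory-name> -predeployment $true -deleteDeployment $false
```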
-
+ > [!NOTE] > The `-deleteDeployment` flag is used to specify the deletion of the ADF deployment entry from the deployment history in ARM.
data-factory Continuous Integration Delivery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/continuous-integration-delivery.md
Previously updated : 09/29/2021 Last updated : 08/15/2022
If you're using Git integration with your data factory and have a CI/CD pipeline
- **Git integration**. Configure only your development data factory with Git integration. Changes to test and production are deployed via CI/CD and don't need Git integration. -- **Pre- and post-deployment script**. Before the Resource Manager deployment step in CI/CD, you need to complete certain tasks, like stopping and restarting triggers and performing cleanup. We recommend that you use PowerShell scripts before and after the deployment task. For more information, see [Update active triggers](continuous-integration-delivery-automate-azure-pipelines.md#updating-active-triggers). The data factory team has [provided a script](continuous-integration-delivery-sample-script.md) to use located at the bottom of this page.
+- **Pre- and post-deployment script**. Before the Resource Manager deployment step in CI/CD, you need to complete certain tasks, like stopping and restarting triggers and performing cleanup. We recommend that you use PowerShell scripts before and after the deployment task. For more information, see [Update active triggers](continuous-integration-delivery-automate-azure-pipelines.md#updating-active-triggers). The data factory team has [provided a script](continuous-integration-delivery-sample-script.md) to use located at the bottom of this page.
+
+ > [!NOTE]
+ > Use the [PrePostDeploymentScript.Ver2.ps1](https://github.com/Azure/Azure-DataFactory/blob/main/SamplesV2/ContinuousIntegrationAndDelivery/PrePostDeploymentScript.Ver2.ps1) if you would like to turn off/on only the triggers that have been modified, instead of turning all triggers off and on during CI/CD.
+
+ >[!WARNING]
+ >Make sure to use **PowerShell Core** in the ADO task to run the script.
+
+ >[!WARNING]
+ >If you do not use latest versions of PowerShell and Data Factory module, you may run into deserialization errors while running the commands.
- **Integration runtimes and sharing**. Integration runtimes don't change often and are similar across all stages in your CI/CD. So Data Factory expects you to have the same name and type of integration runtime across all stages of CI/CD. If you want to share integration runtimes across all stages, consider using a ternary factory just to contain the shared integration runtimes. You can use this shared factory in all of your environments as a linked integration runtime type.
data-factory Copy Data Tool Metadata Driven https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/copy-data-tool-metadata-driven.md
This pipeline will copy objects from one group. The objects belonging to this gr
| UpdateWatermarkColumnValue | StoreProcedure | Write back the new watermark value to control table to be used next time. | ### Known limitations-- Copy data tool does not support metadata driven ingestion for incrementally copying new files only currently. But you can bring your own parameterized pipelines to achieve that. - IR name, database type, file format type cannot be parameterized in ADF. For example, if you want to ingest data from both Oracle Server and SQL Server, you will need two different parameterized pipelines. But the single control table can be shared by two sets of pipelines. - OPENJSON is used in generated SQL scripts by copy data tool. If you are using SQL Server to host control table, it must be SQL Server 2016 (13.x) and later in order to support OPENJSON function.
data-factory Industry Sap Connectors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/industry-sap-connectors.md
Previously updated : 04/20/2022 Last updated : 08/11/2022 # SAP connectors overview
data-factory Industry Sap Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/industry-sap-overview.md
Previously updated : 04/20/2022 Last updated : 08/11/0222 # SAP knowledge center overview
data-factory Industry Sap Templates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/industry-sap-templates.md
Previously updated : 04/20/2022 Last updated : 08/11/2022 # SAP templates overview
data-factory Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/introduction.md
Previously updated : 06/08/2021 Last updated : 08/11/2022 # What is Azure Data Factory?
data-factory Iterative Development Debugging https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/iterative-development-debugging.md
Title: Iterative development and debugging description: Learn how to develop and debug Data Factory and Synapse Analytics pipelines iteratively with the service UI. Previously updated : 09/09/2021 Last updated : 08/12/2022
data-factory Join Azure Ssis Integration Runtime Virtual Network Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/join-azure-ssis-integration-runtime-virtual-network-powershell.md
description: Learn how to join Azure-SSIS integration runtime to a virtual netwo
Previously updated : 02/15/2022 Last updated : 08/11/2022
data-factory Join Azure Ssis Integration Runtime Virtual Network Ui https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/join-azure-ssis-integration-runtime-virtual-network-ui.md
description: Learn how to join Azure-SSIS integration runtime to a virtual netwo
Previously updated : 02/15/2022 Last updated : 08/12/2022
Use Azure portal to configure a classic virtual network before you try to join y
:::image type="content" source="media/join-azure-ssis-integration-runtime-virtual-network/access-control-add.png" alt-text="&quot;Access control&quot; and &quot;Add&quot; buttons":::
- 1. Select **Add role assignment**.
+ 1. Select **Add**, and then **Add role assignment** from the dropdown that appears.
- 1. On the **Add role assignment** page, for **Role**, select **Classic Virtual Machine Contributor**. In the **Select** box, paste **ddbf3205-c6bd-46ae-8127-60eb93363864**, and then select **MicrosoftAzureBatch** from the list of search results.
+ 1. On the **Add role assignment** page, enter **Virtual Machine Contributor** in the search box, select the role, and then select **Next**.
- :::image type="content" source="media/join-azure-ssis-integration-runtime-virtual-network/azure-batch-to-vm-contributor.png" alt-text="Search results on the &quot;Add role assignment&quot; page":::
+ :::image type="content" source="media/join-azure-ssis-integration-runtime-virtual-network/add-virtual-machine-contributor-role.png" alt-text="Screenshot showing search results for the &quot;Virtual Machine Contributor&quot; role.":::
- 1. Select **Save** to save the settings and close the page.
+ 1. On the **Members** page, under **Members**, select **+ Select members**. Then, on the **Select members** pane, search for **Microsoft Azure Batch**, select it in the list, and then choose **Select**.
- :::image type="content" source="media/join-azure-ssis-integration-runtime-virtual-network/save-access-settings.png" alt-text="Save access settings":::
+ :::image type="content" source="media/join-azure-ssis-integration-runtime-virtual-network/add-microsoft-azure-batch-user-to-role-assignment.png" alt-text="Screenshot showing the Microsoft Azure Batch service principal.":::
- 1. Confirm that you see **MicrosoftAzureBatch** in the list of contributors.
+ 1. On the **Role assignments** page, search for **Microsoft Azure Batch** if necessary, and confirm that it appears in the list with the **Virtual Machine Contributor** role.
:::image type="content" source="media/join-azure-ssis-integration-runtime-virtual-network/azure-batch-in-list.png" alt-text="Confirm Azure Batch access":::
-1. Make sure that *Microsoft.Batch* is a registered resource provider in Azure subscription that has the virtual network for your Azure-SSIS IR to join. For detailed instructions, see the [Register Azure Batch as a resource provider](azure-ssis-integration-runtime-virtual-network-configuration.md#registerbatch) section.
+1. Make sure that *Microsoft.Batch* is a registered resource provider in the Azure subscription that has the virtual network for your Azure-SSIS IR to join. For detailed instructions, see the [Register Azure Batch as a resource provider](azure-ssis-integration-runtime-virtual-network-configuration.md#registerbatch) section.
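If you'd rather script these prerequisites, the following Azure PowerShell sketch grants the Azure Batch service principal access to a classic virtual network and registers the *Microsoft.Batch* resource provider. The subscription, resource group, and virtual network names are placeholders, and the role name is an assumption based on the portal steps above (some classic virtual network scenarios use **Classic Virtual Machine Contributor** instead):

```powershell
# Sketch: grant the Azure Batch service principal the required role on the virtual network
# and register the Microsoft.Batch resource provider. Angle-bracket values are placeholders.
$batchApplicationId = 'ddbf3205-c6bd-46ae-8127-60eb93363864'   # Microsoft Azure Batch
$batchServicePrincipal = Get-AzADServicePrincipal -ApplicationId $batchApplicationId

New-AzRoleAssignment -ObjectId $batchServicePrincipal.Id `
    -RoleDefinitionName 'Virtual Machine Contributor' `
    -Scope '/subscriptions/<subscription-id>/resourceGroups/<resource-group-name>/providers/Microsoft.ClassicNetwork/virtualNetworks/<virtual-network-name>'

Register-AzResourceProvider -ProviderNamespace 'Microsoft.Batch'
```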
## Join Azure-SSIS IR to the virtual network
After you've configured an Azure Resource Manager/classic virtual network, you c
1. Start Microsoft Edge or Google Chrome. Currently, only these web browsers support ADF UI.
-1. In [Azure portal](https://portal.azure.com), on the left-hand-side menu, select **Data factories**. If you don't see **Data factories** on the menu, select **More services**, and then in the **INTELLIGENCE + ANALYTICS** section, select **Data factories**.
+1. In the [Azure portal](https://portal.azure.com), under the **Azure Services** section, select **More Services** to see a list of all Azure services. In the **Filter services** search box, type **Data Factories**, and then choose **Data Factories** in the list of services that appear.
+
+ :::image type="content" source="media/join-azure-ssis-integration-runtime-virtual-network/portal-find-data-factories.png" alt-text="Screenshot of the All Services page on the Azure portal filtered for Data Factories.":::
:::image type="content" source="media/join-azure-ssis-integration-runtime-virtual-network/data-factories-list.png" alt-text="List of data factories":::
-1. Select your ADF with Azure-SSIS IR in the list. You see the home page for your ADF. Select the **Author & Monitor** tile. You see ADF UI on a separate tab.
+1. Select your data factory with the Azure-SSIS IR in the list. You see the home page for your data factory. Select the **Open Azure Data Factory Studio** tile. Azure Data Factory Studio will open on a separate tab.
:::image type="content" source="media/join-azure-ssis-integration-runtime-virtual-network/data-factory-home-page.png" alt-text="Data factory home page":::
-1. In ADF UI, switch to the **Edit** tab, select **Connections**, and switch to the **Integration Runtimes** tab.
-
- :::image type="content" source="media/join-azure-ssis-integration-runtime-virtual-network/integration-runtimes-tab.png" alt-text="&quot;Integration runtimes&quot; tab":::
-
-1. If your Azure-SSIS IR is running, in the **Integration Runtimes** list, in the **Actions** column, select the **Stop** button for your Azure-SSIS IR. You can't edit your Azure-SSIS IR until you stop it.
+1. In Azure Data Factory Studio, select the **Manage** tab on the far left, and then switch to the **Integration Runtimes** tab. If your Azure-SSIS IR is running, hover over it in the list to find and select the **Stop** button, as shown below. You can't edit your Azure-SSIS IR until you stop it.
:::image type="content" source="media/join-azure-ssis-integration-runtime-virtual-network/stop-ir-button.png" alt-text="Stop the IR":::
-1. In the **Integration Runtimes** list, in the **Actions** column, select the **Edit** button for your Azure-SSIS IR.
-
- :::image type="content" source="media/join-azure-ssis-integration-runtime-virtual-network/integration-runtime-edit.png" alt-text="Edit the integration runtime":::
+1. After your Azure-SSIS IR is stopped, select it in the **Integration Runtimes** list to edit it.
-1. On the **Integration runtime setup** pane, advance through the **General settings** and **Deployment settings** pages by selecting the **Next** button.
+1. On the **Edit integration runtime** pane, advance through the **General settings** and **Deployment settings** pages by selecting the **Continue** button.
1. On the **Advanced settings** page, complete the following steps.
data-factory Join Azure Ssis Integration Runtime Virtual Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/join-azure-ssis-integration-runtime-virtual-network.md
description: Learn how to join Azure-SSIS integration runtime to a virtual netwo
Previously updated : 02/15/2022 Last updated : 08/12/2022
When joining your Azure-SSIS IR to a virtual network, remember these important p
- If a classic virtual network is already connected to your on-premises network in a different location from your Azure-SSIS IR, you can create an [Azure Resource Manager virtual network](../virtual-network/quick-create-portal.md#create-a-virtual-network) for your Azure-SSIS IR to join. Then configure a [classic-to-Azure Resource Manager virtual network](../vpn-gateway/vpn-gateway-connect-different-deployment-models-portal.md) connection. -- If an Azure Resource Manager virtual network is already connected to your on-premises network in a different location from your Azure-SSIS IR, you can first create an [Azure Resource Manager virtual network](../virtual-network/quick-create-portal.md#create-a-virtual-network) for your Azure-SSIS IR to join. Then configure an [Azure Resource Manager-to-Azure Resource Manager virtual network](../vpn-gateway/vpn-gateway-howto-vnet-vnet-resource-manager-portal.md) connection.
+- If an Azure Resource Manager network is already connected to your on-premises network in a different location from your Azure-SSIS IR, you can first create an [Azure Resource Manager virtual network](../virtual-network/quick-create-portal.md#create-a-virtual-network) for your Azure-SSIS IR to join. Then configure an [Azure Resource Manager-to-Azure Resource Manager virtual network](../vpn-gateway/vpn-gateway-howto-vnet-vnet-resource-manager-portal.md) connection.
-## Hosting SSISDB in Azure SQL Database server or Managed Instance
+## Hosting SSISDB in Azure SQL Database server or Managed instance
If you host SSISDB in Azure SQL Database server configured with a virtual network service endpoint, make sure that you join your Azure-SSIS IR to the same virtual network and subnet.
data-factory Lab Data Flow Data Share https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/lab-data-flow-data-share.md
Previously updated : 06/04/2021 Last updated : 08/12/2022 # Data integration using Azure Data Factory and Azure Data Share
In Azure Data Factory linked services define the connection information to exter
1. Using the search bar at the top of the page, search for 'Data Factories' :::image type="content" source="media/lab-data-flow-data-share/portal1.png" alt-text="Portal 1":::
-1. Click on your data factory resource to open up its resource blade.
+1. Select your data factory resource to open up its resources on the left-hand pane.
:::image type="content" source="media/lab-data-flow-data-share/portal2.png" alt-text="Portal 2":::
-1. Click on **Author and Monitor** to open up the ADF UX. The ADF UX can also be accessed at adf.azure.com.
+1. Select **Open Azure Data Factory Studio**. The Data Factory Studio can also be accessed directly at adf.azure.com.
- :::image type="content" source="media/lab-data-flow-data-share/portal3.png" alt-text="Portal 3":::
-1. You'll be redirected to the homepage of the ADF UX. This page contains quick-starts, instructional videos, and links to tutorials to learn data factory concepts. To start authoring, click on the pencil icon in left side-bar.
+ :::image type="content" source="media/doc-common-process/data-factory-home-page.png" alt-text="Screenshot of the Azure Data Factory home page in the Azure portal.":::
+
+1. You'll be redirected to the homepage of the ADF UX. This page contains quick-starts, instructional videos, and links to tutorials to learn data factory concepts. To start authoring, select the pencil icon in the left side-bar.
:::image type="content" source="./media/doc-common-process/get-started-page-author-button.png" alt-text="Portal configure":::
In Azure Data Factory linked services define the connection information to exter
1. To create a linked service, select **Manage** hub in the left side-bar, on the **Connections** pane, select **Linked services** and then select **New** to add a new linked service. :::image type="content" source="media/lab-data-flow-data-share/configure2.png" alt-text="Portal configure 2":::
-1. The first linked service you'll configure is an Azure SQL DB. You can use the search bar to filter the data store list. Click on the **Azure SQL Database** tile and click continue.
+1. The first linked service you'll configure is an Azure SQL DB. You can use the search bar to filter the data store list. Select the **Azure SQL Database** tile and select continue.
:::image type="content" source="media/lab-data-flow-data-share/configure-4.png" alt-text="Portal configure 4":::
-1. In the SQL DB configuration pane, enter 'SQLDB' as your linked service name. Enter in your credentials to allow data factory to connect to your database. If you're using SQL authentication, enter in the server name, the database, your user name and password. You can verify your connection information is correct by clicking **Test connection**. Click **Create** when finished.
+1. In the SQL DB configuration pane, enter 'SQLDB' as your linked service name. Enter your credentials to allow data factory to connect to your database. If you're using SQL authentication, enter the server name, the database, your user name and password. You can verify your connection information is correct by selecting **Test connection**. Select **Create** when finished.
:::image type="content" source="media/lab-data-flow-data-share/configure5.png" alt-text="Portal configure 5"::: ### Create an Azure Synapse Analytics linked service
-1. Repeat the same process to add an Azure Synapse Analytics linked service. In the connections tab, click **New**. Select the **Azure Synapse Analytics** tile and click continue.
+1. Repeat the same process to add an Azure Synapse Analytics linked service. In the connections tab, select **New**. Select the **Azure Synapse Analytics** tile and select continue.
:::image type="content" source="media/lab-data-flow-data-share/configure-6.png" alt-text="Portal configure 6":::
-1. In the linked service configuration pane, enter 'SQLDW' as your linked service name. Enter in your credentials to allow data factory to connect to your database. If you're using SQL authentication, enter in the server name, the database, your user name and password. You can verify your connection information is correct by clicking **Test connection**. Click **Create** when finished.
+1. In the linked service configuration pane, enter 'SQLDW' as your linked service name. Enter your credentials to allow data factory to connect to your database. If you're using SQL authentication, enter the server name, the database, your user name and password. You can verify your connection information is correct by selecting **Test connection**. Select **Create** when finished.
:::image type="content" source="media/lab-data-flow-data-share/configure-7.png" alt-text="Portal configure 7"::: ### Create an Azure Data Lake Storage Gen2 linked service
-1. The last linked service needed for this lab is an Azure Data Lake Storage gen2. In the connections tab, click **New**. Select the **Azure Data Lake Storage Gen2** tile and click continue.
+1. The last linked service needed for this lab is an Azure Data Lake Storage Gen2. In the connections tab, select **New**. Select the **Azure Data Lake Storage Gen2** tile and select continue.
:::image type="content" source="media/lab-data-flow-data-share/configure8.png" alt-text="Portal configure 8":::
-1. In the linked service configuration pane, enter 'ADLSGen2' as your linked service name. If you're using Account key authentication, select your ADLS Gen2 storage account from the **Storage account name** dropdown. You can verify your connection information is correct by clicking **Test connection**. Click **Create** when finished.
+1. In the linked service configuration pane, enter 'ADLSGen2' as your linked service name. If you're using Account key authentication, select your ADLS Gen2 storage account from the **Storage account name** dropdown. You can verify your connection information is correct by selecting **Test connection**. Select **Create** when finished.
:::image type="content" source="media/lab-data-flow-data-share/configure9.png" alt-text="Portal configure 9":::
In Azure Data Factory linked services define the connection information to exter
In section *Transform data using mapping data flow*, you'll be building mapping data flows. A best practice before building mapping data flows is to turn on debug mode, which allows you to test transformation logic in seconds on an active spark cluster.
-To turn on debug, click the **Data flow debug** slider in the top bar of data flow canvas or pipeline canvas when you have **Data flow** activities. Click **OK** when the confirmation dialog is shown. The cluster will start up in about 5 to 7 minutes. Continue on to *Ingest data from Azure SQL DB into ADLS Gen2 using the copy activity* while it is initializing.
+To turn on debug, select the **Data flow debug** slider in the top bar of the data flow canvas or pipeline canvas when you have **Data flow** activities. Select **OK** when the confirmation dialog is shown. The cluster will start up in about 5 to 7 minutes. Continue on to *Ingest data from Azure SQL DB into ADLS Gen2 using the copy activity* while it is initializing.
:::image type="content" source="media/lab-data-flow-data-share/configure10.png" alt-text="Portal configure 10":::
In Azure Data Factory, a pipeline is a logical grouping of activities that toget
### Create a pipeline with a copy activity
-1. In the factory resources pane, click on the plus icon to open the new resource menu. Select **Pipeline**.
+1. In the factory resources pane, select the plus icon to open the new resource menu. Select **Pipeline**.
:::image type="content" source="media/lab-data-flow-data-share/copy1.png" alt-text="Portal copy 1"::: 1. In the **General** tab of the pipeline canvas, name your pipeline something descriptive such as 'IngestAndTransformTaxiData'.
In Azure Data Factory, a pipeline is a logical grouping of activities that toget
### Configure Azure SQL DB source dataset
-1. Click on the **Source** tab of the copy activity. To create a new dataset, click **New**. Your source will be the table 'dbo.TripData' located in the linked service 'SQLDB' configured earlier.
+1. Select the **Source** tab of the copy activity. To create a new dataset, select **New**. Your source will be the table 'dbo.TripData' located in the linked service 'SQLDB' configured earlier.
:::image type="content" source="media/lab-data-flow-data-share/copy4.png" alt-text="Portal copy 4":::
-1. Search for **Azure SQL Database** and click continue.
+1. Search for **Azure SQL Database** and select continue.
:::image type="content" source="media/lab-data-flow-data-share/copy-5.png" alt-text="Portal copy 5":::
-1. Call your dataset 'TripData'. Select 'SQLDB' as your linked service. Select table name 'dbo.TripData' from the table name dropdown. Import the schema **From connection/store**. Click OK when finished.
+1. Call your dataset 'TripData'. Select 'SQLDB' as your linked service. Select table name 'dbo.TripData' from the table name dropdown. Import the schema **From connection/store**. Select OK when finished.
:::image type="content" source="media/lab-data-flow-data-share/copy6.png" alt-text="Portal copy 6":::
You have successfully created your source dataset. Make sure in the source setti
### Configure ADLS Gen2 sink dataset
-1. Click on the **Sink** tab of the copy activity. To create a new dataset, click **New**.
+1. Select the **Sink** tab of the copy activity. To create a new dataset, select **New**.
:::image type="content" source="media/lab-data-flow-data-share/copy7.png" alt-text="Portal copy 7":::
-1. Search for **Azure Data Lake Storage Gen2** and click continue.
+1. Search for **Azure Data Lake Storage Gen2** and select continue.
:::image type="content" source="media/lab-data-flow-data-share/copy8.png" alt-text="Portal copy 8":::
-1. In the select format pane, select **DelimitedText** as you're writing to a csv file. Click continue.
+1. In the select format pane, select **DelimitedText** as you're writing to a csv file. Select continue.
:::image type="content" source="media/lab-data-flow-data-share/copy9.png" alt-text="Portal copy 9":::
-1. Name your sink dataset 'TripDataCSV'. Select 'ADLSGen2' as your linked service. Enter where you want to write your csv file. For example, you can write your data to file `trip-data.csv` in container `staging-container`. Set **First row as header** to true as you want your output data to have headers. Since no file exists in the destination yet, set **Import schema** to **None**. Click OK when finished.
+1. Name your sink dataset 'TripDataCSV'. Select 'ADLSGen2' as your linked service. Enter where you want to write your csv file. For example, you can write your data to file `trip-data.csv` in container `staging-container`. Set **First row as header** to true as you want your output data to have headers. Since no file exists in the destination yet, set **Import schema** to **None**. Select OK when finished.
:::image type="content" source="media/lab-data-flow-data-share/copy10.png" alt-text="Portal copy 10"::: ### Test the copy activity with a pipeline debug run
-1. To verify your copy activity is working correctly, click **Debug** at the top of the pipeline canvas to execute a debug run. A debug run allows you to test your pipeline either end-to-end or until a breakpoint before publishing it to the data factory service.
+1. To verify your copy activity is working correctly, select **Debug** at the top of the pipeline canvas to execute a debug run. A debug run allows you to test your pipeline either end-to-end or until a breakpoint before publishing it to the data factory service.
:::image type="content" source="media/lab-data-flow-data-share/copy11.png" alt-text="Portal copy 11":::
-1. To monitor your debug run, go to the **Output** tab of the pipeline canvas. The monitoring screen will autorefresh every 20 seconds or when you manually click the refresh button. The copy activity has a special monitoring view, which can be access by clicking the eye-glasses icon in the **Actions** column.
+1. To monitor your debug run, go to the **Output** tab of the pipeline canvas. The monitoring screen will autorefresh every 20 seconds or when you manually select the refresh button. The copy activity has a special monitoring view, which can be accessed by selecting the eyeglasses icon in the **Actions** column.
:::image type="content" source="media/lab-data-flow-data-share/copy12.png" alt-text="Portal copy 12"::: 1. The copy monitoring view gives the activity's execution details and performance characteristics. You can see information such as data read/written, rows read/written, files read/written, and throughput. If you have configured everything correctly, you should see 49,999 rows written into one file in your ADLS sink.
The data flow created in this step inner joins the 'TripDataCSV' dataset created
1. In the activities pane of the pipeline canvas, open the **Move and Transform** accordion and drag the **Data flow** activity onto the canvas. :::image type="content" source="media/lab-data-flow-data-share/dataflow1.png" alt-text="Portal data flow 1":::
-1. In the side pane that opens, select **Create new data flow** and choose **Mapping data flow**. Click **OK**.
+1. In the side pane that opens, select **Create new data flow** and choose **Mapping data flow**. Select **OK**.
:::image type="content" source="media/lab-data-flow-data-share/dataflow2.png" alt-text="Portal data flow 2"::: 1. You'll be directed to the data flow canvas where you'll be building your transformation logic. In the general tab, name your data flow 'JoinAndAggregateData'.
The data flow created in this step inner joins the 'TripDataCSV' dataset created
### Configure your trip data csv source
-1. The first thing you want to do is configure your two source transformations. The first source will point to the 'TripDataCSV' DelimitedText dataset. To add a source transformation, click on the **Add Source** box in the canvas.
+1. The first thing you want to do is configure your two source transformations. The first source will point to the 'TripDataCSV' DelimitedText dataset. To add a source transformation, select the **Add Source** box in the canvas.
:::image type="content" source="media/lab-data-flow-data-share/dataflow4.png" alt-text="Portal data flow 4":::
-1. Name your source 'TripDataCSV' and select the 'TripDataCSV' dataset from the source drop-down. If you remember, you didn't import a schema initially when creating this dataset as there was no data there. Since `trip-data.csv` exists now, click **Edit** to go to the dataset settings tab.
+1. Name your source 'TripDataCSV' and select the 'TripDataCSV' dataset from the source drop-down. If you remember, you didn't import a schema initially when creating this dataset as there was no data there. Since `trip-data.csv` exists now, select **Edit** to go to the dataset settings tab.
:::image type="content" source="media/lab-data-flow-data-share/dataflow5.png" alt-text="Portal data flow 5":::
-1. Go to tab **Schema** and click **Import schema**. Select **From connection/store** to import directly from the file store. 14 columns of type string should appear.
+1. Go to the **Schema** tab and select **Import schema**. Select **From connection/store** to import directly from the file store. 14 columns of type string should appear.
:::image type="content" source="media/lab-data-flow-data-share/dataflow6.png" alt-text="Portal data flow 6":::
-1. Go back to data flow 'JoinAndAggregateData'. If your debug cluster has started (indicated by a green circle next to the debug slider), you can get a snapshot of the data in the **Data Preview** tab. Click **Refresh** to fetch a data preview.
+1. Go back to data flow 'JoinAndAggregateData'. If your debug cluster has started (indicated by a green circle next to the debug slider), you can get a snapshot of the data in the **Data Preview** tab. Select **Refresh** to fetch a data preview.
:::image type="content" source="media/lab-data-flow-data-share/dataflow7.png" alt-text="Portal data flow 7":::
The data flow created in this step inner joins the 'TripDataCSV' dataset created
### Configure your trip fares SQL DB source
-1. The second source you're adding will point at the SQL DB table 'dbo.TripFares'. Under your 'TripDataCSV' source, there will be another **Add Source** box. Click it to add a new source transformation.
+1. The second source you're adding will point at the SQL DB table 'dbo.TripFares'. Under your 'TripDataCSV' source, there will be another **Add Source** box. Select it to add a new source transformation.
:::image type="content" source="media/lab-data-flow-data-share/dataflow8.png" alt-text="Portal data flow 8":::
-1. Name this source 'TripFaresSQL'. Click **New** next to the source dataset field to create a new SQL DB dataset.
+1. Name this source 'TripFaresSQL'. Select **New** next to the source dataset field to create a new SQL DB dataset.
:::image type="content" source="media/lab-data-flow-data-share/dataflow9.png" alt-text="Portal data flow 9":::
-1. Select the **Azure SQL Database** tile and click continue. *Note: You may notice many of the connectors in data factory are not supported in mapping data flow. To transform data from one of these sources, ingest it into a supported source using the copy activity*.
+1. Select the **Azure SQL Database** tile and select continue. *Note: You may notice many of the connectors in data factory are not supported in mapping data flow. To transform data from one of these sources, ingest it into a supported source using the copy activity*.
:::image type="content" source="media/lab-data-flow-data-share/dataflow-10.png" alt-text="Portal data flow 10":::
-1. Call your dataset 'TripFares'. Select 'SQLDB' as your linked service. Select table name 'dbo.TripFares' from the table name dropdown. Import the schema **From connection/store**. Click OK when finished.
+1. Call your dataset 'TripFares'. Select 'SQLDB' as your linked service. Select table name 'dbo.TripFares' from the table name dropdown. Import the schema **From connection/store**. Select OK when finished.
:::image type="content" source="media/lab-data-flow-data-share/dataflow11.png" alt-text="Portal data flow 11"::: 1. To verify your data, fetch a data preview in the **Data Preview** tab.
The data flow created in this step inner joins the 'TripDataCSV' dataset created
### Inner join TripDataCSV and TripFaresSQL
-1. To add a new transformation, click the plus icon in the bottom-right corner of 'TripDataCSV'. Under **Multiple inputs/outputs**, select **Join**.
+1. To add a new transformation, select the plus icon in the bottom-right corner of 'TripDataCSV'. Under **Multiple inputs/outputs**, select **Join**.
:::image type="content" source="media/lab-data-flow-data-share/join1.png" alt-text="Portal join 1"::: 1. Name your join transformation 'InnerJoinWithTripFares'. Select 'TripFaresSQL' from the right stream dropdown. Select **Inner** as the join type. To learn more about the different join types in mapping data flow, see [join types](./data-flow-join.md#join-types).
- Select which columns you wish to match on from each stream via the **Join conditions** dropdown. To add an additional join condition, click on the plus icon next to an existing condition. By default, all join conditions are combined with an AND operator, which means all conditions must be met for a match. In this lab, we want to match on columns `medallion`, `hack_license`, `vendor_id`, and `pickup_datetime`
+ Select which columns you wish to match on from each stream via the **Join conditions** dropdown. To add an additional join condition, select the plus icon next to an existing condition. By default, all join conditions are combined with an AND operator, which means all conditions must be met for a match. In this lab, we want to match on the columns `medallion`, `hack_license`, `vendor_id`, and `pickup_datetime` (the sketch after the next step illustrates this AND-combined join).
:::image type="content" source="media/lab-data-flow-data-share/join2.png" alt-text="Portal join 2"::: 1. Verify you successfully joined 25 columns together with a data preview.
The data flow created in this step inner joins the 'TripDataCSV' dataset created
First, you'll create the average fare expression. In the text box labeled **Add or select a column**, enter 'average_fare'. :::image type="content" source="media/lab-data-flow-data-share/agg3.png" alt-text="Portal agg 3":::
-1. To enter an aggregation expression, click the blue box labeled **Enter expression**. This will open up the data flow expression builder, a tool used to visually create data flow expressions using input schema, built-in functions and operations, and user-defined parameters. For more information on the capabilities of the expression builder, see the [expression builder documentation](./concepts-data-flow-expression-builder.md).
+1. To enter an aggregation expression, select the blue box labeled **Enter expression**. This will open up the data flow expression builder, a tool used to visually create data flow expressions using input schema, built-in functions and operations, and user-defined parameters. For more information on the capabilities of the expression builder, see the [expression builder documentation](./concepts-data-flow-expression-builder.md).
- To get the average fare, use the `avg()` aggregation function to aggregate the `total_amount` column cast to an integer with `toInteger()`. In the data flow expression language, this is defined as `avg(toInteger(total_amount))`. Click **Save and finish** when you're done.
+ To get the average fare, use the `avg()` aggregation function to aggregate the `total_amount` column cast to an integer with `toInteger()`. In the data flow expression language, this is defined as `avg(toInteger(total_amount))`. Select **Save and finish** when you're done.
:::image type="content" source="media/lab-data-flow-data-share/agg4.png" alt-text="Portal agg 4":::
-1. To add an additional aggregation expression, click on the plus icon next to `average_fare`. Select **Add column**.
+1. To add an additional aggregation expression, select the plus icon next to `average_fare`. Select **Add column**.
:::image type="content" source="media/lab-data-flow-data-share/agg5.png" alt-text="Portal agg 5"::: 1. In the text box labeled **Add or select a column**, enter 'total_trip_distance'. As in the last step, open the expression builder to enter in the expression.
- To get the total trip distance, use the `sum()` aggregation function to aggregate the `trip_distance` column cast to an integer with `toInteger()`. In the data flow expression language, this is defined as `sum(toInteger(trip_distance))`. Click **Save and finish** when you're done.
+ To get the total trip distance, use the `sum()` aggregation function to aggregate the `trip_distance` column cast to an integer with `toInteger()`. In the data flow expression language, this is defined as `sum(toInteger(trip_distance))`. Select **Save and finish** when you're done.
:::image type="content" source="media/lab-data-flow-data-share/agg6.png" alt-text="Portal agg 6"::: 1. Test your transformation logic in the **Data Preview** tab. As you can see, there are significantly fewer rows and columns than previously. Only the three groups by and aggregation columns defined in this transformation continue downstream. As there are only five payment type groups in the sample, only five rows are outputted.
The data flow created in this step inner joins the 'TripDataCSV' dataset created
1. Now that we have finished our transformation logic, we are ready to sink our data in an Azure Synapse Analytics table. Add a sink transformation under the **Destination** section. :::image type="content" source="media/lab-data-flow-data-share/sink1.png" alt-text="Portal sink 1":::
-1. Name your sink 'SQLDWSink'. Click **New** next to the sink dataset field to create a new Azure Synapse Analytics dataset.
+1. Name your sink 'SQLDWSink'. Select **New** next to the sink dataset field to create a new Azure Synapse Analytics dataset.
:::image type="content" source="media/lab-data-flow-data-share/sink2.png" alt-text="Portal sink 2":::
-1. Select the **Azure Synapse Analytics** tile and click continue.
+1. Select the **Azure Synapse Analytics** tile and select continue.
:::image type="content" source="media/lab-data-flow-data-share/sink-3.png" alt-text="Portal sink 3":::
-1. Call your dataset 'AggregatedTaxiData'. Select 'SQLDW' as your linked service. Select **Create new table** and name the new table dbo.AggregateTaxiData. Click OK when finished
+1. Call your dataset 'AggregatedTaxiData'. Select 'SQLDW' as your linked service. Select **Create new table** and name the new table dbo.AggregateTaxiData. Select OK when finished
:::image type="content" source="media/lab-data-flow-data-share/sink4.png" alt-text="Portal sink 4"::: 1. Go to the **Settings** tab of the sink. Since we are creating a new table, we need to select **Recreate table** under table action. Unselect **Enable staging**, which toggles whether we are inserting row-by-row or in batch.
You have successfully created your data flow. Now it's time to run it in a pipel
1. Go back to the tab for the **IngestAndTransformData** pipeline. Notice the green box on the 'IngestIntoADLS' copy activity. Drag it over to the 'JoinAndAggregateData' data flow activity. This creates an 'on success', which causes the data flow activity to only run if the copy is successful. :::image type="content" source="media/lab-data-flow-data-share/pipeline1.png" alt-text="Portal pipeline 1":::
-1. As we did for the copy activity, click **Debug** to execute a debug run. For debug runs, the data flow activity will use the active debug cluster instead of spinning up a new cluster. This pipeline will take a little over a minute to execute.
+1. As we did for the copy activity, select **Debug** to execute a debug run. For debug runs, the data flow activity will use the active debug cluster instead of spinning up a new cluster. This pipeline will take a little over a minute to execute.
:::image type="content" source="media/lab-data-flow-data-share/pipeline2.png" alt-text="Portal pipeline 2"::: 1. Like the copy activity, the data flow has a special monitoring view accessed by the eyeglasses icon on completion of the activity.
You have successfully created your data flow. Now it's time to run it in a pipel
1. In the monitoring view, you can see a simplified data flow graph along with the execution times and rows at each execution stage. If done correctly, you should have aggregated 49,999 rows into five rows in this activity. :::image type="content" source="media/lab-data-flow-data-share/pipeline4.png" alt-text="Portal pipeline 4":::
-1. You can click a transformation to get additional details on its execution such as partitioning information and new/updated/dropped columns.
+1. You can select a transformation to get additional details on its execution such as partitioning information and new/updated/dropped columns.
:::image type="content" source="media/lab-data-flow-data-share/pipeline5.png" alt-text="Portal pipeline 5":::
Once you have created a data share, you'll then switch hats and become the *data
> [!IMPORTANT] > Before running the script, you must set yourself as the Active Directory Admin for the SQL Server.
-1. Open a new tab and navigate to the Azure portal. Copy the script provided to create a user in the database that you want to share data from. Do this by logging into the EDW database using Query Explorer (preview) using AAD authentication.
+1. Open a new tab and navigate to the Azure portal. Copy the script provided to create a user in the database that you want to share data from. Do this by logging in to the EDW database with Query Explorer (preview) using Azure AD authentication.
You'll need to modify the script so that the user created is contained within brackets. For example:
- create user [dataprovider-xxxx] from external login;
+ create user [dataprovider-xxxx] from external login;
exec sp_addrolemember db_owner, [dataprovider-xxxx]; 1. Switch back to Azure Data Share where you were adding datasets to your data share.
Once you have created a data share, you'll then switch hats and become the *data
1. Select the data share that you created, titled **DataProvider**. You can navigate to it by selecting **Sent Shares** in **Data Share**.
-1. Click on Snapshot schedule. You can disable the snapshot schedule if you choose.
+1. Select **Snapshot schedule**. You can disable the snapshot schedule if you choose.
1. Next, select the **Datasets** tab. You can add additional datasets to this data share after it has been created.
Once you have created a data share, you'll then switch hats and become the *data
Now that we have reviewed our data share, we are ready to switch context and wear our data consumer hat.
-You should now have an Azure Data Share invitation in your inbox from Microsoft Azure. Launch Outlook Web Access (outlook.com) and log in using the credentials supplied for your Azure subscription.
+You should now have an Azure Data Share invitation in your inbox from Microsoft Azure. Launch Outlook Web Access (outlook.com) and log on using the credentials supplied for your Azure subscription.
-In the e-mail that you should have received, click on "View invitation >". At this point, you're going to be simulating the data consumer experience when accepting a data providers invitation to their data share.
+In the e-mail that you should have received, select "View invitation >". At this point, you're going to be simulating the data consumer experience when accepting a data provider's invitation to their data share.
:::image type="content" source="media/lab-data-flow-data-share/email-invite.png" alt-text="Email invitation"::: You may be prompted to select a subscription. Make sure you select the subscription you have been working in for this lab.
-1. Click on the invitation titled *DataProvider*.
+1. Select the invitation titled *DataProvider*.
1. In this Invitation screen, you'll notice various details about the data share that you configured earlier as a data provider. Review the details and accept the terms of use if provided.
You may be prompted to select a subscription. Make sure you select the subscript
1. Select **Query editor (preview)**
-1. Use AAD authentication to log in to Query editor.
+1. Use Azure AD authentication to log on to Query editor.
1. Run the query provided in your data share (copied to clipboard in step 14).
data-factory Load Azure Data Lake Storage Gen2 From Gen1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/load-azure-data-lake-storage-gen2-from-gen1.md
This article shows you how to use the Data Factory copy data tool to copy data f
## Create a data factory
-1. On the left menu, select **Create a resource** > **Data + Analytics** > **Data Factory**.
-
- :::image type="content" source="./media/quickstart-create-data-factory-portal/new-azure-data-factory-menu.png" alt-text="Screenshot showing the Data Factory selection in the New pane.":::
+1. If you have not created your data factory yet, follow the steps in [Quickstart: Create a data factory by using the Azure portal and Azure Data Factory Studio](quickstart-create-data-factory-portal.md) to create one. After creating it, browse to the data factory in the Azure portal.
-2. On the **New data factory** page, provide values for the fields that are shown in the following image:
-
- :::image type="content" source="./media/load-azure-data-lake-storage-gen2-from-gen1/new-azure-data-factory.png" alt-text="Screenshot showing the New Data factory page.":::
-
- * **Name**: Enter a globally unique name for your Azure data factory. If you receive the error "Data factory name \"LoadADLSDemo\" is not available," enter a different name for the data factory. For example, use the name _**yourname**_**ADFTutorialDataFactory**. Create the data factory again. For the naming rules for Data Factory artifacts, see [Data Factory naming rules](naming-rules.md).
- * **Subscription**: Select your Azure subscription in which to create the data factory.
- * **Resource Group**: Select an existing resource group from the drop-down list. You also can select the **Create new** option and enter the name of a resource group. To learn about resource groups, see [Use resource groups to manage your Azure resources](../azure-resource-manager/management/overview.md).
- * **Version**: Select **V2**.
- * **Location**: Select the location for the data factory. Only supported locations are displayed in the drop-down list. The data stores that are used by the data factory can be in other locations and regions.
-
-3. Select **Create**.
-4. After creation is finished, go to your data factory. You see the **Data Factory** home page as shown in the following image:
-
:::image type="content" source="./media/doc-common-process/data-factory-home-page.png" alt-text="Home page for the Azure Data Factory, with the Open Azure Data Factory Studio tile.":::
-5. Select **Open** on the **Open Azure Data Factory Studio** tile to launch the Data Integration application in a separate tab.
+1. Select **Open** on the **Open Azure Data Factory Studio** tile to launch the Data Integration application in a separate tab.
## Load data into Azure Data Lake Storage Gen2
As a best practice, conduct a performance POC with a representative sample datas
3. If you have maximized the performance of a single copy activity, but have not yet achieved the throughput upper limits of your environment, you can run multiple copy activities in parallel.
-When you see significant number of throttling errors from [copy activity monitoring](copy-activity-monitoring.md#monitor-visually), it indicates you have reached the capacity limit of your storage account. ADF will retry automatically to overcome each throttling error to make sure there will not be any data lost, but too many retries impact your copy throughput as well. In such case, you are encouraged to reduce the number of copy activities running cocurrently to avoid significant amounts of throttling errors. If you have been using single copy activity to copy data, then you are encouraged to reduce the DIU.
+When you see a significant number of throttling errors from [copy activity monitoring](copy-activity-monitoring.md#monitor-visually), it indicates that you have reached the capacity limit of your storage account. ADF retries automatically to overcome each throttling error so that no data is lost, but too many retries can degrade your copy throughput as well. In such a case, reduce the number of copy activities running concurrently to avoid significant amounts of throttling errors. If you have been using a single copy activity to copy data, reduce the data integration units (DIUs) instead.
### Delta data migration
data-factory Load Azure Data Lake Storage Gen2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/load-azure-data-lake-storage-gen2.md
Previously updated : 07/22/2021 Last updated : 08/12/2022 # Load data into Azure Data Lake Storage Gen2 with Azure Data Factory
This article shows you how to use the Data Factory Copy Data tool to load data f
## Create a data factory
-1. On the left menu, select **Create a resource** > **Integration** > **Data Factory**:
-
- :::image type="content" source="./media/doc-common-process/new-azure-data-factory-menu.png" alt-text="Data Factory selection in the &quot;New&quot; pane":::
+1. If you have not created your data factory yet, follow the steps in [Quickstart: Create a data factory by using the Azure portal and Azure Data Factory Studio](quickstart-create-data-factory-portal.md) to create one. After creating it, browse to the data factory in the Azure portal.
-2. In the **New data factory** page, provide values for following fields:
-
- * **Name**: Enter a globally unique name for your Azure data factory. If you receive the error "Data factory name *YourDataFactoryName* is not available", enter a different name for the data factory. For example, you could use the name _**yourname**_**ADFTutorialDataFactory**. Try creating the data factory again. For the naming rules for Data Factory artifacts, see [Data Factory naming rules](naming-rules.md).
- * **Subscription**: Select your Azure subscription in which to create the data factory.
- * **Resource Group**: Select an existing resource group from the drop-down list, or select the **Create new** option and enter the name of a resource group. To learn about resource groups, see [Using resource groups to manage your Azure resources](../azure-resource-manager/management/overview.md).
- * **Version**: Select **V2**.
- * **Location**: Select the location for the data factory. Only supported locations are displayed in the drop-down list. The data stores that are used by data factory can be in other locations and regions.
-
-3. Select **Create**.
-
-4. After creation is complete, go to your data factory. You see the **Data Factory** home page as shown in the following image:
-
:::image type="content" source="./media/doc-common-process/data-factory-home-page.png" alt-text="Home page for the Azure Data Factory, with the Open Azure Data Factory Studio tile.":::
- Select **Open** on the **Open Azure Data Factory Studio** tile to launch the Data Integration Application in a separate tab.
+1. Select **Open** on the **Open Azure Data Factory Studio** tile to launch the Data Integration application in a separate tab.
## Load data into Azure Data Lake Storage Gen2
data-factory Load Azure Data Lake Store https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/load-azure-data-lake-store.md
Previously updated : 07/05/2021 Last updated : 08/12/2022 # Load data into Azure Data Lake Storage Gen1 by using Azure Data Factory
This article shows you how to use the Data Factory Copy Data tool to _load data
## Create a data factory
-1. On the left menu, select **Create a resource** > **Analytics** > **Data Factory**:
-
- :::image type="content" source="./media/quickstart-create-data-factory-portal/new-azure-data-factory-menu.png" alt-text="Data Factory selection in the &quot;New&quot; pane":::
-
-2. In the **New data factory** page, provide values for the fields that are shown in the following image:
-
- :::image type="content" source="./media/load-data-into-azure-data-lake-store//new-azure-data-factory.png" alt-text="New data factory page":::
-
- * **Name**: Enter a globally unique name for your Azure data factory. If you receive the error "Data factory name \"LoadADLSG1Demo\" is not available," enter a different name for the data factory. For example, you could use the name _**yourname**_**ADFTutorialDataFactory**. Try creating the data factory again. For the naming rules for Data Factory artifacts, see [Data Factory naming rules](naming-rules.md).
- * **Subscription**: Select your Azure subscription in which to create the data factory.
- * **Resource Group**: Select an existing resource group from the drop-down list, or select the **Create new** option and enter the name of a resource group. To learn about resource groups, see [Using resource groups to manage your Azure resources](../azure-resource-manager/management/overview.md).
- * **Version**: Select **V2**.
- * **Location**: Select the location for the data factory. Only supported locations are displayed in the drop-down list. The data stores that are used by data factory can be in other locations and regions. These data stores include Azure Data Lake Storage Gen1, Azure Storage, Azure SQL Database, and so on.
-
-3. Select **Create**.
-4. After creation is complete, go to your data factory. You see the **Data Factory** home page as shown in the following image:
-
+1. If you have not created your data factory yet, follow the steps in [Quickstart: Create a data factory by using the Azure portal and Azure Data Factory Studio](quickstart-create-data-factory-portal.md) to create one. After creating it, browse to the data factory in the Azure portal.
+ :::image type="content" source="./media/doc-common-process/data-factory-home-page.png" alt-text="Home page for the Azure Data Factory, with the Open Azure Data Factory Studio tile.":::
- Select **Open** on the **Open Azure Data Factory Studio** tile to launch the Data Integration Application in a separate tab.
+1. Select **Open** on the **Open Azure Data Factory Studio** tile to launch the Data Integration application in a separate tab.
## Load data into Data Lake Storage Gen1
This article shows you how to use the Data Factory Copy Data tool to _load data
2. In the **Properties** page, specify **CopyFromAmazonS3ToADLS** for the **Task name** field, and select **Next**: :::image type="content" source="./media/load-data-into-azure-data-lake-store/copy-data-tool-properties-page.png" alt-text="Properties page":::
-3. In the **Source data store** page, click **+ Create new connection**:
+3. In the **Source data store** page, select **+ Create new connection**:
:::image type="content" source="./media/load-data-into-azure-data-lake-store/source-data-store-page.png" alt-text="Source data store page":::
This article shows you how to use the Data Factory Copy Data tool to _load data
:::image type="content" source="./media/load-data-into-azure-data-lake-store/specify-binary-copy.png" alt-text="Screenshot shows the Choose the input file or folder where you can select Copy file recursively and Binary Copy.":::
-7. In the **Destination data store** page, click **+ Create new connection**, and then select **Azure Data Lake Storage Gen1**, and select **Continue**:
+7. In the **Destination data store** page, select **+ Create new connection**, and then select **Azure Data Lake Storage Gen1**, and select **Continue**:
:::image type="content" source="./media/load-data-into-azure-data-lake-store/destination-data-storage-page.png" alt-text="Destination data store page":::
data-factory Load Azure Sql Data Warehouse https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/load-azure-sql-data-warehouse.md
Previously updated : 09/09/2021 Last updated : 08/12/2022 # Load data into Azure Synapse Analytics using Azure Data Factory or a Synapse pipeline
This article shows you how to use the Copy Data tool to _load data from Azure SQ
## Create a data factory
-> [!NOTE]
-> You can skip the creation of a new data factory if you wish to use the pipelines feature within your existing Synapse workspace to load the data. Azure Synapse embeds the functionality of Azure Data Factory within its pipelines feature.
-
-1. On the left menu, select **Create a resource** > **Data + Analytics** > **Data Factory**:
-
-2. On the **New data factory** page, provide values for following items:
-
- * **Name**: Enter *LoadSQLDWDemo* for name. The name for your data factory must be *globally unique. If you receive the error "Data factory name 'LoadSQLDWDemo' is not available", enter a different name for the data factory. For example, you could use the name _**yourname**_**ADFTutorialDataFactory**. Try creating the data factory again. For the naming rules for Data Factory artifacts, see [Data Factory naming rules](naming-rules.md).
- * **Subscription**: Select your Azure subscription in which to create the data factory.
- * **Resource Group**: Select an existing resource group from the drop-down list, or select the **Create new** option and enter the name of a resource group. To learn about resource groups, see [Using resource groups to manage your Azure resources](../azure-resource-manager/management/overview.md).
- * **Version**: Select **V2**.
- * **Location**: Select the location for the data factory. Only supported locations are displayed in the drop-down list. The data stores that are used by data factory can be in other locations and regions. These data stores include Azure Data Lake Store, Azure Storage, Azure SQL Database, and so on.
-
-3. Select **Create**.
-4. After creation is complete, go to your data factory. You see the **Data Factory** home page as shown in the following image:
+1. If you have not created your data factory yet, follow the steps in [Quickstart: Create a data factory by using the Azure portal and Azure Data Factory Studio](quickstart-create-data-factory-portal.md) to create one. After creating it, browse to the data factory in the Azure portal.
- :::image type="content" source="./media/doc-common-process/data-factory-home-page.png" alt-text="Home page for the Azure Data Factory, with the Open Azure Data Factory Studio tile.":::
+ :::image type="content" source="./media/doc-common-process/data-factory-home-page.png" alt-text="Home page for the Azure Data Factory, with the Open Azure Data Factory Studio tile.":::
- Select **Open** on the **Open Azure Data Factory Studio** tile to launch the Data Integration Application in a separate tab.
+1. Select **Open** on the **Open Azure Data Factory Studio** tile to launch the Data Integration application in a separate tab.
## Load data into Azure Synapse Analytics
data-factory Load Office 365 Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/load-office-365-data.md
Previously updated : 08/04/2022 Last updated : 08/12/2022
This article shows you how to use the Data Factory _load data from Microsoft 365
## Create a data factory
-1. On the left menu, select **Create a resource** > **Analytics** > **Data Factory**:
-
- :::image type="content" source="./media/quickstart-create-data-factory-portal/new-azure-data-factory-menu.png" alt-text="Data Factory selection in the &quot;New&quot; pane":::
+1. If you have not created your data factory yet, follow the steps in [Quickstart: Create a data factory by using the Azure portal and Azure Data Factory Studio](quickstart-create-data-factory-portal.md) to create one. After creating it, browse to the data factory in the Azure portal.
-2. In the **New data factory** page, provide values for the fields that are shown in the following image:
-
- :::image type="content" source="./media/load-office-365-data/new-azure-data-factory.png" alt-text="New data factory page":::
-
- * **Name**: Enter a globally unique name for your Azure data factory. If you receive the error "Data factory name *LoadFromOffice365Demo* is not available", enter a different name for the data factory. For example, you could use the name _**yourname**_**LoadFromOffice365Demo**. Try creating the data factory again. For the naming rules for Data Factory artifacts, see [Data Factory naming rules](naming-rules.md).
- * **Subscription**: Select your Azure subscription in which to create the data factory.
- * **Resource Group**: Select an existing resource group from the drop-down list, or select the **Create new** option and enter the name of a resource group. To learn about resource groups, see [Using resource groups to manage your Azure resources](../azure-resource-manager/management/overview.md).
- * **Version**: Select **V2**.
- * **Location**: Select the location for the data factory. Only supported locations are displayed in the drop-down list. The data stores that are used by data factory can be in other locations and regions. These data stores include Azure Data Lake Store, Azure Storage, Azure SQL Database, and so on.
-
-3. Select **Create**.
-4. After creation is complete, go to your data factory. You see the **Data Factory** home page as shown in the following image:
-
:::image type="content" source="./media/doc-common-process/data-factory-home-page.png" alt-text="Home page for the Azure Data Factory, with the Open Azure Data Factory Studio tile.":::
-5. Select **Open** on the **Open Azure Data Factory Studio** tile to launch the Data Integration Application in a separate tab.
+1. Select **Open** on the **Open Azure Data Factory Studio** tile to launch the Data Integration application in a separate tab.
## Create a pipeline
This article shows you how to use the Data Factory _load data from Microsoft 365
### Configure source
-1. Go to the pipeline > **Source tab**, click **+ New** to create a source dataset.
+1. Go to the pipeline > **Source tab**, select **+ New** to create a source dataset.
2. In the New Dataset window, select **Microsoft 365 (Office 365)**, and then select **Continue**.
-3. You are now in the copy activity configuration tab. Click on the **Edit** button next to the Microsoft 365 (Office 365) dataset to continue the data configuration.
+3. You are now in the copy activity configuration tab. Select the **Edit** button next to the Microsoft 365 (Office 365) dataset to continue the data configuration.
:::image type="content" source="./media/load-office-365-data/transition-to-edit-dataset.png" alt-text="Config Microsoft 365 (Office 365) dataset general."::: 4. You see a new tab opened for Microsoft 365 (Office 365) dataset. In the **General tab** at the bottom of the Properties window, enter "SourceOffice365Dataset" for Name.
-5. Go to the **Connection tab** of the Properties window. Next to the Linked service text box, click **+ New**.
+5. Go to the **Connection tab** of the Properties window. Next to the Linked service text box, select **+ New**.
6. In the New Linked Service window, enter "Office365LinkedService" as name, enter the service principal ID and service principal key, then test connection and select **Create** to deploy the linked service.
This article shows you how to use the Data Factory _load data from Microsoft 365
9. You are required to choose one of the date filters and provide the start time and end time values.
-10. Click on the **Import Schema** tab to import the schema for Message dataset.
+10. Select the **Import Schema** tab to import the schema for the Message dataset.
:::image type="content" source="./media/load-office-365-data/edit-source-properties.png" alt-text="Config Microsoft 365 (Office 365) dataset schema.":::
This article shows you how to use the Data Factory _load data from Microsoft 365
2. In the New Dataset window, notice that only the supported destinations are selected when copying from Microsoft 365 (Office 365). Select **Azure Blob Storage**, select Binary format, and then select **Continue**. In this tutorial, you copy Microsoft 365 (Office 365) data into an Azure Blob Storage.
-3. Click on **Edit** button next to the Azure Blob Storage dataset to continue the data configuration.
+3. Select the **Edit** button next to the Azure Blob Storage dataset to continue the data configuration.
4. On the **General tab** of the Properties window, in Name, enter "OutputBlobDataset".
To see activity runs associated with the pipeline run, select the **View Activit
:::image type="content" source="./media/load-office-365-data/activity-status.png" alt-text="Monitor activity":::
-If this is the first time you are requesting data for this context (a combination of which data table is being access, which destination account is the data being loaded into, and which user identity is making the data access request), you will see the copy activity status as **In Progress**, and only when you click into "Details" link under Actions will you see the status as **RequesetingConsent**. A member of the data access approver group needs to approve the request in the Privileged Access Management before the data extraction can proceed.
+If this is the first time you are requesting data for this context (a combination of which data table is being accessed, which destination account the data is being loaded into, and which user identity is making the data access request), you will see the copy activity status as **In Progress**, and only when you select the **Details** link under **Actions** will you see the status as **RequestingConsent**. A member of the data access approver group needs to approve the request in Privileged Access Management before the data extraction can proceed.
_Status as requesting consent:_ :::image type="content" source="./media/load-office-365-data/activity-details-request-consent.png" alt-text="Activity execution details - request consent":::
data-factory Load Sap Bw Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/load-sap-bw-data.md
Previously updated : 09/09/2021 Last updated : 08/12/2022 # Copy data from SAP Business Warehouse with Azure Data Factory or Synapse Analytics
data-factory Manage Azure Ssis Integration Runtime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/manage-azure-ssis-integration-runtime.md
description: Learn how to reconfigure an Azure-SSIS integration runtime in Azure
Previously updated : 02/17/2022 Last updated : 08/12/2022
data-factory Managed Virtual Network Private Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/managed-virtual-network-private-endpoint.md
Previously updated : 06/24/2022 Last updated : 08/12/2022 # Azure Data Factory managed virtual network
Unlike copy activity, pipeline and external activity have a default time to live
### Comparison of different TTL The following table lists the differences between different types of TTL:
-| | Interactive authoring | Copy compute scale | Pipeline & External compute scale |
+| Feature | Interactive authoring | Copy compute scale | Pipeline & External compute scale |
| -- | - | -- | | | When to take effect | Immediately after enablement | First activity execution | First activity execution | | Can be disabled | Y | Y | N |
data-factory Monitor Configure Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/monitor-configure-diagnostics.md
Previously updated : 09/02/2021 Last updated : 08/12/2022 # Configure diagnostic settings and a workspace Create or add diagnostic settings for your data factory.
-1. In the Azure portal, go to **Monitor**. Select **Settings** > **Diagnostics settings**.
-
-1. Select the data factory for which you want to set a diagnostic setting.
-
-1. If no settings exist on the selected data factory, you're prompted to create a setting. Select **Turn on diagnostics**.
-
- :::image type="content" source="media/data-factory-monitor-oms/monitor-oms-image1.png" alt-text="Screenshot that shows creating a diagnostic setting if no settings exist.":::
-
- If there are existing settings on the data factory, you see a list of settings already configured on the data factory. Select **Add diagnostic setting**.
+1. In the Azure portal, navigate to your data factory and select **Diagnostics** on the left navigation pane to see the diagnostics settings. If there are existing settings on the data factory, you see a list of settings already configured. Select **Add diagnostic setting**.
:::image type="content" source="media/data-factory-monitor-oms/add-diagnostic-setting.png" alt-text="Screenshot that shows adding a diagnostic setting if settings exist.":::
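If you'd rather script this step, the same diagnostic setting can be created with the Azure CLI. The following is a minimal sketch, not part of the original walkthrough: the setting name, the placeholder resource IDs, and the chosen log categories (`PipelineRuns`, `TriggerRuns`, and `ActivityRuns`, common Data Factory categories) are assumptions you'd adjust for your own factory and workspace.

```azurecli
# Minimal sketch: route selected Data Factory log categories and all metrics
# to a Log Analytics workspace. Replace the placeholder IDs with your own.
az monitor diagnostic-settings create \
  --name "adf-diagnostics" \
  --resource "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.DataFactory/factories/<factory-name>" \
  --workspace "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.OperationalInsights/workspaces/<workspace-name>" \
  --logs '[{"category":"PipelineRuns","enabled":true},{"category":"TriggerRuns","enabled":true},{"category":"ActivityRuns","enabled":true}]' \
  --metrics '[{"category":"AllMetrics","enabled":true}]'
```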
data-factory Monitor Integration Runtime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/monitor-integration-runtime.md
description: Learn how to monitor different types of integration runtime in Azur
Previously updated : 10/27/2021 Last updated : 08/12/2022
The **NODE SIZE** informational tile shows the SKU (SSIS edition_VM tier_VM seri
The **RUNNING / REQUESTED NODE(S)** informational tile compares the number of nodes currently running to the total number of nodes previously requested for your Azure-SSIS IR.
-The **DUAL STANDBY PAIR / ROLE** informational tile shows the name of your dual standby Azure-SSIS IR pair that works in sync with Azure SQL Database managed instance failover group for business continuity and disaster recovery (BCDR) and the current primary/secondary role of your Azure-SSIS IR. When SSISDB failover occurs, your primary and secondary Azure-SSIS IRs will swap roles (see [Configuring your Azure-SSIS IR for BCDR](./configure-bcdr-azure-ssis-integration-runtime.md)).
+The **DUAL STANDBY PAIR / ROLE** informational tile shows the name of your dual standby Azure-SSIS IR pair that works in sync with Azure SQL Managed Instance failover group for business continuity and disaster recovery (BCDR) and the current primary/secondary role of your Azure-SSIS IR. When SSISDB failover occurs, your primary and secondary Azure-SSIS IRs will swap roles (see [Configuring your Azure-SSIS IR for BCDR](./configure-bcdr-azure-ssis-integration-runtime.md)).
The functional tiles are described in more detail below.
data-factory Monitor Programmatically https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/monitor-programmatically.md
description: Learn how to monitor a pipeline in a data factory by using differen
Previously updated : 01/26/2022 Last updated : 08/12/2022
data-factory Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/policy-reference.md
Previously updated : 08/08/2022 Last updated : 08/16/2022 # Azure Policy built-in definitions for Data Factory (Preview)
data-factory Tutorial Deploy Ssis Virtual Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-deploy-ssis-virtual-network.md
After you've configured a virtual network, you can join your Azure-SSIS IR to th
:::image type="content" source="media/join-azure-ssis-integration-runtime-virtual-network/stop-ir-button.png" alt-text="Stop the IR":::
-1. In the **Integration Runtimes** list, in the **Actions** column, select the **Edit** button for your Azure-SSIS IR.
-
- :::image type="content" source="media/join-azure-ssis-integration-runtime-virtual-network/integration-runtime-edit.png" alt-text="Edit the integration runtime":::
+1. In the **Integration Runtimes** list, in the **Actions** column, select your Azure-SSIS IR to edit it.
1. On the **Integration runtime setup** pane, advance through the **General settings** and **Deployment settings** pages by selecting the **Next** button.
data-lake-analytics Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-analytics/policy-reference.md
Title: Built-in policy definitions for Azure Data Lake Analytics description: Lists Azure Policy built-in policy definitions for Azure Data Lake Analytics. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/08/2022 Last updated : 08/16/2022
data-lake-store Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-store/policy-reference.md
Title: Built-in policy definitions for Azure Data Lake Storage Gen1 description: Lists Azure Policy built-in policy definitions for Azure Data Lake Storage Gen1. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/08/2022 Last updated : 08/16/2022
databox-online Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/policy-reference.md
Title: Built-in policy definitions for Azure Stack Edge description: Lists Azure Policy built-in policy definitions for Azure Stack Edge. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/08/2022 Last updated : 08/16/2022
databox Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox/policy-reference.md
Title: Built-in policy definitions for Azure Data Box description: Lists Azure Policy built-in policy definitions for Azure Data Box. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/08/2022 Last updated : 08/16/2022
ddos-protection Manage Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/manage-permissions.md
To enable DDoS protection for a virtual network, your account must also be assig
Creation of more than one plan is not required for most organizations. A plan cannot be moved between subscriptions. If you want to change the subscription a plan is in, you have to delete the existing plan and create a new one.
-For customers who have various subscriptions, and who want to ensure a single plan is deployed across their tenant for cost control, you can use Azure Policy to [restrict creation of Azure DDoS Protection Standard plans](https://aka.ms/ddosrestrictplan). This policy will block the creation of any DDoS plans, unless the subscription has been previously marked as an exception. This policy will also show a list of all subscriptions that have a DDoS plan deployed but should not, marking them as out of compliance.
+If you have multiple subscriptions and want to ensure a single plan is deployed across your tenant for cost control, you can use Azure Policy to [restrict creation of Azure DDoS Protection Standard plans](https://github.com/Azure/Azure-Network-Security/tree/master/Azure%20DDoS%20Protection/Azure%20Policy%20Definitions/Restrict%20creation%20of%20Azure%20DDoS%20Protection%20Standard%20Plans%20with%20Azure%20Policy). This policy blocks the creation of any DDoS plans unless the subscription has previously been marked as an exception. It also shows a list of all subscriptions that have a DDoS plan deployed but shouldn't, marking them as out of compliance.
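As a hedged sketch only (the policy definition itself comes from the linked repository, and the file name `ddos-restrict-plan.json` below is a placeholder for the downloaded definition), the definition can be created and assigned with the Azure CLI:

```azurecli
# Create a custom policy definition from the JSON downloaded from the linked
# repo, then assign it at the scope you want to govern. Names are examples.
az policy definition create \
  --name "restrict-ddos-plan-creation" \
  --display-name "Restrict creation of Azure DDoS Protection Standard plans" \
  --rules ddos-restrict-plan.json \
  --mode All

az policy assignment create \
  --name "restrict-ddos-plan-creation" \
  --policy "restrict-ddos-plan-creation" \
  --scope "/subscriptions/<subscription-id>"
```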
## Next steps
ddos-protection Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/policy-reference.md
na Previously updated : 08/08/2022 Last updated : 08/16/2022
defender-for-cloud Auto Deploy Azure Monitoring Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/auto-deploy-azure-monitoring-agent.md
The Azure Monitor Agent requires additional extensions. The ASA extension, which
### Additional security events collection
-When you auto-provision the Log Analytics agent in Defender for Cloud, you can choose to collect additional security events to the workspace. When you auto-provision the Log Analytics agent in Defender for Cloud, the option to collect additional security events to the workspace isn't available. Defender for Cloud doesn't rely on these security events, but they can be helpful for investigations through Microsoft Sentinel.
+When you auto-provision the Log Analytics agent in Defender for Cloud, you can choose to collect additional security events to the workspace. When you auto-provision the Azure Monitor agent in Defender for Cloud, the option to collect additional security events to the workspace isn't available. Defender for Cloud doesn't rely on these security events, but they can be helpful for investigations through Microsoft Sentinel.
-If you want to collect security events when you auto-provision the Azure Monitor Agent, you can create a [Data Collection Rule](/azure-monitor/essentials/data-collection-rule-overview) to collect the required events.
+If you want to collect security events when you auto-provision the Azure Monitor Agent, you can create a [Data Collection Rule](/azure/azure-monitor/essentials/data-collection-rule-overview) to collect the required events.
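As an illustrative sketch only: one way to create such a Data Collection Rule is with the Azure CLI's `monitor-control-service` extension. The resource names and the `security-events-dcr.json` rule file are placeholders; the rule file's contents follow the DCR schema described in the linked article.

```azurecli
# Sketch: create a Data Collection Rule from a JSON definition that lists the
# Windows security events to collect. Names and the rule file are placeholders.
az extension add --name monitor-control-service
az monitor data-collection rule create \
  --resource-group "<resource-group>" \
  --location "<region>" \
  --name "dcr-security-events" \
  --rule-file "security-events-dcr.json"
```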
Like for Log Analytics workspaces, Defender for Cloud users are eligible for [500-MB of free data](enhanced-security-features-overview.md#faqpricing-and-billing) daily on defined data types that include security events.
defender-for-cloud Defender For Sql Usage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-sql-usage.md
Last updated 07/28/2022
# Enable Microsoft Defender for SQL servers on machines
-This Microsoft Defender plan detects anomalous activities indicating unusual and potentially harmful attempts to access or exploit databases.
+This Microsoft Defender plan detects anomalous activities indicating unusual and potentially harmful attempts to access or exploit databases on the SQL server.
You'll see alerts when there are suspicious database activities, potential vulnerabilities, or SQL injection attacks, and anomalous database access and query patterns.
Microsoft Defender for SQL servers on machines extends the protections for your
- On-premises SQL servers:
- - [Azure Arc-enabled SQL Server (preview)](/sql/sql-server/azure-arc/overview)
+ - [Azure Arc-enabled SQL Server](/sql/sql-server/azure-arc/overview)
- [SQL Server running on Windows machines without Azure Arc](../azure-monitor/agents/agent-windows.md)
defender-for-cloud Enable Data Collection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/enable-data-collection.md
We recommend enabling auto provisioning, but it's disabled by default.
## How does auto provisioning work?
-Defender for Cloud's auto provisioning settings has a toggle for each type of supported extension. When you enable auto provisioning of an extension, you assign the appropriate **Deploy if not exists** policy. This policy type ensures the extension is provisioned on all existing and future resources of that type.
+Defender for Cloud's auto provisioning settings page has a toggle for each type of supported extension. When you enable auto provisioning of an extension, you assign the appropriate **Deploy if not exists** policy. This policy type ensures the extension is provisioned on all existing and future resources of that type.
> [!TIP]
-> Learn more about Azure Policy effects including deploy if not exists in [Understand Azure Policy effects](../governance/policy/concepts/effects.md).
+> Learn more about Azure Policy effects including **Deploy if not exists** in [Understand Azure Policy effects](../governance/policy/concepts/effects.md).
<a name="auto-provision-mma"></a>
defender-for-cloud Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/policy-reference.md
Title: Built-in policy definitions for Microsoft Defender for Cloud description: Lists Azure Policy built-in policy definitions for Microsoft Defender for Cloud. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/08/2022 Last updated : 08/16/2022 # Azure Policy built-in definitions for Microsoft Defender for Cloud
defender-for-cloud Quickstart Onboard Aws https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/quickstart-onboard-aws.md
You can check out the following blogs:
## Next steps
-Connecting your AWS account is part of the multicloud experience available in Microsoft Defender for Cloud. For related information, see the following page:
+Connecting your AWS account is part of the multicloud experience available in Microsoft Defender for Cloud. For related information, see the following pages:
- [Security recommendations for AWS resources - a reference guide](recommendations-reference-aws.md). - [Connect your GCP projects to Microsoft Defender for Cloud](quickstart-onboard-gcp.md)
+- [Troubleshoot your multicloud connectors](troubleshooting-guide.md#troubleshooting-the-native-multicloud-connector)
defender-for-cloud Quickstart Onboard Gcp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/quickstart-onboard-gcp.md
Yes. To create, edit, or delete Defender for Cloud cloud connectors with a REST
## Next steps
-Connecting your GCP project is part of the multicloud experience available in Microsoft Defender for Cloud. For related information, see the following page:
+Connecting your GCP project is part of the multicloud experience available in Microsoft Defender for Cloud. For related information, see the following pages:
- [Connect your AWS accounts to Microsoft Defender for Cloud](quickstart-onboard-aws.md)-- [Google Cloud resource hierarchy](https://cloud.google.com/resource-manager/docs/cloud-platform-resource-hierarchy)--Learn about the Google Cloud resource hierarchy in Google's online docs
+- [Google Cloud resource hierarchy](https://cloud.google.com/resource-manager/docs/cloud-platform-resource-hierarchy) - Learn about the Google Cloud resource hierarchy in Google's online docs
+- [Troubleshoot your multicloud connectors](troubleshooting-guide.md#troubleshooting-the-native-multicloud-connector)
defender-for-iot How To Manage Subscriptions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-manage-subscriptions.md
Delete all sensors that are associated with the subscription prior to removing t
> [!NOTE] > To remove Enterprise IoT only from your plan, cancel your plan from Microsoft Defender for Endpoint. For more information, see the [Defender for Endpoint documentation](/microsoft-365/security/defender-endpoint/enable-microsoft-defender-for-iot-integration#cancel-your-defender-for-iot-plan).
+> [!IMPORTANT]
+> If you are a Microsoft Defender for IoT customer and also have a subscription to Microsoft Defender for Endpoint, the data collected by Microsoft Defender for IoT will automatically populate in your Microsoft Defender for Endpoint instance as well. Customers who want to delete their data from Defender for IoT must also delete their data from Defender for Endpoint.
+ ## Move existing sensors to a different subscription Business considerations may require that you apply your existing IoT sensors to a different subscription than the one you're currently using. To do this, you'll need to onboard a new plan and register the sensors under the new subscription, and then remove them from the old subscription. This process may include some downtime, and historic data isn't migrated.
defender-for-iot References Work With Defender For Iot Apis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/references-work-with-defender-for-iot-apis.md
This section describes on-premises management console APIs for:
### Version 3 -- [ServiceNow Integration API - "/external/v3/integration/ (Preview)](#servicenow-integration-apiexternalv3integration-preview)
+- [Request devices - /external/v3/integration/devices/{timestamp}](#request-devicesexternalv3integrationdevicestimestamp)
+
+- [Request device connection events - /external/v3/integration/connections/{timestamp}](#request-device-connection-eventsexternalv3integrationconnectionstimestamp)
+
+- [Request device data by device ID - /external/v3/integration/device/{deviceId}](#request-device-data-by-device-idexternalv3integrationdevicedeviceid)
+
+- [Request deleted devices - /external/v3/integration/deleteddevices/{timestamp}](#request-deleted-devicesexternalv3integrationdeleteddevicestimestamp)
+
+- [Request sensor data - external/v3/integration/sensors](#request-sensor-dataexternalv3integrationsensors)
+
+- [Request all device CVEs - /external/v3/integration/devicecves/{timestamp}](#request-all-device-cvesexternalv3integrationdevicecvestimestamp)
All parameters in Version 3 APIs are optional.
Example:
|-|-|-| |GET|`curl -k -H "Authorization: <AUTH_TOKEN>" 'https://<IP_ADDRESS>/external/v2/alerts/pcap/<ID>'`|`curl -k -H "Authorization: 1234b734a9244d54ab8d40aedddcabcd" 'https://10.1.0.1/external/v2/alerts/pcap/1'`
-### ServiceNow Integration API - "/external/v3/integration/ (Preview)
+### Request devices - /external/v3/integration/devices/{timestamp}
-The below API's can be used with the ServiceNow integration via the ServiceNow's Service Graph Connector for Defender for IoT.
+This API returns data about all devices that were updated after the given timestamp.
-### devices
+#### Method
-This API returns data about all devices that were updated after the given timestamp.
+- **GET**
+
+#### Path parameters
-#### Request
+- **timestamp** – the time from which updates are required; only later updates will be returned.
-- Path: "/devices/{timestamp}"-- Method type: GET-- Path parameters:
- - "**timestamp**" – the time from which updates are required, only later updates will be returned.
+#### Query parameters
-- Query parameters:
- - "**sensorId**" - use this parameter to get only devices seen by a specific sensor. The ID should be taken from the results of the Sensors API.
- - "**notificationType**" - should be a number, from the following mapping:
- - 0 – both updated and new devices (default).
- - 1 – only new devices.
- - 2 – only updated devices.
- - "**page**" - the page number, from the result set (first page is 0, default value is 0)
- - "**size**" - the page size (default value is 50)
+- **sensorId** - use this parameter to get only devices seen by a specific sensor. The ID should be taken from the results of the [sensor](#request-sensor-dataexternalv3integrationsensors) API.
+- **notificationType** - should be a number, from the following mapping:
+  - **0** – both updated and new devices (default).
+  - **1** – only new devices.
+  - **2** – only updated devices.
+- **page** - the page number, from the result set (first page is 0, default value is 0).
+- **size** - the page size (default value is 50).
-#### Response
+#### Response type
-- Type: JSON-- Structure:
- - "**u_count**" - amount of object in the full result sets, including all pages.
- - "**u_devices**" - array of device objects. Each object is defined with the parameters listed in the [device](#device) API.
+- **JSON**
-### Connections
+#### Response structure
-This API returns data about all device connections that were updated after the given timestamp.
+- **u_count** - the number of objects in the full result set, including all pages.
+- **u_devices** - array of device objects. Each object is defined with the parameters listed in the [device ID](#request-device-data-by-device-idexternalv3integrationdevicedeviceid) API.
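As an illustration only, a request to this endpoint can follow the same curl pattern shown for the alert APIs earlier in this article; `<AUTH_TOKEN>`, `<IP_ADDRESS>`, and `<TIMESTAMP>` are placeholders, and the query string simply combines the optional parameters listed above.

```bash
# Hedged sketch: request new devices (notificationType=1) seen after <TIMESTAMP>,
# returning the first page of 50 results.
curl -k -H "Authorization: <AUTH_TOKEN>" \
  'https://<IP_ADDRESS>/external/v3/integration/devices/<TIMESTAMP>?notificationType=1&page=0&size=50'
```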
-#### Request
+### Request device connection events - /external/v3/integration/connections/{timestamp}
-- Path: "/connections/{timestamp}"-- Method type: GET-- Path parameters:
- - "**timestamp**" – the time from which updates are required, only later updates will be returned.
-- Query parameters:
- - "**page**" - the page number, from the result set (default value is 1)
- - "**size**" - the page size (default value is 50)
+This API returns data about all device connection events that were updated after the given timestamp.
-#### Response
+#### Method
+
+- **GET**
+
+#### Path parameters
+
+- **timestamp** – the time from which updates are required; only later updates will be returned.
+
+#### Query parameters
+
+- **page** - the page number, from the result set (default value is 1).
+- **size** - the page size (default value is 50).
-- Type: JSON-- Structure:
- - "**u_count**" - amount of object in the full result sets, including all pages.
- - "**u_connections**" - array of
- - "**u_src_device_id**" - the ID of the source device.
- - "**u_dest_device_id**" - the ID of the destination device.
- - "**u_connection_type**" - one of the following:
- - "**One Way**"
- - "**Two Way**"
- - "**Multicast**"
+#### Response type
+
+- **JSON**
+
+#### Response structure
+
+- **u_count** - the number of objects in the full result set, including all pages.
+- **u_connections** - array of:
+ - **u_src_device_id** - the ID of the source device.
+ - **u_dest_device_id** - the ID of the destination device.
+ - **u_connection_type** - one of the following:
+ - **One Way**
+ - **Two Way**
+ - **Multicast**
-### device
+### Request device data by device ID - /external/v3/integration/device/{deviceId}
This API returns data about a specific device per a given device ID.
-#### Request
+#### Method
-- Path: "/device/{deviceId}"-- Method type: GET-- Path parameters:
- - "**deviceId**" – the ID of the requested device.
+- **GET**
+
+#### Path parameters
+
+- **deviceId** – the ID of the requested device.
#### Response -- Type: JSON-- Structure:
- - "**u_id**" - the internal ID of the device.
- - "**u_vendor**" - the name of the vendor.
- - "**u_mac_address_objects**" - array of
- - "**u_mac_address**" - mac address of the device.
- - "**u_ip_address_objects**" - array of
- - "**u_ip_address**" - IP address of the device.
- - "**u_guessed_mac_addresses**" - array of
- - "**u_mac_address**" - guessed mac address.
- - "**u_name**" - the name of the device.
- - "**u_last_activity**" - timestamp of the last time the device was active.
- - "**u_first_discovered**" - timestamp of the discovery time of the device.
- - "**u_last_update**" - timestamp of the last update time of the device.
- - "**u_vlans**" - array of
- - "**u_vlan**" - vlan in which the device is in.
- - "**u_device_type**" -
- - "**u_name**" - the device type
- - "**u_purdue_layer**" - the default purdue layer for this device type.
- - "**u_category**" - will be one of the following:
- - "**IT**"
- - "**ICS**"
- - "**IoT**"
- - "**Network**"
- - "**u_operating_system**" - the device operating system.
- - "**u_protocol_objects**" - array of
- - "**u_protocol**" - protocol the device uses.
- - "**u_purdue_layer**" - the purdue layer that was manually set by the user.
- - "**u_sensor_ids**" - array of
- - "**u_sensor_id**" - the ID of the sensor that saw the device.
- - "**u_device_urls**" - array of
- - "**u_device_url**" the URL to view the device in the sensor.
- - "**u_firmwares**" - array of
- - "**u_address**"
- - "**u_module_address**"
- - "**u_serial**"
- - "**u_model**"
- - "**u_version**"
- - "**u_additional_data**"
-
-### Deleted devices
-
-#### Request
--- Path: "/deleteddevices/{timestamp}"-- Method type: GET-- Path parameters:
- - "**timestamp**" – the time from which updates are required, only later updates will be returned.
+- **JSON**
+
+#### Response structure
+
+- **u_id** - the internal ID of the device.
+- **u_vendor** - the name of the vendor.
+- **u_mac_address_objects** - array of:
+ - **u_mac_address** - mac address of the device.
+- **u_ip_address_objects** - array of:
+ - **u_ip_address** - IP address of the device.
+ - **u_guessed_mac_addresses** - array of:
+  - **u_mac_address** - guessed mac address.
+- **u_name** - the name of the device.
+- **u_last_activity** - timestamp of the last time the device was active.
+- **u_first_discovered** - timestamp of the discovery time of the device.
+- **u_last_update** - timestamp of the last update time of the device.
+- **u_vlans** - array of:
+  - **u_vlan** - the VLAN that the device is in.
+- **u_device_type** - array of:
+ - **u_name** - the device type.
+ - **u_purdue_layer** - the default purdue layer for this device type.
+ - **u_category** - will be one of the following:
+ - **IT**
+ - **ICS**
+ - **IoT**
+ - **Network**
+- **u_operating_system** - the device operating system.
+- **u_protocol_objects** - array of:
+ - **u_protocol** - protocol the device uses.
+- **u_purdue_layer** - the purdue layer that was manually set by the user.
+- **u_sensor_ids** - array of:
+ - **u_sensor_id** - the ID of the sensor that saw the device.
+- **u_device_urls** - array of:
+  - **u_device_url** - the URL to view the device in the sensor.
+- **u_firmwares** - array of:
+ - **u_address**
+ - **u_module_address**
+ - **u_serial**
+ - **u_model**
+ - **u_version**
+ - **u_additional_data**
+
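As an illustration only, the same curl pattern can be used to request a single device; the device ID `1` below is an arbitrary example, and the token and console address are placeholders.

```bash
# Hedged sketch: request the device whose internal ID is 1.
curl -k -H "Authorization: <AUTH_TOKEN>" \
  'https://<IP_ADDRESS>/external/v3/integration/device/1'
```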
+### Request deleted devices - /external/v3/integration/deleteddevices/{timestamp}
+
+This API returns data about deleted devices after the given timestamp.
+
+#### Method
+
+- **GET**
+
+#### Path parameters
+
+- **timestamp** – the time from which updates are required; only later updates will be returned.
#### Response -- Type: JSON-- Structure:
- - Array of
- - "**u_id**" - the ID of the deleted device.
+- **JSON**
+
+#### Response structure
+
+Array of:
+- **u_id** - the ID of the deleted device.
+
+### Request sensor data - external/v3/integration/sensors
-### sensors
+This API returns data about the sensors.
-#### Request
+#### Method
-- Path: "/sensors"-- Method type: GET
+- **GET**
#### Response -- Type: JSON-- Structure:
- - Array of
- - "**u_id**" - internal sensor ID, to be used in the devices API.
- - "**u_name**" - the name of the appliance.
- - "**u_connection_state**" - connectivity with the CM state. One of the following:
- - "**SYNCED**" - Connection is successful.
- - "**OUT_OF_SYNC**" - Management console cannot process data received from Sensor.
- - "**TIME_DIFF_OFFSET**" - Time drift detected. management console has been disconnected from Sensor.
- - "**DISCONNECTED**" - Sensor not communicating with management console. Check network connectivity.
- - "**u_interface_address**" - the network address of the appliance.
- - "**u_version**" - string representation of the sensor's version.
- - "**u_alert_count**" - number of alerts found by the sensor.
- - "**u_device_count**" - number of devices discovered by the sensor.
- - "**u_unhandled_alert_count**" - number of unhandled alerts in the sensor.
- - "**u_is_activated**" - is the alert activated.
- - "**u_data_intelligence_version**" - string representation of the data intelligence installed in the sensor.
- - "**u_remote_upgrade_stage**" - the state of the remote upgrade. One of the following:
- - "**UPLOADING**"
- - "**PREPARE_TO_INSTALL**"
- - "**STOPPING_PROCESSES**"
- - "**BACKING_UP_DATA**"
- - "**TAKING_SNAPSHOT**"
- - "**UPDATING_CONFIGURATION**"
- - "**UPDATING_DEPENDENCIES**"
- - "**UPDATING_LIBRARIES**"
- - "**PATCHING_DATABASES**"
- - "**STARTING_PROCESSES**"
- - "**VALIDATING_SYSTEM_SANITY**"
- - "**VALIDATION_SUCCEEDED_REBOOTING**"
- - "**SUCCESS**"
- - "**FAILURE**"
- - "**UPGRADE_STARTED**"
- - "**STARTING_INSTALLATION**"
- - "**INSTALLING_OPERATING_SYSTEM**"
- - "**u_uid**" - globally unique identifier of the sensor
- - "**u_is_in_learning_mode**" - Boolean indication as to whether the sensor is in Learn mode or not
-
-### devicecves
-
-#### Request
--- Path: "/devicecves/{timestamp}"-- Method type: GET-- Path parameters:
- - "**timestamp**" – the time from which updates are required, only later updates will be returned.
-- Query parameters:
- - "**page**" - Defines the page number, from the result set (first page is 0, default value is 0)
- - "**size**" - Defines the page size (default value is 50)
- - "**sensorId**" - Shows results from a specific sensor, as defined by the given sensor ID.
- - "**score**" - Determines a minimum CVE score to be retrieved. All results will have a CVE score equal to or higher than the given value. Default = **0**.
- - "**deviceIds**" - A comma-separated list of device IDs from which you want to show results. For example: **1232,34,2,456**
+- **JSON**
+
+#### Response structure
+
+Array of:
+
+- **u_id** - internal sensor ID, to be used in the devices API.
+- **u_name** - the name of the appliance.
+- **u_connection_state** - connectivity with the CM state. One of the following:
+ - **SYNCED** - connection is successful.
+ - **OUT_OF_SYNC** - management console cannot process data received from the sensor.
+  - **TIME_DIFF_OFFSET** - time drift detected. The management console has been disconnected from the sensor.
+ - **DISCONNECTED** - sensor not communicating with management console. Check network connectivity.
+- **u_interface_address** - the network address of the appliance.
+- **u_version** - string representation of the sensor's version.
+- **u_alert_count** - number of alerts found by the sensor.
+- **u_device_count** - number of devices discovered by the sensor.
+- **u_unhandled_alert_count** - number of unhandled alerts in the sensor.
+- **u_is_activated** - is the alert activated.
+- **u_data_intelligence_version** - string representation of the data intelligence installed in the sensor.
+- **u_remote_upgrade_stage** - the state of the remote upgrade. Will be one of the following:
+ - **UPLOADING**
+ - **PREPARE_TO_INSTALL**
+ - **STOPPING_PROCESSES**
+ - **BACKING_UP_DATA**
+ - **TAKING_SNAPSHOT**
+ - **UPDATING_CONFIGURATION**
+ - **UPDATING_DEPENDENCIES**
+ - **UPDATING_LIBRARIES**
+ - **PATCHING_DATABASES**
+ - **STARTING_PROCESSES**
+ - **VALIDATING_SYSTEM_SANITY**
+ - **VALIDATION_SUCCEEDED_REBOOTING**
+ - **SUCCESS**
+ - **FAILURE**
+ - **UPGRADE_STARTED**
+ - **STARTING_INSTALLATION**
+ - **INSTALLING_OPERATING_SYSTEM**
+- **u_uid** - globally unique identifier of the sensor.
+- **u_is_in_learning_mode** - boolean indication as to whether the sensor is in Learn mode or not.
+
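As an illustration only, the sensors endpoint takes no path or query parameters and can be called with the same curl pattern; `<AUTH_TOKEN>` and `<IP_ADDRESS>` are placeholders.

```bash
# Hedged sketch: list the sensors known to the on-premises management console.
curl -k -H "Authorization: <AUTH_TOKEN>" \
  'https://<IP_ADDRESS>/external/v3/integration/sensors'
```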
+### Request all device CVEs - /external/v3/integration/devicecves/{timestamp}
+
+This API returns data about device CVEs after the given timestamp.
+
+#### Method
+
+- **GET**
+
+#### Path parameters
+
+- **timestamp** – the time from which updates are required; only later updates will be returned.
+
+#### Query parameters
+
+- **page** - defines the page number, from the result set (first page is 0, default value is 0).
+- **size** - defines the page size (default value is 50).
+- **sensorId** - shows results from a specific sensor, as defined by the given sensor ID.
+- **score** - determines a minimum CVE score to be retrieved. All results will have a CVE score equal to or higher than the given value (default value is 0).
+- **deviceIds** - a comma-separated list of device IDs from which you want to show results. For example: **1232,34,2,456**
#### Response -- Type: JSON-- Structure:
- - "**u_count**" - amount of object in the full result sets, including all pages.
- - "**u_id**" - the same as in the specific device API.
- - "**u_name**" - the same as in the specific device API.
- - "**u_ip_address_objects**" - the same as in the specific device API.
- - "**u_mac_address_objects**" - the same as in the specific device API.
- - "**u_last_activity**" - the same as in the specific device API.
- - "**u_last_update**" - the same as in the specific device API.
- - "**u_cves**" - an array of CVEs:
- - "**u_ip_address**" - the IP address of the specific interface with the specific firmware on which the CVE was detected.
- - "**u_cve_id**"- the ID of the CVE
- - "**u_score**"- the risk score of the CVE
- - "**u_attack_vector**" - one of the following:
- - "**ADJACENT_NETWORK**"
- - "**LOCAL**"
- - "**NETWORK**"
- - "**u_description**" - description about the CVE.
+- **JSON**
+
+#### Response structure
+
+- **u_count** - the number of objects in the full result set, including all pages.
+- **u_id** - the same as in the specific device API.
+- **u_name** - the same as in the specific device API.
+- **u_ip_address_objects** - the same as in the specific device API.
+- **u_mac_address_objects** - the same as in the specific device API.
+- **u_last_activity** - the same as in the specific device API.
+- **u_last_update** - the same as in the specific device API.
+- **u_cves** - an array of CVEs:
+ - **u_ip_address** - the IP address of the specific interface with the specific firmware on which the CVE was detected.
+  - **u_cve_id** - the ID of the CVE.
+  - **u_score** - the risk score of the CVE.
+ - **u_attack_vector** - one of the following:
+ - **ADJACENT_NETWORK**
+ - **LOCAL**
+ - **NETWORK**
+ - **u_description** - description of the CVE.
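As an illustration only, the optional parameters above can be combined on the devicecves endpoint as in the following hedged sketch; the token, console address, timestamp, score, and device IDs are placeholders or example values.

```bash
# Hedged sketch: CVEs with a score of at least 8 on devices 1232 and 34,
# updated after <TIMESTAMP>.
curl -k -H "Authorization: <AUTH_TOKEN>" \
  'https://<IP_ADDRESS>/external/v3/integration/devicecves/<TIMESTAMP>?score=8&deviceIds=1232,34'
```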
## Next steps
defender-for-iot Release Notes Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/release-notes-archive.md
Noted features listed below are in PREVIEW. The [Azure Preview Supplemental Term
The following feature enhancements are available with version 10.5.3 of Microsoft Defender for IoT. -- The on-premises management console, has a new [ServiceNow Integration API - "/external/v3/integration/ (Preview)](references-work-with-defender-for-iot-apis.md#servicenow-integration-apiexternalv3integration-preview).
+- The on-premises management console has new [integration APIs](references-work-with-defender-for-iot-apis.md#version-3).
- Enhancements have been made to the network traffic analysis of multiple OT and ICS protocol dissectors.
defender-for-iot Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/release-notes.md
Now you can add any of the following parameters to your query to fine tune your
- "**score**" - Determines a minimum CVE score to be retrieved. All results will have a CVE score equal to or higher than the given value. Default = **0**. - "**deviceIds**" - A comma-separated list of device IDs from which you want to show results. For example: **1232,34,2,456**
-For more information, see [ServiceNow Integration API - "/external/v3/integration/ (Preview)](references-work-with-defender-for-iot-apis.md#servicenow-integration-apiexternalv3integration-preview).
->>>>>>> 3e9c47c4758cdb6f63a6873219cab9498206cb2a
+For more information, see [Management console APIs - Version 3](references-work-with-defender-for-iot-apis.md#version-3).
### OT appliance hardware profile updates
defender-for-iot Tutorial Getting Started Eiot Sensor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/tutorial-getting-started-eiot-sensor.md
Defender for IoT supports the entire breadth of IoT devices in your environment,
In this tutorial, you learn about: > [!div class="checklist"]
-> * Integrating with Microsoft Defender for Endpoint
+> * Integration with Microsoft Defender for Endpoint
> * Prerequisites for Enterprise IoT network monitoring with Defender for IoT > * How to prepare a physical appliance or VM as a network sensor > * How to onboard an Enterprise IoT sensor and install software > * How to view detected Enterprise IoT devices in the Azure portal > * How to view devices, alerts, vulnerabilities, and recommendations in Defender for Endpoint
-> [!IMPORTANT]
-> The **Enterprise IoT network sensor** is currently in **PREVIEW**. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
- ## Microsoft Defender for Endpoint integration
-Integrate with [Microsoft Defender for Endpoint](/microsoft-365/security/defender-endpoint/) to extend your security analytics capabilities, providing complete coverage across your Enterprise IoT devices. Defender for Endpoint analytics features include alerts, vulnerabilities, and recommendations for your enterprise devices.
+Defender for IoT integrates with [Microsoft Defender for Endpoint](/microsoft-365/security/defender-endpoint/) to extend your security analytics capabilities, providing complete coverage across your Enterprise IoT devices. Defender for Endpoint analytics features include alerts, vulnerabilities, and recommendations for your enterprise devices.
-After you've onboarded a plan for Enterprise IoT and set up your Enterprise IoT network sensor, your device data integrates automatically with Microsoft Defender for Endpoint.
+Microsoft 365 P2 customers can onboard a plan for Enterprise IoT through the Microsoft Defender for Endpoint portal. After you've onboarded a plan for Enterprise IoT, view discovered IoT devices and related alerts, vulnerabilities, and recommendations in Defender for Endpoint.
-- Discovered devices appear in both the Defender for IoT and Defender for Endpoint portals.-- In Defender for Endpoint, view discovered IoT devices and related alerts, vulnerabilities, and recommendations.
+Microsoft 365 P2 customers can also install the Enterprise IoT network sensor (currently in **Public Preview**) to gain more visibility into additional IoT segments of the corporate network that were not previously covered by Defender for Endpoint. Deploying a network sensor is not a prerequisite for onboarding Enterprise IoT.
For more information, see [Onboard with Microsoft Defender for IoT in Defender for Endpoint](/microsoft-365/security/defender-endpoint/enable-microsoft-defender-for-iot-integration).
+> [!IMPORTANT]
+> The **Enterprise IoT network sensor** is currently in **PREVIEW**. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+ ## Prerequisites Before starting this tutorial, make sure that you have the following prerequisites.
Alternately, remove your sensor manually from the CLI. For more information, see
For more information, see [Sensor management options from the Azure portal](how-to-manage-sensors-on-the-cloud.md#sensor-management-options-from-the-azure-portal). - ## Next steps Continue viewing device data in both the Azure portal and Defender for Endpoint, depending on your organization's needs. - - [Manage sensors with Defender for IoT in the Azure portal](how-to-manage-sensors-on-the-cloud.md) - [Threat intelligence research and packages](how-to-work-with-threat-intelligence-packages.md) - [Manage your IoT devices with the device inventory for organizations](how-to-manage-device-inventory-for-organizations.md)
Continue viewing device data in both the Azure portal and Defender for Endpoint,
In Defender for Endpoint, also view alerts data, recommendations and vulnerabilities related to your network traffic.
-For more information in Defender for Endpoint documentation, see:
+For more information in the Defender for Endpoint documentation, see:
- [Onboard with Microsoft Defender for IoT in Defender for Endpoint](/microsoft-365/security/defender-endpoint/enable-microsoft-defender-for-iot-integration) - [Defender for Endpoint device inventory](/microsoft-365/security/defender-endpoint/machines-view-overview)
digital-twins Reference Query Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/reference-query-functions.md
The following query returns all digital twins whose IDs end in `-small`. The str
## IS_BOOL
-A type checking and casting function for determining whether an expression has a Boolean value.
+A type checking function for determining whether a property has a Boolean value.
This function is often combined with other predicates if the program processing the query results requires a boolean value, and you want to filter out cases where the property is not a boolean.
This function is often combined with other predicates if the program processing
### Arguments
-`<expression>`, an expression to check whether it is a Boolean.
+`<property>`, a property to check whether it is a Boolean.
### Returns
-A Boolean value indicating if the type of the specified expression is a Boolean.
+A Boolean value indicating if the type of the specified property is a Boolean.
### Example
The following query builds on the above example to select the digital twins that
## IS_DEFINED
-A type checking and casting function to check whether a property is defined.
-
-This is only supported when the property value is a primitive type. Primitive types include string, Boolean, numeric, or `null`. `DateTime`, object types, and arrays are not supported.
+A type checking function to determine whether a property is defined.
### Syntax
This is only supported when the property value is a primitive type. Primitive ty
### Arguments
-`<property>`, a property to determine whether it is defined. The property must be of a primitive type.
+`<property>`, a property to determine whether it is defined.
### Returns
The following query returns all digital twins who have a defined `Location` prop
## IS_NULL
-A type checking and casting function for determining whether an expression's value is `null`.
+A type checking function for determining whether a property's value is `null`.
### Syntax
A type checking and casting function for determining whether an expression's val
### Arguments
-`<expression>`, an expression to check whether it is null.
+`<property>`, a property to check whether it is null.
### Returns
-A Boolean value indicating if the type of the specified expression is `null`.
+A Boolean value indicating if the type of the specified property is `null`.
### Example
The following query returns twins who do not have a null value for Temperature.
## IS_NUMBER
-A type checking and casting function for determining whether an expression has a number value.
+A type checking function for determining whether a property has a number value.
This function is often combined with other predicates if the program processing the query results requires a number value, and you want to filter out cases where the property is not a number.
This function is often combined with other predicates if the program processing
### Arguments
-`<expression>`, an expression to check whether it is a number.
+`<property>`, a property to check whether it is a number.
### Returns
-A Boolean value indicating if the type of the specified expression is a number.
+A Boolean value indicating if the type of the specified property is a number.
### Example
The following query selects the digital twins that have a numeric `Capacity` pro
## IS_OBJECT
-A type checking and casting function for determining whether an expression's value is of a JSON object type.
+A type checking function for determining whether a property's value is of a JSON object type.
This function is often combined with other predicates if the program processing the query results requires a JSON object, and you want to filter out cases where the value is not a JSON object.
This function is often combined with other predicates if the program processing
### Arguments
-`<expression>`, an expression to check whether it is of an object type.
+`<property>`, a property to check whether it is of an object type.
### Returns
-A Boolean value indicating if the type of the specified expression is a JSON object.
+A Boolean value indicating if the type of the specified property is a JSON object.
### Example
The following query selects all of the digital twins where this is an object cal
## IS_OF_MODEL
-A type checking and casting function to determine whether a twin is of a particular model type. Includes models that inherit from the specified model.
+A type checking function to determine whether a twin is of a particular model type. Includes models that inherit from the specified model.
### Syntax
The following query returns twins from the DT collection that are exactly of the
## IS_PRIMITIVE
-A type checking and casting function for determining whether an expression's value is of a primitive type (string, Boolean, numeric, or `null`).
+A type checking function for determining whether a property's value is of a primitive type (string, Boolean, numeric, or `null`).
This function is often combined with other predicates if the program processing the query results requires a primitive-typed value, and you want to filter out cases where the property is not primitive.
This function is often combined with other predicates if the program processing
### Arguments
-`<expression>`, an expression to check whether it is of a primitive type.
+`<property>`, a property to check whether it is of a primitive type.
### Returns
-A Boolean value indicating if the type of the specified expression is one of the primitive types (string, Boolean, numeric, or `null`).
+A Boolean value indicating if the type of the specified property is one of the primitive types (string, Boolean, numeric, or `null`).
### Example
The following query returns the `area` property of the Factory with the ID of 'A
## IS_STRING
-A type checking and casting function for determining whether an expression has a string value.
+A type checking function for determining whether a property has a string value.
This function is often combined with other predicates if the program processing the query results requires a string value, and you want to filter out cases where the property is not a string.
This function is often combined with other predicates if the program processing
### Arguments
-`<expression>`, an expression to check whether it is a string.
+`<property>`, a property to check whether it is a string.
### Returns
dns Private Dns Privatednszone https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/private-dns-privatednszone.md
Title: What is an Azure DNS private zone description: Overview of a private DNS zone -+ Previously updated : 04/09/2021- Last updated : 08/15/2022+ # What is a private Azure DNS zone
To understand how many private DNS zones you can create in a subscription and ho
## Restrictions
-* Single-labeled private DNS zones aren't supported. Your private DNS zone must have two or more labels. For example contoso.com has two labels separated by a dot. A private DNS zone can have a maximum of 34 labels.
+* Single-labeled private DNS zones aren't supported. Your private DNS zone must have two or more labels. For example, contoso.com has two labels separated by a dot. A private DNS zone can have a maximum of 34 labels.
* You can't create zone delegations (NS records) in a private DNS zone. If you intend to use a child domain, you can directly create the domain as a private DNS zone. Then you can link it to the virtual network without setting up a nameserver delegation from the parent zone.
+* Starting the week of August 28th, 2022, specific reserved zone names will be blocked from creation to prevent disruption of services. The following zone names are blocked:
+
+ | Public | Azure Government | Azure China |
+ | | | |
+ |azure.com | azure.us | azure.cn
+ |microsoft.com | microsoft.us | microsoft.cn
+ |trafficmanager.net | usgovtrafficmanager.net | trafficmanager.cn
+ |cloudapp.net | usgovcloudapp.net | chinacloudapp.cn
+ |azclient.ms | azclient.us | azclient.cn
+ |windows.net| usgovcloudapi.net | chinacloudapi.cn
+ |msidentity.com | msidentity.us | msidentity.cn
+ |core.windows.net | core.usgovcloudapi.net | core.chinacloudapi.cn
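As a hedged illustration of these rules (the zone, resource group, and virtual network names below are examples, not part of the original article), a compliant zone has at least two labels and avoids the reserved names above:

```azurecli
# Sketch: create a two-label private DNS zone and link it to a virtual network.
az network private-dns zone create \
  --resource-group myResourceGroup \
  --name private.contoso.com

az network private-dns link vnet create \
  --resource-group myResourceGroup \
  --zone-name private.contoso.com \
  --name myVNetLink \
  --virtual-network myVNet \
  --registration-enabled false
```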
## Next steps
event-grid Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/policy-reference.md
Title: Built-in policy definitions for Azure Event Grid description: Lists Azure Policy built-in policy definitions for Azure Event Grid. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/08/2022 Last updated : 08/16/2022
event-grid Publish Iot Hub Events To Logic Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/publish-iot-hub-events-to-logic-apps.md
Title: Tutorial - Use IoT Hub events to trigger Azure Logic Apps description: This tutorial shows how to use the event routing service of Azure Event Grid, create automated processes to perform Azure Logic Apps actions based on IoT Hub events. -+ Last updated 09/14/2020-+
Next, create a logic app and add an HTTP event grid trigger that processes reque
1. In the [Azure portal](https://portal.azure.com), select **Create a resource**, then type "logic app" in the search box and select return. Select **Logic App** from the results.
- ![Create logic app](./media/publish-iot-hub-events-to-logic-apps/select-logic-app.png)
+ :::image type="content" source="./media/publish-iot-hub-events-to-logic-apps/select-logic-app.png" alt-text="Screenshot of how to select the logic app from a list of resources." lightbox="./media/publish-iot-hub-events-to-logic-apps/select-logic-app.png":::
1. On the next screen, select **Create**.
-1. Give your logic app a name that's unique in your subscription, then select the same subscription, resource group, and location as your IoT hub.
+1. Give your logic app a unique name in your subscription, then select the same subscription, resource group, and location as your IoT hub. Choose the **Consumption** plan type.
- ![Fields for create logic app](./media/publish-iot-hub-events-to-logic-apps/create-logic-app-fields.png)
+ :::image type="content" source="./media/publish-iot-hub-events-to-logic-apps/create-logic-app-fields.png" alt-text="Screenshot of how to configure your logic app." lightbox="./media/publish-iot-hub-events-to-logic-apps/create-logic-app-fields.png":::
1. Select **Review + create**.
Next, create a logic app and add an HTTP event grid trigger that processes reque
1. In the Logic Apps Designer, page down to see **Templates**. Choose **Blank Logic App** so that you can build your logic app from scratch.
+ :::image type="content" source="./media/publish-iot-hub-events-to-logic-apps/logic-app-designer-template.png" alt-text="Screenshot of the Logic App Designer templates." lightbox="./media/publish-iot-hub-events-to-logic-apps/logic-app-designer-template.png":::
+ ### Select a trigger A trigger is a specific event that starts your logic app. For this tutorial, the trigger that sets off the workflow is receiving a request over HTTP.
A trigger is a specific event that starts your logic app. For this tutorial, the
![Use sample payload](./media/publish-iot-hub-events-to-logic-apps/sample-payload.png)
+1. Copy the `json` below and replace the placeholder values `<>` with your own.
+ 1. Paste the *Device connected event schema* JSON into the text box, then select **Done**: ```json [{ "id": "f6bbf8f4-d365-520d-a878-17bf7238abd8",
- "topic": "/SUBSCRIPTIONS/<subscription ID>/RESOURCEGROUPS/<resource group name>/PROVIDERS/MICROSOFT.DEVICES/IOTHUBS/<hub name>",
+ "topic": "/SUBSCRIPTIONS/<azure subscription ID>/RESOURCEGROUPS/<resource group name>/PROVIDERS/MICROSOFT.DEVICES/IOTHUBS/<hub name>",
"subject": "devices/LogicAppTestDevice", "eventType": "Microsoft.Devices.DeviceConnected", "eventTime": "2018-06-02T19:17:44.4383997Z",
A trigger is a specific event that starts your logic app. For this tutorial, the
"sequenceNumber": "000000000000000001D4132452F67CE200000002000000000000000000000001" },
- "hubName": "egtesthub1",
+ "hubName": "<hub name>",
"deviceId": "LogicAppTestDevice", "moduleId" : "DeviceModuleID" },
Actions are any steps that occur after the trigger starts the logic app workflow
Your email template may look like this example:
- ![Fill out email information](./media/publish-iot-hub-events-to-logic-apps/email-content.png)
+ :::image type="content" source="./media/publish-iot-hub-events-to-logic-apps/email-content.png" alt-text="Screenshot of how to create an event email in the template." lightbox="./media/publish-iot-hub-events-to-logic-apps/email-content.png":::
1. Select **Save** in the Logic Apps Designer.
In this section, you configure your IoT Hub to publish events as they occur.
When you're done, the pane should look like the following example:
- ![Sample event subscription form](./media/publish-iot-hub-events-to-logic-apps/subscription-form.png)
+ :::image type="content" source="./media/publish-iot-hub-events-to-logic-apps/subscription-form.png" alt-text="Screenshot of your 'Create Event Subscription' page in the Azure portal." lightbox="./media/publish-iot-hub-events-to-logic-apps/subscription-form.png":::
1. Select **Create**.
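The tutorial uses the portal, but as a hedged alternative sketch the same event subscription can be created with the Azure CLI. The IoT hub resource ID and the logic app's HTTP POST URL are placeholders, and the event type matches the `Microsoft.Devices.DeviceConnected` sample payload shown earlier.

```azurecli
# Sketch only: subscribe the logic app's HTTP trigger URL to IoT Hub device
# connection events. Add more --included-event-types values as needed.
az eventgrid event-subscription create \
  --name "device-connection-events" \
  --source-resource-id "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Devices/IotHubs/<hub-name>" \
  --endpoint "<logic-app-http-post-url>" \
  --endpoint-type webhook \
  --included-event-types Microsoft.Devices.DeviceConnected
```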
Test your logic app by quickly simulating a device connection using the Azure CL
az iot hub device-identity create --device-id simDevice --hub-name {YourIoTHubName} ```
+ This could take a minute. You'll see a `json` printout once it's created.
+ 1. Run the following command to simulate connecting your device to IoT Hub and sending telemetry: ```azurecli
event-hubs Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/policy-reference.md
Title: Built-in policy definitions for Azure Event Hubs description: Lists Azure Policy built-in policy definitions for Azure Event Hubs. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/08/2022 Last updated : 08/16/2022
frontdoor Create Front Door Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/create-front-door-portal.md
na Previously updated : 03/22/2022 Last updated : 08/15/2022 #Customer intent: As an IT admin, I want to direct user traffic to ensure high availability of web applications.
governance Get Compliance Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/how-to/get-compliance-data.md
Title: Get policy compliance data description: Azure Policy evaluations and effects determine compliance. Learn how to get the compliance details of your Azure resources.-+ Last updated 08/05/2022 -+ # Get compliance data of Azure resources
governance Built In Initiatives https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/built-in-initiatives.md
Title: List of built-in policy initiatives description: List built-in policy initiatives for Azure Policy. Categories include Regulatory Compliance, Guest Configuration, and more. Previously updated : 08/08/2022 Last updated : 08/16/2022
governance Built In Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/built-in-policies.md
Title: List of built-in policy definitions description: List built-in policy definitions for Azure Policy. Categories include Tags, Regulatory Compliance, Key Vault, Kubernetes, Guest Configuration, and more. Previously updated : 08/08/2022 Last updated : 08/16/2022
hdinsight Enterprise Security Package https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/enterprise-security-package.md
Title: Enterprise Security Package for Azure HDInsight
description: Learn the Enterprise Security Package components and versions in Azure HDInsight. Previously updated : 05/08/2020 Last updated : 08/16/2022 # Enterprise Security Package for Azure HDInsight
hdinsight Apache Hadoop Use Hive Beeline https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hadoop/apache-hadoop-use-hive-beeline.md
This example is based on using the Beeline client from [an SSH connection](../hd
``` > [!NOTE] > Refer to "To HDInsight Enterprise Security Package (ESP) cluster using Kerberos" part in [Connect to HiveServer2 using Beeline or install Beeline locally to connect from your local](connect-install-beeline.md#to-hdinsight-enterprise-security-package-esp-cluster-using-kerberos) if you are using an Enterprise Security Package (ESP) enabled cluster
- >
- > Dropping an external table does **not** delete the data, only the table definition.
-
+
3. Beeline commands begin with a `!` character, for example `!help` displays help. However the `!` can be omitted for some commands. For example, `help` also works. There's `!sql`, which is used to execute HiveQL statements. However, HiveQL is so commonly used that you can omit the preceding `!sql`. The following two statements are equivalent:
hdinsight Hdinsight Administer Use Command Line https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-administer-use-command-line.md
description: Learn how to use the Azure CLI to manage Azure HDInsight clusters.
Previously updated : 02/26/2020 Last updated : 06/16/2022 # Manage Azure HDInsight clusters using Azure CLI
hdinsight Hdinsight Create Virtual Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-create-virtual-network.md
description: Learn how to create an Azure Virtual Network to connect HDInsight t
Previously updated : 05/12/2021 Last updated : 08/16/2022 # Create virtual networks for Azure HDInsight clusters
hdinsight Hdinsight Custom Ambari Db https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-custom-ambari-db.md
description: Learn how to create HDInsight clusters with your own custom Apache
Previously updated : 01/12/2021 Last updated : 08/16/2022 # Set up HDInsight clusters with a custom Ambari DB
hdinsight Hdinsight Multiple Clusters Data Lake Store https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-multiple-clusters-data-lake-store.md
description: Learn how to use more than one HDInsight cluster with a single Data
Previously updated : 12/18/2019 Last updated : 08/16/2022 # Use multiple HDInsight clusters with an Azure Data Lake Storage account
Set read-execute permissions for **others** through the hierarchy, for example,
## See also - [Quickstart: Set up clusters in HDInsight](./hdinsight-hadoop-provision-linux-clusters.md)-- [Use Azure Data Lake Storage Gen2 with Azure HDInsight clusters](hdinsight-hadoop-use-data-lake-storage-gen2.md)
+- [Use Azure Data Lake Storage Gen2 with Azure HDInsight clusters](hdinsight-hadoop-use-data-lake-storage-gen2.md)
hdinsight Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/policy-reference.md
Title: Built-in policy definitions for Azure HDInsight description: Lists Azure Policy built-in policy definitions for Azure HDInsight. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/08/2022 Last updated : 08/16/2022
healthcare-apis Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/policy-reference.md
Title: Built-in policy definitions for Azure API for FHIR description: Lists Azure Policy built-in policy definitions for Azure API for FHIR. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/08/2022 Last updated : 08/16/2022
healthcare-apis Configure Export Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/configure-export-data.md
Previously updated : 08/03/2022 Last updated : 08/12/2022 # Configure export settings and set up a storage account
-FHIR service supports the $export command that allows you to export the data out of the FHIR service account to a storage account.
+The FHIR service supports the `$export` operation [specified by HL7](https://hl7.org/fhir/uv/bulkdata/export/index.html) for exporting FHIR data from a FHIR server. In the FHIR service implementation, calling the `$export` endpoint causes the FHIR service to export data into a pre-configured Azure storage account.
-The three steps below are used in configuring export data in the FHIR service:
+There are three steps in setting up the `$export` operation for the FHIR service:
-- Enable managed identity for the FHIR service.-- Create an Azure storage account or use an existing storage account, and then grant permissions to the FHIR service to access them.-- Select the storage account in the FHIR service as the destination.
+- Enable a managed identity for the FHIR service.
+- Configure a new or existing Azure Data Lake Storage Gen2 (ADLS Gen2) account and give permission for the FHIR service to access the account.
+- Set the ADLS Gen2 account as the export destination for the FHIR service.
-## Enable managed identity on the FHIR service
+## Enable managed identity for the FHIR service
-The first step in configuring the FHIR service for export is to enable system wide managed identity on the service, which will be used to grant the service to access the storage account. For more information about managed identities in Azure, see [About managed identities for Azure resources](../../active-directory/managed-identities-azure-resources/overview.md).
+The first step in configuring your environment for FHIR data export is to enable a system-wide managed identity for the FHIR service. This managed identity is used to authenticate the FHIR service to allow access to the ADLS Gen2 account during an `$export` operation. For more information about managed identities in Azure, see [About managed identities for Azure resources](../../active-directory/managed-identities-azure-resources/overview.md).
-In this step, browse to your FHIR service in the Azure portal, and select the **Identity** blade. Select the **Status** option to **On** , and then select **Save**. **Yes** and **No** buttons will display. Select **Yes** to enable the managed identity for FHIR service. Once the system identity has been enabled, you'll see a system assigned GUID value.
+In this step, browse to your FHIR service in the Azure portal and select the **Identity** blade. Set the **Status** option to **On**, and then click **Save**. When the **Yes** and **No** buttons display, select **Yes** to enable the managed identity for the FHIR service. Once the system identity has been enabled, you'll see an **Object (principal) ID** value for your FHIR service.
[![Enable Managed Identity](media/export-data/fhir-mi-enabled.png)](media/export-data/fhir-mi-enabled.png#lightbox)
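If you prefer to confirm the identity from the command line, here's a hedged Azure CLI sketch (all resource ID segments are placeholders; the provider path matches the one used later in this article):

```azurecli
# A sketch, not from the article: show the system-assigned identity on the FHIR service
# and note the principalId for the role assignment in the next section.
az resource show \
  --ids /subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.HealthcareApis/workspaces/<workspace-name>/fhirservices/<fhir-service-name> \
  --query identity
```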
-## Assign permissions to the FHIR service to access the storage account
+## Give permission in the storage account for FHIR service access
-1. Select **Access control (IAM)**.
+1. Go to your ADLS Gen2 storage account in the Azure portal.
-1. Select **Add > Add role assignment**. If the **Add role assignment** option is grayed out, ask your Azure administrator to assign you permission to perform this task.
+2. Select **Access control (IAM)**.
+
+3. Select **Add > Add role assignment**. If the **Add role assignment** option is grayed out, ask your Azure administrator for help with this step.
:::image type="content" source="../../../includes/role-based-access-control/media/add-role-assignment-menu-generic.png" alt-text="Screenshot that shows Access control (IAM) page with Add role assignment menu open.":::
-1. On the **Role** tab, select the [Storage Blob Data Contributor](../../role-based-access-control/built-in-roles.md#storage-blob-data-contributor) role.
+4. On the **Role** tab, select the [Storage Blob Data Contributor](../../role-based-access-control/built-in-roles.md#storage-blob-data-contributor) role.
[![Screen shot showing user interface of Add role assignment page.](../../../includes/role-based-access-control/media/add-role-assignment-page.png)](../../../includes/role-based-access-control/media/add-role-assignment-page.png#lightbox)
-1. On the **Members** tab, select **Managed identity**, and then select **Select members**.
+5. On the **Members** tab, select **Managed identity**, and then click **Select members**.
-1. Select your Azure subscription.
+6. Select your Azure subscription.
-1. Select **System-assigned managed identity**, and then select the FHIR service.
+7. Select **System-assigned managed identity**, and then select the managed identity that you enabled earlier for your FHIR service.
-1. On the **Review + assign** tab, select **Review + assign** to assign the role.
+8. On the **Review + assign** tab, click **Review + assign** to assign the **Storage Blob Data Contributor** role to your FHIR service.
For more information about assigning roles in the Azure portal, see [Azure built-in roles](../../role-based-access-control/role-assignments-portal.md).
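If you'd rather script the role assignment than use the portal, here's a hedged Azure CLI sketch (all values are placeholders; `<principal-id>` is the **Object (principal) ID** shown on the FHIR service's **Identity** blade):

```azurecli
# A sketch, not from the article: grant the FHIR service's managed identity
# Storage Blob Data Contributor on the export storage account.
az role assignment create \
  --assignee-object-id <principal-id> \
  --assignee-principal-type ServicePrincipal \
  --role "Storage Blob Data Contributor" \
  --scope /subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/<storage-account-name>
```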
-Now you're ready to select the storage account in the FHIR service as a default storage account for export.
+Now you're ready to configure the FHIR service with the ADLS Gen2 account as the default storage account for export.
-## Specify the export storage account for FHIR service
+## Specify the storage account for FHIR service export
-The final step is to assign the Azure storage account that the FHIR service will use to export the data to.
+The final step is to specify the ADLS Gen2 account that the FHIR service will use when exporting data.
> [!NOTE]
-> If you haven't assigned storage access permissions to the FHIR service, the export operations ($export) will fail.
+> In the storage account, if you haven't assigned the **Storage Blob Data Contributor** role to the FHIR service, the `$export` operation will fail.
+
+1. Go to your FHIR service settings.
+
+2. Select the **Export** blade.
-To do this, select the **Export** blade in FHIR service and select the storage account. To search for the storage account, enter its name in the text field. You can also search for your storage account by using the available filters **Name**, **Resource group**, or **Region**.
+3. Select the name of the storage account from the list. If you need to search for your storage account, use the **Name**, **Resource group**, or **Region** filters.
[![Screen shot showing user interface of FHIR Export Storage.](media/export-data/fhir-export-storage.png)](media/export-data/fhir-export-storage.png#lightbox)
-After you've completed this final step, you're ready to export the data using $export command.
+After you've completed this final configuration step, you're ready to export data from the FHIR service. See [How to export FHIR data](./export-data.md) for details on performing `$export` operations with the FHIR service.
> [!Note]
-> Only storage accounts in the same subscription as that for FHIR service are allowed to be registered as the destination for $export operations.
+> Only storage accounts in the same subscription as the FHIR service are allowed to be registered as the destination for `$export` operations.
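As a quick preview (a sketch, not from the article; see the linked article above for the full request requirements), a system-level export into a container you've created beforehand looks like this, where `{{fhirurl}}` and `{{containerName}}` are placeholders:

```
GET {{fhirurl}}/$export?_container={{containerName}}
```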
-## Use Azure storage accounts behind firewalls
+## Securing the FHIR service `$export` operation
-FHIR service supports a secure export operation. Choose one of the two options below:
+For securely exporting from the FHIR service to an ADLS Gen2 account, there are two main options:
-* Allowing FHIR service as a Microsoft Trusted Service to access the Azure storage account.
+* Allowing the FHIR service to access the storage account as a Microsoft Trusted Service.
-* Allowing specific IP addresses associated with FHIR service to access the Azure storage account.
-This option provides two different configurations depending on whether the storage account is in the same location as, or is in a different location from that of the FHIR service.
+* Allowing specific IP addresses associated with the FHIR service to access the storage account.
+This option permits two different configurations depending on whether or not the storage account is in the same Azure region as the FHIR service.
### Allowing FHIR service as a Microsoft Trusted Service
-Select a storage account from the Azure portal, and then select the **Networking** blade. Select **Selected networks** under the **Firewalls and virtual networks** tab.
+Go to your ADLS Gen2 account in the Azure portal and select the **Networking** blade. Select **Enabled from selected virtual networks and IP addresses** under the **Firewalls and virtual networks** tab.
:::image type="content" source="media/export-data/storage-networking-1.png" alt-text="Screenshot of Azure Storage Networking Settings." lightbox="media/export-data/storage-networking-1.png":::
-Select **Microsoft.HealthcareApis/workspaces** from the **Resource type** dropdown list and your workspace from the **Instance name** dropdown list.
+Select **Microsoft.HealthcareApis/workspaces** from the **Resource type** dropdown list and then select your workspace from the **Instance name** dropdown list.
-Under the **Exceptions** section, select the box **Allow trusted Microsoft services to access this storage account** and save the setting.
+Under the **Exceptions** section, select the box **Allow Azure services on the trusted services list to access this storage account**. Make sure to click **Save** to retain the settings.
:::image type="content" source="media/export-data/exceptions.png" alt-text="Allow trusted Microsoft services to access this storage account.":::
-Next, specify the FHIR service instance in the selected workspace instance for the storage account using the PowerShell command.
+Next, run the following PowerShell command to install the `Az.Storage` PowerShell module in your local environment. This will allow you to configure your Azure storage account(s) using PowerShell.
-```
+```PowerShell
+Install-Module Az.Storage -Repository PsGallery -AllowClobber -Force
+```
+
+Now, use the PowerShell command below to set the selected FHIR service instance as a trusted resource for the storage account. Make sure that all listed parameters are defined in your PowerShell environment.
+
+Note that you'll need to run the `Add-AzStorageAccountNetworkRule` command as an administrator in your local environment. For more information, see [Configure Azure Storage firewalls and virtual networks](../../storage/common/storage-network-security.md).
+
+```PowerShell
$subscription="xxx" $tenantId = "xxx" $resourceGroupName = "xxx" $storageaccountName = "xxx" $workspacename="xxx" $fhirname="xxx"
-$resourceId = "/subscriptions/$subscription/resourceGroups/$resourcegroup/providers/Microsoft.HealthcareApis/workspaces/$workspacename/fhirservices/$fhirname"
+$resourceId = "/subscriptions/$subscription/resourceGroups/$resourceGroupName/providers/Microsoft.HealthcareApis/workspaces/$workspacename/fhirservices/$fhirname"
Add-AzStorageAccountNetworkRule -ResourceGroupName $resourceGroupName -Name $storageaccountName -TenantId $tenantId -ResourceId $resourceId ```
-You can see that the networking setting for the storage account shows **two selected** in the **Instance name** dropdown list. One is linked to the workspace instance and the second is linked to the FHIR service instance.
+After running this command, in the **Firewall** section under **Resource instances** you will see **2 selected** in the **Instance name** dropdown list. These are the names of the workspace instance and FHIR service instance that you just registered as Microsoft Trusted Resources.
:::image type="content" source="media/export-data/storage-networking-2.png" alt-text="Screenshot of Azure Storage Networking Settings with resource type and instance names." lightbox="media/export-data/storage-networking-2.png":::
-Note that you'll need to install "Add-AzStorageAccountNetworkRule" using an administrator account. For more information, see [Configure Azure Storage firewalls and virtual networks](../../storage/common/storage-network-security.md)
-
-`
-Install-Module Az.Storage -Repository PsGallery -AllowClobber -Force
-`
-
-You're now ready to export FHIR data to the storage account securely. Note that the storage account is on selected networks and isn't publicly accessible. To access the files, you can either enable and use private endpoints for the storage account, or enable all networks for the storage account to access the data there if possible.
-
-> [!IMPORTANT]
-> The user interface will be updated later to allow you to select the Resource type for FHIR service and a specific service instance.
+You're now ready to securely export FHIR data to the storage account. Note that the storage account is on selected networks and isn't publicly accessible. To securely access the files, you can enable private endpoints for the storage account.
-### Allowing specific IP addresses for the Azure storage account in a different region
+### Allowing specific IP addresses from other Azure regions to access the Azure storage account
-Select **Networking** of the Azure storage account from the
-portal.
+In the Azure portal, go to the ADLS Gen2 account and select the **Networking** blade.
-Select **Selected networks**. Under the Firewall section, specify the IP address in the **Address range** box. Add IP ranges to
-allow access from the internet or your on-premises networks. You can
-find the IP address in the table below for the Azure region where the
-FHIR service is provisioned.
+Select **Enabled from selected virtual networks and IP addresses**. Under the Firewall section, specify the IP address in the **Address range** box. Add IP ranges to allow access from the internet or your on-premises networks. You can find the IP address in the table below for the Azure region where the FHIR service is provisioned.
|**Azure Region** |**Public IP Address** | |:-|:-|
FHIR service is provisioned.
> [!NOTE] > The above steps are similar to the configuration steps described in the document **Converting your data to FHIR**. For more information, see [Configure ACR firewall](./convert-data.md#configure-acr-firewall).
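If you'd rather script the firewall rule than use the portal, here's a hedged Azure CLI sketch (the IP address is a placeholder for the value from the region table above):

```azurecli
# A sketch, not from the article: add the FHIR service's regional public IP address
# to the storage account firewall.
az storage account network-rule add \
  --resource-group <resource-group> \
  --account-name <storage-account-name> \
  --ip-address <public-ip-for-your-region>
```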
-### Allowing specific IP addresses for the Azure storage account in the same region
+### Allowing specific IP addresses to access the Azure storage account in the same region
-The configuration process is the same as above except a specific IP
-address range in Classless Inter-Domain Routing (CIDR) format is used instead, 100.64.0.0/10. The reason why the IP address range, which includes 100.64.0.0 – 100.127.255.255, must be specified is because the actual IP address used by the service varies, but will be within the range, for each $export request.
+The configuration process for IP addresses in the same region is the same as above, except that a specific IP address range in Classless Inter-Domain Routing (CIDR) format is used instead: 100.64.0.0/10. This range (100.64.0.0 – 100.127.255.255) must be specified because a different IP address from within it is allocated to the FHIR service each time an `$export` request is made.
> [!Note]
-> It is possible that a private IP address within the range of 10.0.2.0/24 may be used instead. In that case, the $export operation will not succeed. You can retry the $export request, but there is no guarantee that an IP address within the range of 100.64.0.0/10 will be used next time. That's the known networking behavior by design. The alternative is to configure the storage account in a different region.
+> It is possible that a private IP address within the range of 10.0.2.0/24 may be used, but there is no guarantee that the `$export` operation will succeed in such a case. You can retry if the `$export` request fails, but until an IP address within the range of 100.64.0.0/10 is used, the request will not succeed. This network behavior for IP address ranges is by design. The alternative is to configure the storage account in a different region.
## Next steps
-In this article, you learned about the three steps in configuring export settings that allow you to export data out of FHIR service account to a storage account. For more information about the Bulk Export feature that allows data to be exported from the FHIR service, see
+In this article, you learned about the three steps in configuring your environment to allow export of data from your FHIR service to an Azure storage account. For more information about Bulk Export capabilities in the FHIR service, see
>[!div class="nextstepaction"] >[How to export FHIR data](export-data.md)
healthcare-apis Configure Import Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/configure-import-data.md
Copy the URL as the request URL and make the following changes to the JSON body:
- Set initialImportMode in importConfiguration to **true** - Remove provisioningState.
-[ ![Screenshot of the importer configuration code example](media/bulk-import/importer-url-and-body.png) ](media/bulk-import/importer-url-and-body.png#lightbox)
+[ ![Screenshot of the importer configuration code example](media/bulk-import/import-url-and-body.png) ](media/bulk-import/import-url-and-body.png#lightbox)
After you've completed this final step, you're ready to import data using $import.
+You can also use the **Deploy to Azure** button below to open a custom Resource Manager template that updates the configuration for $import.
+
+ [![Deploy to Azure Button](https://aka.ms/deploytoazurebutton)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fiotc-device-bridge%2Fmaster%2Fazuredeploy.json)
+ ## Next steps In this article, you've learned that the FHIR service supports the $import operation and how it allows you to import data into the FHIR service from a storage account. You also learned about the three steps used in configuring import settings in the FHIR service. For more information about converting data to FHIR, configuring export settings to set up a storage account, and moving data to Azure Synapse, see
In this article, you've learned the FHIR service supports $import operation and
>[!div class="nextstepaction"] >[Copy data from FHIR service to Azure Synapse Analytics](copy-to-synapse.md)
-FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
+FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
healthcare-apis Convert Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/convert-data.md
Previously updated : 08/02/2022 Last updated : 08/15/2022
The `$convert-data` custom endpoint in the FHIR service enables converting healt
## Using the `$convert-data` endpoint
-The `$convert-data` operation is integrated into the FHIR service as a RESTful API action. Calling the `$convert-data` endpoint causes the FHIR service to perform a conversion on health data sent in an API request:
+The `$convert-data` operation is integrated into the FHIR service as a RESTful API action. You can call the `$convert-data` endpoint as follows:
`POST {{fhirurl}}/$convert-data`
-The health data is delivered to the FHIR service in the body of the `$convert-data` request. If the request is successful, the FHIR service will return a FHIR `Bundle` response with the data converted to FHIR.
+The health data for conversion is delivered to the FHIR service in the body of the `$convert-data` request. If the request is successful, the FHIR service will return a FHIR `Bundle` response with the data converted to FHIR.
### Parameters Resource
healthcare-apis De Identified Export https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/de-identified-export.md
Title: Exporting de-identified data for FHIR service
+ Title: Using the FHIR service to export de-identified data
description: This article describes how to set up and use de-identified export Previously updated : 06/06/2022 Last updated : 08/15/2022 # Exporting de-identified data > [!Note]
-> Results when using the de-identified export will vary based on factors such as data inputted, and functions selected by the customer. Microsoft is unable to evaluate the de-identified export outputs or determine the acceptability for customer's use cases and compliance needs. The de-identified export is not guaranteed to meet any specific legal, regulatory, or compliance requirements.
+> Results when using the FHIR service's de-identified export will vary based on the nature of the data being exported and what de-id functions are in use. Microsoft is unable to evaluate de-identified export outputs or determine the acceptability for customers' use cases and compliance needs. The FHIR service's de-identified export is not guaranteed to meet any specific legal, regulatory, or compliance requirements.
-The $export command can also be used to export de-identified data from the FHIR server. It uses the anonymization engine from [FHIR tools for anonymization](https://github.com/microsoft/FHIR-Tools-for-Anonymization), and takes anonymization config details in query parameters. You can create your own anonymization config file or use the [sample config file](https://github.com/microsoft/Tools-for-Health-Data-Anonymization/blob/master/docs/FHIR-anonymization.md#sample-configuration-file) for HIPAA Safe Harbor method as a starting point.
+ The FHIR service is able to de-identify data on export when running an `$export` operation. For de-identified export, the FHIR service uses the anonymization engine from the [FHIR tools for anonymization](https://github.com/microsoft/FHIR-Tools-for-Anonymization) (OSS) project on GitHub. There is a [sample config file](https://github.com/microsoft/Tools-for-Health-Data-Anonymization/blob/master/docs/FHIR-anonymization.md#sample-configuration-file) to help you get started redacting/transforming FHIR data fields that contain personally identifying information.
## Configuration file
-The anonymization engine comes with a sample configuration file to help meet the requirements of HIPAA Safe Harbor Method. The configuration file is a JSON file with four sections: `fhirVersion`, `processingErrors`, `fhirPathRules`, `parameters`.
+The anonymization engine comes with a sample configuration file to help you get started with [HIPAA Safe Harbor Method](https://www.hhs.gov/hipaa/for-professionals/privacy/special-topics/de-identification/index.html#safeharborguidance) de-id requirements. The configuration file is a JSON file with four properties: `fhirVersion`, `processingErrors`, `fhirPathRules`, `parameters`.
* `fhirVersion` specifies the FHIR version for the anonymization engine.
-* `processingErrors` specifies what action to take for the processing errors that may arise during the anonymization. You can _raise_ or _keep_ the exceptions based on your needs.
-* `fhirPathRules` specifies which anonymization method is to be used. The rules are executed in the order of appearance in the configuration file.
-* `parameters` sets rules for the anonymization behaviors specified in _fhirPathRules_.
+* `processingErrors` specifies what action to take for any processing errors that may arise during the anonymization. You can _raise_ or _keep_ the exceptions based on your needs.
+* `fhirPathRules` specifies which anonymization method to use. The rules are executed in the order they appear in the configuration file.
+* `parameters` sets additional controls for the anonymization behavior specified in _fhirPathRules_.
-Here's a sample configuration file for R4:
+Here's a sample configuration file for FHIR R4:
```json {
Here's a sample configuration file for R4:
} ```
-For more detailed information on each of these four sections of the configuration file, select [here](https://github.com/microsoft/Tools-for-Health-Data-Anonymization/blob/master/docs/FHIR-anonymization.md#configuration-file-format).
-## Using $export command for the de-identified data
- `https://<<FHIR service base URL>>/$export?_container=<<container_name>>&_anonymizationConfig=<<config file name>>&_anonymizationConfigEtag=<<ETag on storage>>`
+For detailed information on the settings within the configuration file, visit [here](https://github.com/microsoft/Tools-for-Health-Data-Anonymization/blob/master/docs/FHIR-anonymization.md#configuration-file-format).
+
+## Using the `$export` endpoint for de-identifying data
+
+The API call below demonstrates how to form a request for de-id on export from the FHIR service.
+
+```
+GET https://<<FHIR service base URL>>/$export?_container=<<container_name>>&_anonymizationConfig=<<config file name>>&_anonymizationConfigEtag=<<ETag on storage>>
+```
+
+You will need to create a container for the de-identified export in your ADLS Gen2 account and specify the `<<container_name>>` in the API request as shown above. Additionally, you will need to place the JSON config file with the anonymization rules inside the container and specify the `<<config file name>>` in the API request (see above).
+
+> [!Note]
+> It is common practice to name the container `anonymization`. The JSON file within the container is often named `anonymizationConfig.json`.
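Here's a hedged Azure CLI sketch (the account name is a placeholder; the container and file names follow the conventions noted above) for creating the container and uploading the configuration file:

```azurecli
# A sketch, not from the article: create the conventional "anonymization" container and
# upload the anonymization configuration file to the ADLS Gen2 account used for export.
az storage container create \
  --account-name <storage-account-name> \
  --name anonymization \
  --auth-mode login

az storage blob upload \
  --account-name <storage-account-name> \
  --container-name anonymization \
  --name anonymizationConfig.json \
  --file ./anonymizationConfig.json \
  --auth-mode login
```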
> [!Note]
-> Right now the FHIR service only supports de-identified export at the system level ($export).
+> Right now the FHIR service only supports de-identified export at the system level (`$export`).
|Query parameter | Example |Optionality| Description|
|---|---|---|---|
-| _\_anonymizationConfig_ |DemoConfig.json|Required for de-identified export |Name of the configuration file. See the configuration file format [here](https://github.com/microsoft/FHIR-Tools-for-Anonymization#configuration-file-format). This file should be kept inside a container named **anonymization** within the same Azure storage account that is configured as the export location. |
-| _\_anonymizationConfigEtag_|"0x8D8494A069489EC"|Optional for de-identified export|This is the Etag of the configuration file. You can get the Etag using Azure Storage Explorer from the blob property|
+| `_anonymizationConfig` |`anonymizationConfig.json`|Required for de-identified export |Name of the configuration file. See the configuration file format [here](https://github.com/microsoft/FHIR-Tools-for-Anonymization#configuration-file-format). This file should be kept inside a container named `anonymization` within the ADLS Gen2 account that is configured as the export location. |
+| `_anonymizationConfigEtag`|"0x8D8494A069489EC"|Optional for de-identified export|This is the Etag of the configuration file. You can get the Etag using Azure Storage Explorer from the blob property.|
> [!IMPORTANT]
-> Both raw export as well as de-identified export writes to the same Azure storage account specified as part of export configuration. It is recommended that you use different containers corresponding to different de-identified config and manage user access at the container level.
+> Both the raw export and de-identified export operations write to the same Azure storage account specified in the export configuration for the FHIR service. If you have need for multiple de-identification configurations, it is recommended that you create a different container for each configuration and manage user access at the container level.
## Next steps
-In this article, you've learned how to set up and use de-identified export. For more information about how to export FHIR data, see
+In this article, you've learned how to set up and use the de-identified export feature in the FHIR service. For more information about how to export FHIR data, see
>[!div class="nextstepaction"] >[Export data](export-data.md)
-FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
+FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
healthcare-apis Export Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/export-data.md
The bulk `$export` operation in the FHIR service allows users to export data as described in the [HL7 FHIR Bulk Data Access specification](https://hl7.org/fhir/uv/bulkdata/export/index.html).
-Before attempting to use `$export`, make sure that your FHIR service is configured to connect with an ADLS Gen2 storage account. For configuring export settings and creating an ADLS Gen2 storage account, refer to the [Configure settings for export](./configure-export-data.md) page.
+Before attempting to use `$export`, make sure that your FHIR service is configured to connect with an Azure Data Lake Storage Gen2 (ADLS Gen2) account. For configuring export settings and creating an ADLS Gen2 account, refer to the [Configure settings for export](./configure-export-data.md) page.
## Calling the `$export` endpoint
-After setting up the FHIR service to connect with an ADLS Gen2 storage account, you can call the `$export` endpoint and the FHIR service will export data into a blob storage container inside the storage account. The example request below exports all resources into a container specified by name (`{{containerName}}`). Note that the container in the ADLS Gen2 account must be created beforehand if you want to specify the `{{containerName}}` in the request.
+After setting up the FHIR service to connect with an ADLS Gen2 account, you can call the `$export` endpoint and the FHIR service will export data into a blob storage container inside the storage account. The example request below exports all resources into a container specified by name (`{{containerName}}`) within the ADLS Gen2 account. Note that the container in the ADLS Gen2 account must be created beforehand if you want to specify the `{{containerName}}` in the request.
``` GET {{fhirurl}}/$export?_container={{containerName}}
For general information about the FHIR `$export` API spec, please see the [HL7 F
**Jobs stuck in a bad state**
-In some situations, there's a potential for a job to be stuck in a bad state while attempting to `$export` data from the FHIR service. This can occur especially if the ADLS Gen2 storage account permissions haven't been set up correctly. One way to check the status of your `$export` operation is to go to your storage account's **Storage browser** and see if any `.ndjson` files are present in the export container. If the files aren't present and there are no other `$export` jobs running, then there's a possibility the current job is stuck in a bad state. In this case, you can cancel the `$export` job by calling the FHIR service API with a `DELETE` request. Later you can requeue the `$export` job and try again. Information about canceling an `$export` operation can be found in the [Bulk Data Delete Request](https://hl7.org/fhir/uv/bulkdata/export/index.html#bulk-data-delete-request) documentation from HL7.
+In some situations, there's a potential for a job to be stuck in a bad state while attempting to `$export` data from the FHIR service. This can occur especially if the ADLS Gen2 account permissions haven't been set up correctly. One way to check the status of your `$export` operation is to go to your storage account's **Storage browser** and see if any `.ndjson` files are present in the export container. If the files aren't present and there are no other `$export` jobs running, then there's a possibility the current job is stuck in a bad state. In this case, you can cancel the `$export` job by calling the FHIR service API with a `DELETE` request. Later you can requeue the `$export` job and try again. Information about canceling an `$export` operation can be found in the [Bulk Data Delete Request](https://hl7.org/fhir/uv/bulkdata/export/index.html#bulk-data-delete-request) documentation from HL7.
> [!NOTE] > In the FHIR service, the default time for an `$export` operation to idle in a bad state is 10 minutes before the service will stop the operation and move to a new job.
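As a sketch of the cancellation call (the path shown is an assumption; use the exact URL returned in the `Content-Location` header of your original `$export` response):

```
DELETE {{fhirurl}}/_operations/export/<export-job-id>
```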
In addition to checking the presence of exported files in your storage account,
### Exporting FHIR data to ADLS Gen2
-Currently the FHIR service supports `$export` to ADLS Gen2 storage accounts, with the following limitations:
+Currently the FHIR service supports `$export` to ADLS Gen2 accounts, with the following limitations:
- ADLS Gen2 provides [hierarchical namespaces](../../storage/blobs/data-lake-storage-namespace.md), yet there isn't a way to target `$export` operations to a specific subdirectory within a container. The FHIR service is only able to specify the destination container for the export (where a new folder for each `$export` operation is created). - Once an `$export` operation is complete and all data has been written inside a folder, the FHIR service doesn't export anything to that folder again since subsequent exports to the same container will be inside a newly created folder.
The FHIR service supports the following query parameters for filtering exported
| `_outputFormat` | Yes | Currently supports three values to align to the FHIR Spec: `application/fhir+ndjson`, `application/ndjson`, or just `ndjson`. All export jobs will return `.ndjson` files and the passed value has no effect on code behavior. | | `_since` | Yes | Allows you to only export resources that have been modified since the time provided. | | `_type` | Yes | Allows you to specify which types of resources will be included. For example, `_type=Patient` would return only patient resources.|
-| `_typeFilter` | Yes | To request finer-grained filtering, you can use `_typeFilter` along with the `_type` parameter. The value of the `_typeFilter` parameter is a comma-separated list of FHIR queries that further restrict the results. |
+| `_typeFilter` | Yes | To request finer-grained filtering, you can use `_typeFilter` along with the `_type` parameter. The value of the `_typeFilter` parameter is a comma-separated list of FHIR queries that further limit the results. |
| `_container` | No | Specifies the name of the container in the configured storage account where the data should be exported. If a container is specified, the data will be exported into a folder in that container. If the container isn't specified, the data will be exported to a new container with an auto-generated name. | > [!Note]
-> Only storage accounts in the same subscription as that for the FHIR service are allowed to be registered as the destination for `$export` operations.
+> Only storage accounts in the same subscription as the FHIR service are allowed to be registered as the destination for `$export` operations.
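As an illustration (a sketch with placeholder values, not from the article), the filters above can be combined in a single request:

```
GET {{fhirurl}}/$export?_type=Patient,Observation&_since=2022-05-01T00:00:00Z&_container={{containerName}}
```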
## Next steps
healthcare-apis Import Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/import-data.md
Last updated 06/06/2022
-# Bulk-import FHIR data (Preview)
+# Bulk-import FHIR data
The bulk-import feature enables importing Fast Healthcare Interoperability Resources (FHIR&#174;) data to the FHIR server at high throughput using the $import operation. This feature is suitable for initial data load into the FHIR server.
The bulk-import feature enables importing Fast Healthcare Interoperability Resou
* Conditional references in resources aren't supported. * If multiple resources share the same resource ID, then only one of those resources will be imported at random and an error will be logged corresponding to the remaining resources sharing the ID. * The data to be imported must be in the same Tenant as that of the FHIR service.
-* Maximum number of files to be imported per operation is 1,000.
+* Maximum number of files to be imported per operation is 10,000.
## Using $import operation
Below are some error codes you may encounter and the solutions to help you resol
As illustrated in this article, $import is one way of doing bulk import. Another way is using an open-source solution, called [FHIR Bulk Loader](https://github.com/microsoft/fhir-loader). FHIR-Bulk Loader is an Azure Function App solution that provides the following capabilities for ingesting FHIR data: * Imports FHIR Bundles (compressed and non-compressed) and NDJSON files into a FHIR service
-* High Speed Parallel Event Grid that triggers from storage accounts or other event grid resources
+* High Speed Parallel Event Grid that triggers from storage accounts or other Event Grid resources
* Complete Auditing, Error logging and Retry for throttled transactions ## Next steps
In this article, you've learned about how the Bulk import feature enables import
>[!div class="nextstepaction"] >[Copy data from Azure API for FHIR to Azure Synapse Analytics](copy-to-synapse.md)
-FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
+FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
healthcare-apis How To Use Custom Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/how-to-use-custom-functions.md
Title: How to use custom functions in the MedTech service - Azure Health Data Services
+ Title: How to use custom functions with the MedTech service device mapping - Azure Health Data Services
description: This article describes how to use custom functions with MedTech service device mapping. Previously updated : 08/05/2022 Last updated : 08/16/2022 # How to use custom functions
-Many functions are available when using **JmesPath** as the expression language. Besides the functions available as part of the JmesPath specification, many more custom functions may also be used. This article describes MedTech service-specific custom functions for use with the MedTech service device mapping during the device message normalization process.
+Many functions are available when using **JmesPath** as the expression language. Besides the functions available as part of the JmesPath specification, many more custom functions may also be used. This article describes the MedTech service-specific custom functions for use with the MedTech service [device mapping](how-to-use-device-mappings.md) during the device message [normalization](iot-data-flow.md#normalize) process.
> [!NOTE] >
Many functions are available when using **JmesPath** as the expression language.
>[!TIP] >
-> Check out the [IoMT Connector Data Mapper](https://github.com/microsoft/iomt-fhir/tree/master/tools/data-mapper) tool for editing, testing, and troubleshooting the MedTech service Device and FHIR destination mappings. Export mappings for uploading to the MedTech service in the Azure portal or use with the [open-source version](https://github.com/microsoft/iomt-fhir) of the MedTech service.
+> Check out the [IoMT Connector Data Mapper](https://github.com/microsoft/iomt-fhir/tree/master/tools/data-mapper) tool for editing, testing, and troubleshooting the MedTech service device and FHIR destination mappings. Export mappings for uploading to the MedTech service in the Azure portal or use with the [open-source version](https://github.com/microsoft/iomt-fhir) of the MedTech service.
## Function signature
return_type function_name(type $argname)
The signature indicates the valid types for the arguments. If an invalid type is passed in for an argument, an error will occur. > [!NOTE]
+>
> When math-related functions are done, the end result **must** be able to fit within a C# [long](/dotnet/csharp/language-reference/builtin-types/integral-numeric-types#characteristics-of-the-integral-types) value. If the end result is unable to fit within a C# long value, then a mathematical error will occur. ## Exception handling
healthcare-apis Iot Connector Machine Learning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/iot-connector-machine-learning.md
Previously updated : 03/25/2022 Last updated : 08/16/2022
In this article, we'll explore using the MedTech service and Azure Machine Learn
## MedTech service and Azure Machine Learning Service reference architecture
-MedTech service enables IoT devices seamless integration with Fast Healthcare Interoperability Resources (FHIR&#174;) services. This reference architecture is designed to accelerate adoption of Internet of Medical Things (IoMT) projects. This solution uses Azure Databricks for the Machine Learning (ML) compute. However, Azure ML Services with Kubernetes or a partner ML solution could fit into the Machine Learning Scoring Environment.
+The MedTech service enables seamless integration of IoT devices with Fast Healthcare Interoperability Resources (FHIR&#174;) services. This reference architecture is designed to accelerate adoption of Internet of Medical Things (IoMT) projects. This solution uses Azure Databricks for the Machine Learning (ML) compute. However, Azure ML Services with Kubernetes or a partner ML solution could fit into the Machine Learning Scoring Environment.
The four line colors show the different parts of the data journey.
The four line colors show the different parts of the data journey.
1. Data from IoT device or via device gateway sent to Azure IoT Hub/Azure IoT Edge. 2. Data from Azure IoT Edge sent to Azure IoT Hub. 3. Copy of raw IoT device data sent to a secure storage environment for device administration.
-4. PHI IoMT payload moves from Azure IoT Hub to the MedTech service. Multiple Azure services are represented by 1 MedTech service icon.
+4. PHI IoMT payload moves from Azure IoT Hub to the MedTech service. Multiple Azure services are represented by the MedTech service icon.
5. Three parts to number 5:
- a. MedTech service request Patient resource from FHIR service.
- b. FHIR service sends Patient resource back to the MedTech service.
- c. IoT Patient Observation is record in FHIR service.
+ a. The MedTech service requests the Patient resource from the FHIR service.
+ b. The FHIR service sends the Patient resource back to the MedTech service.
+ c. The IoT Patient Observation is recorded in the FHIR service.
**Machine Learning and AI Data Route ΓÇô Steps 6 through 11**
-6. Normalized ungrouped data stream sent to Azure Function (ML Input).
+6. Normalized ungrouped data stream sent to an Azure Function (ML Input).
7. Azure Function (ML Input) requests Patient resource to merge with IoMT payload. 8. IoMT payload with PHI is sent to an event hub for distribution to Machine Learning compute and storage. 9. PHI IoMT payload is sent to Azure Data Lake Storage Gen 2 for scoring observation over longer time windows.
In this article, you've learned about the MedTech service and Machine Learning s
>[!div class="nextstepaction"] >[MedTech service overview](iot-connector-overview.md)
-(FHIR&#174;) is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
+FHIR&#174; is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission.
healthcare-apis Iot Connector Power Bi https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/iot-connector-power-bi.md
Previously updated : 03/25/2021 Last updated : 08/16/2022
In this article, we'll explore using the MedTech service and Microsoft Power Bus
## MedTech service and Power BI reference architecture
-The reference architecture below shows the basic components of using Microsoft cloud services to enable Power BI on top of Internet of Medical Things (IoMT) and Fast Healthcare Interoperability Resources (FHIR&#174;) data.
+The reference architecture below shows the basic components of using the Microsoft cloud services to enable Power BI on top of Internet of Medical Things (IoMT) and Fast Healthcare Interoperability Resources (FHIR&#174;) data.
You can even embed Power BI dashboards inside the Microsoft Teams client to further enhance care team coordination. For more information on embedding Power BI in Teams, visit [here](/power-bi/collaborate-share/service-embed-report-microsoft-teams). :::image type="content" source="media/iot-concepts/iot-connector-power-bi.png" alt-text="Screenshot of the MedTech service and Power BI." lightbox="media/iot-concepts/iot-connector-power-bi.png":::
-MedTech service can ingest IoT data from most IoT devices or gateways whatever the location, data center, or cloud.
+The MedTech service can ingest IoT data from most IoT devices or gateways, regardless of location, data center, or cloud.
We do encourage the use of Azure IoT services to assist with device/gateway connectivity.
In this article, you've learned about the MedTech service and Power BI integrati
>[!div class="nextstepaction"] >[MedTech service overview](iot-connector-overview.md)
-(FHIR&#174;) is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
+FHIR&#174; is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission.
healthcare-apis Iot Connector Teams https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/iot-connector-teams.md
Previously updated : 03/25/2022 Last updated : 08/16/2022
In this article, we'll explore using the MedTech service and Microsoft Teams for
## MedTech service and Teams notifications reference architecture
-When combining MedTech service, a Fast Healthcare Interoperability Resources (FHIR&#174;) service, and Teams, you can enable multiple care solutions.
+When combining the MedTech service, a Fast Healthcare Interoperability Resources (FHIR&#174;) service, and Teams, you can enable multiple care solutions.
-Below is the MedTech service to Teams notifications conceptual architecture for enabling the MedTech service, FHIR, and Teams Patient App.
+Below is the MedTech service to Teams notifications conceptual architecture for enabling the MedTech service, the FHIR service, and the Teams Patient App.
You can even embed Power BI dashboards inside the Microsoft Teams client. For more information on embedding Power BI in Microsoft Teams, visit [here](/power-bi/collaborate-share/service-embed-report-microsoft-teams).
In this article, you've learned about the MedTech service and Teams notification
>[!div class="nextstepaction"] >[MedTech service overview](iot-connector-overview.md)
-(FHIR&#174;) is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
+FHIR&#174; is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission.
healthcare-apis Iot Data Flow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/iot-data-flow.md
Previously updated : 07/22/2022 Last updated : 08/16/2022
This article provides an overview of the MedTech service data flow. You'll learn about the different data processing stages within the MedTech service that transforms device data into Fast Healthcare Interoperability Resources (FHIR&#174;)-based [Observation](https://www.hl7.org/fhir/observation.html) resources.
-Data from health-related devices or medical devices flows through a path in which the MedTech service transforms data into FHIR, and then data is stored on and accessed from the FHIR service. The health data path follows these steps in this order: ingest, normalize, group, transform, and persist. Health data is retrieved from the device in the first step of ingestion. After the data is received, it's processed, or normalized per a user-selected/user-created schema template called the device mapping. Normalized health data is simpler to process and can be grouped. In the next step, health data is grouped into three Operate parameters. After the health data is normalized and grouped, it can be processed or transformed through FHIR destination mappings, and then saved or persisted on the FHIR service.
+Data from health-related devices or medical devices flows through a path in which the MedTech service transforms it into FHIR, and the resulting data is then stored on and accessed from the FHIR service. The health data path follows these steps in order: ingest, normalize, group, transform, and persist. Health data is retrieved from the device in the ingest step. After the data is received, it's normalized per a user-selected or user-created schema template called the device mapping. Normalized health data is simpler to process and can be grouped. In the next step, health data is grouped using three parameters: device identity, measurement type, and time period. After the health data is normalized and grouped, it's transformed through a FHIR destination mapping and then persisted on the FHIR service.
This article goes into more depth about each step in the data flow. The next steps are [Deploy the MedTech service using the Azure portal](deploy-iot-connector-in-azure.md) by using a device mapping (the normalization step) and a FHIR destination mapping (the transformation step).
This next section of the article describes the stages that IoMT (Internet of Med
Ingest is the first stage where device data is received into the MedTech service. The ingestion endpoint for device data is hosted on [Azure Event Hubs](../../event-hubs/index.yml). The Azure Event Hubs platform supports high scale and throughput, with the ability to receive and process millions of messages per second. It also enables the MedTech service to consume messages asynchronously, removing the need for devices to wait while device data gets processed. > [!NOTE]
+>
> JSON is the only supported format at this time for device data. ## Normalize
Group is the next stage where the normalized messages available from the previou
Grouping by device identity and measurement type enables use of the [SampledData](https://www.hl7.org/fhir/datatypes.html#SampledData) measurement type, which provides a concise way to represent a time-based series of measurements from a device in FHIR. The time period controls the latency at which Observation resources generated by the MedTech service are written to the FHIR service. > [!NOTE]
+>
> The time period value is defaulted to 15 minutes and cannot be configured for preview. ## Transform
In the Transform stage, grouped-normalized messages are processed through FHIR d
At this point, [Device](https://www.hl7.org/fhir/device.html) resource, along with its associated [Patient](https://www.hl7.org/fhir/patient.html) resource, is also retrieved from the FHIR service using the device identifier present in the message. These resources are added as a reference to the Observation resource being created. > [!NOTE]
+>
> All identity lookups are cached once resolved to decrease load on the FHIR service. If you plan on reusing devices with multiple patients, it's advised that you create a virtual device resource that is specific to the patient and send the virtual device identifier in the message payload. The virtual device can be linked to the actual device resource as a parent. If no Device resource for a given device identifier exists in the FHIR service, the outcome depends upon the value of `Resolution Type` set at the time of creation. When set to `Lookup`, the specific message is ignored, and the pipeline will continue to process other incoming messages. If set to `Create`, the MedTech service will create bare-bones Device and Patient resources on the FHIR service.
Once the Observation FHIR resource is generated in the Transform stage, the reso
## Next steps
-To learn how to create Device and FHIR destination mappings, see
+To learn how to create device and FHIR destination mappings, see
> [!div class="nextstepaction"] > [Device mappings](how-to-use-device-mappings.md)
healthcare-apis Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/release-notes.md
Title: Azure Health Data Services monthly releases description: This article provides details about the Azure Health Data Services monthly features and enhancements. -+ Last updated 08/09/2022-+
iot-central Howto Configure File Uploads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-configure-file-uploads.md
IoT Central lets you upload media and other files from connected devices to cloud storage. You configure the file upload capability in your IoT Central application, and then implement file uploads in your device code.
+To learn how to upload files by using the IoT Central REST API, see [How to use the IoT Central REST API to upload a file](../core/howto-upload-file-rest-api.md).
+ ## Prerequisites You must be an administrator in your IoT Central application to configure file uploads.
iot-central Howto Control Devices With Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-control-devices-with-rest-api.md
For the reference documentation for the IoT Central REST API, see [Azure IoT Cen
[!INCLUDE [iot-central-postman-collection](../../../includes/iot-central-postman-collection.md)]
+To learn how to control devices by using the IoT Central UI, see [Use properties in an Azure IoT Central solution](../core/howto-use-properties.md) and [How to use commands in an Azure IoT Central solution](../core/howto-use-commands.md).
+ ## Components and modules Components let you group and reuse device capabilities. To learn more about components and device models, see the [IoT Plug and Play modeling guide](../../iot-develop/concepts-modeling-guide.md).
iot-central Howto Create Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-create-analytics.md
Azure IoT Central provides rich analytics capabilities to analyze historical trends and correlate telemetry from your devices. To get started, select **Data explorer** on the left pane.
+To learn how to query devices by using the IoT Central REST API, see [How to use the IoT Central REST API to query devices](../core/howto-query-with-rest-api.md).
+ ## Understand the data explorer UI The analytics user interface has three main components:
iot-central Howto Create And Manage Applications Csp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-create-and-manage-applications-csp.md
To create an Azure IoT Central application, select **Build** in the left menu. C
![Create Application Page for CSPs Billing Info](media/howto-create-and-manage-applications-csp/image4-2.png)
-## Pricing plan
-
-You can only create applications that use a standard pricing plan as a CSP. To showcase Azure IoT Central to your customer, you can create an application that uses the free pricing plan separately. Learn more about the free and standard pricing plans on the [Azure IoT Central pricing page](https://azure.microsoft.com/pricing/details/iot-central/).
-
- ## Application name The name of your application is displayed on the **Application Manager** page and within each Azure IoT Central application. You can choose any name for your Azure IoT Central application. Choose a name that makes sense to you and to others in your organization.
iot-central Howto Create Iot Central Application https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-create-iot-central-application.md
Whichever approach you choose, the configuration options are the same, and the p
[!INCLUDE [Warning About Access Required](../../../includes/iot-central-warning-contribitorrequireaccess.md)]
+To learn how to manage IoT Central applications by using the IoT Central REST API, see [Use the REST API to create and manage IoT Central applications.](../core/howto-manage-iot-central-with-rest-api.md)
+ ## Options This section describes the available options when you create an IoT Central application. Depending on the method you choose, you might need to supply the options on a form or as command-line parameters: ### Pricing plans
-The *free* plan lets you create an IoT Central application to try for seven days. The free plan:
--- Doesn't require an Azure subscription.-- Can only be created and managed on the [Azure IoT Central](https://aka.ms/iotcentral) site.-- Lets you connect up to five devices.-- Can be upgraded to a standard plan if you want to keep your application.- The *standard* plans: -- Do require an Azure subscription. You should have at least **Contributor** access in your Azure subscription. If you created the subscription yourself, you're automatically an administrator with sufficient access. To learn more, see [What is Azure role-based access control?](../../role-based-access-control/overview.md).
+- You should have at least **Contributor** access in your Azure subscription. If you created the subscription yourself, you're automatically an administrator with sufficient access. To learn more, see [What is Azure role-based access control?](../../role-based-access-control/overview.md).
- Let you create and manage IoT Central applications using any of the available methods. - Let you connect as many devices as you need. You're billed by device. To learn more, see [Azure IoT Central pricing](https://azure.microsoft.com/pricing/details/iot-central/).-- Cannot be downgraded to a free plan, but can be upgraded or downgraded to other standard plans.
+- Can be upgraded or downgraded to other standard plans.
The following table summarizes the differences between the three standard plans:
The **My apps** page lists all the IoT Central applications you have access to.
## Copy an application
-You can create a copy of any application, minus any device instances, device data history, and user data. The copy uses a standard pricing plan that you'll be billed for. You can't create an application that uses the free pricing plan by copying an application.
+You can create a copy of any application, minus any device instances, device data history, and user data. The copy uses a standard pricing plan that you'll be billed for.
Select **Copy**. In the dialog box, enter the details for the new application. Then select **Copy** to confirm that you want to continue. To learn more about the fields in the form, see [Create an application](howto-create-iot-central-application.md).
iot-central Howto Create Organizations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-create-organizations.md
The following screenshot shows an organization hierarchy definition in IoT Centr
:::image type="content" source="media/howto-create-organization/organizations-definition.png" alt-text="Screenshot of organizations hierarchy definition." lightbox="media/howto-create-organization/organizations-definition.png":::
+To learn how to manage organizations by using the IoT Central REST API, see [How to use the IoT Central REST API to manage organizations.](../core/howto-manage-organizations-with-rest-api.md)
+ ## Create a hierarchy To start using organizations, you need to define your organization hierarchy. Each organization in the hierarchy acts as a logical container where you place devices, save dashboards and device groups, and invite users. To create your organizations, go to the **Permissions** section in your IoT Central application, select the **Organizations** tab, and select either **+ New** or use the context menu for an existing organization. To create one or many organizations at a time, select **+ Add another organization**:
iot-central Howto Edit Device Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-edit-device-template.md
To help you avoid any unintended consequences from editing a device template, th
To learn more about device templates and how to create one, see [What are device templates?](concepts-device-templates.md) and [Set up a device template](howto-set-up-template.md).
+To learn how to manage device templates by using the IoT Central REST API, see [How to use the IoT Central REST API to manage device templates.](../core/howto-manage-device-templates-with-rest-api.md)
+ ## Modify a device template Additive changes, such as adding a capability or interface to a model are non-breaking changes. You can make additive changes to a model at any stage of the development life cycle.
iot-central Howto Export Data Legacy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-export-data-legacy.md
Now that you have a destination to export data to, follow these steps to set up
4. Enter a name for the export. In the drop-down list box, select your **namespace**, or **Enter a connection string**.
- - You only see storage accounts, Event Hubs namespaces, and Service Bus namespaces in the same subscription as your IoT Central application. If you want to export to a destination outside of this subscription, choose **Enter a connection string** and see step 6.
- - For apps created using the free pricing plan, the only way to configure data export is through a connection string. Apps on the free pricing plan don't have an associated Azure subscription.
+ > [!Tip]
+ > You only see storage accounts, Event Hubs namespaces, and Service Bus namespaces in the same subscription as your IoT Central application. If you want to export to a destination outside of this subscription, choose **Enter a connection string** and see step 6.
![Create new Event Hub](media/howto-export-data-legacy/export-event-hub.png)
iot-central Howto Export To Blob Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-export-to-blob-storage.md
This article describes how to configure data export to send data to the Blob St
[!INCLUDE [iot-central-data-export](../../../includes/iot-central-data-export.md)]
+To learn how to manage data export by using the IoT Central REST API, see [How to use the IoT Central REST API to manage data exports.](../core/howto-manage-data-export-with-rest-api.md)
+ ## Set up a Blob Storage export destination
iot-central Howto Manage Data Export With Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-manage-data-export-with-rest-api.md
For the reference documentation for the IoT Central REST API, see [Azure IoT Cen
[!INCLUDE [iot-central-postman-collection](../../../includes/iot-central-postman-collection.md)]
+To learn how to manage data export by using the IoT Central UI, see [Export IoT data to Blob Storage.](../core/howto-export-to-blob-storage.md)
+ ## Data export You can use the IoT Central data export feature to stream telemetry, property changes, device connectivity events, device lifecycle events, and device template lifecycle events to destinations such as Azure Event Hubs, Azure Service Bus, Azure Blob Storage, and webhook endpoints.
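For illustration, the following Python sketch creates a webhook destination and a telemetry export through the data export REST API. Treat the api-version, the destination `type` string, and the payload property names as assumptions to verify against the data export REST reference; the application URL, token, and webhook URL are placeholders.

```python
# Minimal sketch: create a webhook destination, then an export that streams
# telemetry to it. Property names and api-version are assumptions to verify.
import requests

app_url = "https://<your-app>.azureiotcentral.com"          # placeholder
api_version = "2022-06-30-preview"                           # assumption; use a version your app supports
headers = {"Authorization": "<api-token-or-bearer-token>"}   # IoT Central API token or AAD bearer token

# Create (or replace) the destination.
requests.put(
    f"{app_url}/api/dataExport/destinations/my-webhook?api-version={api_version}",
    json={"displayName": "My webhook", "type": "webhook@v1", "url": "https://example.com/hook"},
    headers=headers,
).raise_for_status()

# Create (or replace) an export that sends telemetry to that destination.
requests.put(
    f"{app_url}/api/dataExport/exports/telemetry-export?api-version={api_version}",
    json={
        "displayName": "Telemetry export",
        "enabled": True,
        "source": "telemetry",
        "destinations": [{"id": "my-webhook"}],
    },
    headers=headers,
).raise_for_status()
```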
iot-central Howto Manage Device Templates With Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-manage-device-templates-with-rest-api.md
For the reference documentation for the IoT Central REST API, see [Azure IoT Cen
[!INCLUDE [iot-central-postman-collection](../../../includes/iot-central-postman-collection.md)]
+To learn how to manage device templates by using the IoT Central UI, see [How to set up device templates](../core/howto-set-up-template.md) and [How to edit device templates](../core/howto-edit-device-template.md)
+ ## Device templates A device template contains a device model, cloud property definitions, and view definitions. The REST API lets you manage the device model and cloud property definitions. Use the UI to create and manage views.
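For example, a minimal Python sketch (using the `requests` package) that lists the device templates in an application might look like the following. The application URL and token are placeholders, and the api-version value is an assumption to check against the REST reference.

```python
# Minimal sketch: list device templates through the IoT Central REST API.
import requests

app_url = "https://<your-app>.azureiotcentral.com"   # placeholder
api_version = "2022-07-31"                           # assumption; use a version your app supports
# An IoT Central API token already includes its "SharedAccessSignature" prefix;
# an AAD token would be passed as "Bearer <token>" instead.
headers = {"Authorization": "<api-token>"}

response = requests.get(
    f"{app_url}/api/deviceTemplates?api-version={api_version}", headers=headers
)
response.raise_for_status()
for template in response.json().get("value", []):
    print(template["@id"], template.get("displayName"))
```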
iot-central Howto Manage Devices In Bulk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-manage-devices-in-bulk.md
You can use Azure IoT Central to manage your connected devices at scale through jobs. Jobs let you do bulk updates to device and cloud properties and run commands. You can also use CSV files to import and export devices in bulk. This article shows you how to get started with using jobs in your own application and how to use the import and export features.
+To learn how to manage jobs by using the IoT Central REST API, see [How to use the IoT Central REST API to manage jobs.](../core/howto-manage-jobs-with-rest-api.md)
+ ## Create and run a job The following example shows you how to create and run a job to set the light threshold for a group of devices. You use the job wizard to create and run jobs. You can save a job to run later.
iot-central Howto Manage Devices Individually https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-manage-devices-individually.md
This article describes how you manage devices in your Azure IoT Central applicat
To learn how to manage custom groups of devices, see [Tutorial: Use device groups to analyze device telemetry](tutorial-use-device-groups.md).
+To learn how to manage devices by using the IoT Central REST API, see [How to use the IoT Central REST API to manage devices.](../core/howto-manage-devices-with-rest-api.md)
+ ## View your devices To view an individual device:
iot-central Howto Manage Devices With Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-manage-devices-with-rest-api.md
For the reference documentation for the IoT Central REST API, see [Azure IoT Cen
[!INCLUDE [iot-central-postman-collection](../../../includes/iot-central-postman-collection.md)]
+To learn how to manage devices by using the IoT Central UI, see [Manage individual devices in your Azure IoT Central application.](../core/howto-manage-devices-individually.md)
+ ## Devices REST API The IoT Central REST API lets you:
iot-central Howto Manage Iot Central With Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-manage-iot-central-with-rest-api.md
To use this API, you need a bearer token for the `management.azure.com` resource
az account get-access-token --resource https://management.azure.com ```
+To learn how to manage IoT Central applications by using the IoT Central UI, see [Create an IoT Central application.](../core/howto-create-iot-central-application.md)
+ ## List your applications To get a list of the IoT Central applications in a subscription:
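For illustration, here's a minimal Python sketch of that call. It assumes the `azure-identity` and `requests` packages and the `2021-06-01` api-version for the `Microsoft.IoTCentral` resource provider; verify the api-version against the control-plane REST reference.

```python
# Minimal sketch: list IoT Central applications in a subscription via
# Azure Resource Manager, reusing the Azure CLI login for the bearer token.
import requests
from azure.identity import AzureCliCredential

subscription_id = "<your-subscription-id>"  # placeholder
token = AzureCliCredential().get_token("https://management.azure.com/.default").token

response = requests.get(
    f"https://management.azure.com/subscriptions/{subscription_id}"
    "/providers/Microsoft.IoTCentral/iotApps?api-version=2021-06-01",
    headers={"Authorization": f"Bearer {token}"},
)
response.raise_for_status()
for app in response.json().get("value", []):
    print(app["name"], app["properties"].get("subdomain"))
```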
iot-central Howto Manage Jobs With Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-manage-jobs-with-rest-api.md
To learn how to create and manage jobs in the UI, see [Manage devices in bulk in
[!INCLUDE [iot-central-postman-collection](../../../includes/iot-central-postman-collection.md)]
+To learn how to manage jobs by using the IoT Central UI, see [Manage devices in bulk in your Azure IoT Central application.](../core/howto-manage-devices-in-bulk.md)
+ ## Job payloads Many of the APIs described in this article include a definition that looks like the following JSON snippet:
iot-central Howto Manage Organizations With Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-manage-organizations-with-rest-api.md
To learn more about organizations in IoT Central Application, see [Manage IoT Ce
[!INCLUDE [iot-central-postman-collection](../../../includes/iot-central-postman-collection.md)]
+To learn how to manage organizations by using the IoT Central UI, see [Manage IoT Central organizations.](../core/howto-create-organizations.md)
+ ## Organizations REST API The IoT Central REST API lets you:
iot-central Howto Manage Users Roles With Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-manage-users-roles-with-rest-api.md
For the reference documentation for the IoT Central REST API, see [Azure IoT Cen
[!INCLUDE [iot-central-postman-collection](../../../includes/iot-central-postman-collection.md)]
+To learn how to manage users and roles by using the IoT Central UI, see [Manage users and roles in your IoT Central application.](../core/howto-manage-users-roles.md)
+ ## Manage roles The REST API lets you list the roles defined in your IoT Central application. Use the following request to retrieve a list of role IDs from your application:
iot-central Howto Manage Users Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-manage-users-roles.md
This article describes how you can add, edit, and delete users in your Azure IoT
To access and use the **Permissions** section, you must be in the **App Administrator** role for an Azure IoT Central application or in a custom role that includes administration permissions. If you create an Azure IoT Central application, you're automatically added to the **App Administrator** role for that application.
+To learn how to manage users and roles by using the IoT Central REST API, see [How to use the IoT Central REST API to manage users and roles.](../core/howto-manage-users-roles-with-rest-api.md)
+ ## Add users Every user must have a user account before they can sign in and access an application. IoT Central currently supports Microsoft user accounts, Azure Active Directory accounts, and Azure Active Directory service principals. IoT Central doesn't currently support Azure Active Directory groups. To learn more, see [Microsoft account help](https://support.microsoft.com/products/microsoft-account?category=manage-account) and [Quickstart: Add new users to Azure Active Directory](../../active-directory/fundamentals/add-users-azure-active-directory.md).
iot-central Howto Query With Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-query-with-rest-api.md
For the reference documentation for the IoT Central REST API, see [Azure IoT Cen
[!INCLUDE [iot-central-postman-collection](../../../includes/iot-central-postman-collection.md)]
+To learn how to query devices by using the IoT Central UI, see [How to use data explorer to analyze device data.](../core/howto-create-analytics.md)
+ ## Run a query Use the following request to run a query:
iot-central Howto Set Up Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-set-up-template.md
The device template has the following sections:
To learn more, see [What are device templates?](concepts-device-templates.md).
+To learn how to manage device templates by using the IoT Central REST API, see [How to use the IoT Central REST API to manage device templates.](../core/howto-manage-device-templates-with-rest-api.md)
+ ## Create a device template You have several options to create device templates:
iot-central Howto Upload File Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-upload-file-rest-api.md
For the reference documentation for the IoT Central REST API, see [Azure IoT Cen
[!INCLUDE [iot-central-postman-collection](../../../includes/iot-central-postman-collection.md)]
+To learn how to upload files by using the IoT Central UI, see [How to configure file uploads.](../core/howto-configure-file-uploads.md)
+ ## Prerequisites To test the file upload, install the following prerequisites in your local development environment:
iot-central Howto Use Commands https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-use-commands.md
A device can:
By default, commands expect a device to be connected and fail if the device can't be reached. If you select the **Queue if offline** option in the device template UI a command can be queued until a device comes online. These *offline commands* are described in a separate section later in this article.
+To learn how to manage commands by using the IoT Central REST API, see [How to use the IoT Central REST API to control devices.](../core/howto-control-devices-with-rest-api.md)
+ ## Define your commands Standard commands are sent to a device to instruct the device to do something. A command can include parameters with additional information. For example, a command to open a valve on a device could have a parameter that specifies how much to open the valve. Commands can also receive a return value when the device completes the command. For example, a command that asks a device to run some diagnostics could receive a diagnostics report as a return value.
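As an illustration of the device side, the following Python sketch (assuming the `azure-iot-device` SDK and a device connection string, for example one obtained after DPS provisioning) responds to a hypothetical `getDiagnostics` command and returns a payload.

```python
# Minimal sketch: handle a direct method-based command on the device and
# return a result payload. "getDiagnostics" is a hypothetical command name
# that would be defined in the device template.
from azure.iot.device import IoTHubDeviceClient, MethodResponse

client = IoTHubDeviceClient.create_from_connection_string("<device-connection-string>")

def handle_command(request):
    if request.name == "getDiagnostics":
        response = MethodResponse.create_from_method_request(
            request, status=200, payload={"status": "healthy", "uptimeHours": 42}
        )
    else:
        # Unknown command: report "not found" back to the caller.
        response = MethodResponse.create_from_method_request(request, status=404)
    client.send_method_response(response)

client.connect()
client.on_method_request_received = handle_command
input("Listening for commands; press Enter to exit.\n")
client.shutdown()
```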
iot-central Howto Use Properties https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-use-properties.md
Properties represent point-in-time values. For example, a device can use a prope
You can also define cloud properties in an Azure IoT Central application. Cloud property values are never exchanged with a device and are out of scope for this article.
+To learn how to manage properties by using the IoT Central REST API, see [How to use the IoT Central REST API to control devices.](../core/howto-control-devices-with-rest-api.md)
+ ## Define your properties Properties are data fields that represent the state of your device. Use properties to represent the durable state of the device, such as the on/off state of a device. Properties can also represent basic device properties, such as the software version of the device. You declare properties as read-only or writable.
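As an illustration of the device side, the following Python sketch (assuming the `azure-iot-device` SDK and a device connection string) reports a read-only property and acknowledges a writable property update. The property names are examples only.

```python
# Minimal sketch: report a read-only property and acknowledge a writable
# property update via the device twin.
from azure.iot.device import IoTHubDeviceClient

client = IoTHubDeviceClient.create_from_connection_string("<device-connection-string>")
client.connect()

# Read-only property: the device reports its current value.
client.patch_twin_reported_properties({"softwareVersion": "1.0.4"})

# Writable property: acknowledge the desired value by using the IoT Plug and
# Play convention (value, ack code "ac", ack version "av").
def handle_desired(patch):
    if "targetTemperature" in patch:
        client.patch_twin_reported_properties({
            "targetTemperature": {
                "value": patch["targetTemperature"],
                "ac": 200,
                "av": patch["$version"],
            }
        })

client.on_twin_desired_properties_patch_received = handle_desired
input("Listening for property updates; press Enter to exit.\n")
client.shutdown()
```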
iot-central Overview Iot Central Admin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/overview-iot-central-admin.md
An administrator can configure the behavior and appearance of an IoT Central app
- [Change application name and URL](howto-administer.md#change-application-name-and-url) - [Customize application UI](howto-customize-ui.md)-- [Move an application to a different pricing plans](howto-faq.yml#how-do-i-move-from-a-free-to-a-standard-pricing-plan-) ## Configure device file upload
iot-central Overview Iot Central https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/overview-iot-central.md
IoT Central applications are fully hosted by Microsoft, which reduces the admini
## Pricing
-You can create IoT Central application using a 7-day free trial, or use a standard pricing plan.
--- Applications you create using the *free* plan are free for seven days and support up to five devices. You can convert them to use a standard pricing plan at any time before they expire.-- Applications you create using the *standard* plan are billed on a per device basis, you can choose either **Standard 0**, **Standard 1**, or **Standard 2** pricing plan with the first two devices being free. Learn more about [IoT Central pricing](https://aka.ms/iotcentral-pricing).
+Applications you create using the *standard* plan are billed on a per device basis, you can choose either **Standard 0**, **Standard 1**, or **Standard 2** pricing plan with the first two devices being free. Learn more about [IoT Central pricing](https://aka.ms/iotcentral-pricing).
## User roles
iot-central Tutorial Smart Meter App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/energy/tutorial-smart-meter-app.md
The IoT Central platform provides two extensibility options: Continuous Data Exp
In this tutorial, you learn how to: -- Create the Smart Meter App for free
+- Create the smart meter app
- Application walk-through - Clean up resources ## Prerequisites
-* There are no specific prerequisites required to deploy this app.
-* You can use the free pricing plan or use an Azure subscription.
+An active Azure subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
## Create a smart meter monitoring application
iot-central Tutorial Solar Panel App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/energy/tutorial-solar-panel-app.md
The IoT Central platform provides two extensibility options: Continuous Data Exp
In this tutorial, you learn how to: > [!div class="checklist"]
-> * Create a solar panel app for free
+> * Create a solar panel app
> * Walk through the application > * Clean up resources ## Prerequisites
-* There are no specific prerequisites required to deploy this app.
-* You can use the free pricing plan or use an Azure subscription.
+An active Azure subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
## Create a solar panel monitoring application
iot-central Tutorial Connected Waste Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/government/tutorial-connected-waste-management.md
In this tutorial, you learn how to:
## Prerequisites
-* There are no specific prerequisites required to deploy this app.
-* You can use the free pricing plan or use an Azure subscription.
+An active Azure subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
## Create connected waste management application
iot-central Tutorial Water Consumption Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/government/tutorial-water-consumption-monitoring.md
In this tutorial, you learn how to:
## Prerequisites
-* There are no specific prerequisites required to deploy this app.
-* You can use the free pricing plan or use an Azure subscription.
+An active Azure subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
## Create water consumption monitoring application
iot-central Tutorial Water Quality Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/government/tutorial-water-quality-monitoring.md
In this tutorial, you learn to:
## Prerequisites
-* There are no specific prerequisites required to deploy this app.
-* You can use the free pricing plan or use an Azure subscription.
+An active Azure subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
## Create water quality monitoring application
iot-central Tutorial Continuous Patient Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/healthcare/tutorial-continuous-patient-monitoring.md
In this tutorial, you learn how to:
## Prerequisites -- There are no specific prerequisites required to deploy this app.-- You can use the free pricing plan or use an Azure subscription.
+An active Azure subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
## Create application
iot-central Tutorial In Store Analytics Create App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/retail/tutorial-in-store-analytics-create-app.md
In this tutorial, you learn how to:
## Prerequisites -- There are no specific prerequisites required to deploy this app.-- You can use the free pricing plan or use an Azure subscription.
+An active Azure subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
## Create in-store analytics application
iot-central Tutorial Iot Central Connected Logistics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/retail/tutorial-iot-central-connected-logistics.md
In this tutorial, you learn how to:
## Prerequisites
-* There are no specific prerequisites required to deploy this app.
-* You can use the free pricing plan or use an Azure subscription.
+An active Azure subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
## Create connected logistics application
Create the application by using the following steps:
* **Application name**: you can use the default suggested name or enter a friendly application name.
- * **URL**: you can use suggested default URL or enter your friendly unique memorable URL. Next, the default setting is recommended if you already have an Azure Subscription. You can start with 7-day free trial pricing plan and choose to convert to a standard pricing plan at any time before the free trail expires.
+ * **URL**: you can use the suggested default URL or enter a friendly, unique, and memorable URL.
 * **Billing Info**: The directory, Azure subscription, and region details are required to provision the resources. * **Create**: Select **Create** at the bottom of the page to deploy your application.
iot-central Tutorial Iot Central Digital Distribution Center https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/retail/tutorial-iot-central-digital-distribution-center.md
In this tutorial, you learn how to,
## Prerequisites
-* No specific pre-requisites required to deploy this app
-* Recommended to have Azure subscription, but you can even try without it
+An active Azure subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
## Create digital distribution center application template
iot-central Tutorial Iot Central Smart Inventory Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/retail/tutorial-iot-central-smart-inventory-management.md
In this tutorial, you learn how to,
## Prerequisites
-* No specific pre-requisites required to deploy this app.
-* Recommended to have Azure subscription, but you can even try without it.
+An active Azure subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
## Create smart inventory management application
iot-central Tutorial Micro Fulfillment Center https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/retail/tutorial-micro-fulfillment-center.md
In this tutorial, you learn:
## Prerequisites
-* There are no specific prerequisites required to deploy this app.
-* You can use the free pricing plan or use an Azure subscription.
+An active Azure subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
## Create micro-fulfillment application
iot-develop Quickstart Devkit Espressif Esp32 Freertos https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/quickstart-devkit-espressif-esp32-freertos.md
Hardware:
- ESPRESSIF [ESP32-Azure IoT Kit](https://www.espressif.com/products/devkits/esp32-azure-kit/overview) - USB 2.0 A male to Micro USB male cable - Wi-Fi 2.4 GHz
+- An active Azure subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
## Prepare the development environment
iot-develop Quickstart Devkit Microchip Atsame54 Xpro https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/quickstart-devkit-microchip-atsame54-xpro.md
You'll complete the following tasks:
* Ethernet cable * Optional: [Weather Click](https://www.mikroe.com/weather-click) sensor. You can add this sensor to the device to monitor weather conditions. If you don't have this sensor, you can still complete this quickstart. * Optional: [mikroBUS Xplained Pro](https://www.microchip.com/Developmenttools/ProductDetails/ATMBUSADAPTER-XPRO) adapter. Use this adapter to attach the Weather Click sensor to the Microchip E54. If you don't have the sensor and this adapter, you can still complete this quickstart.
+* An active Azure subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
## Prepare the development environment
iot-develop Quickstart Devkit Mxchip Az3166 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/quickstart-devkit-mxchip-az3166.md
You'll complete the following tasks:
* The [MXCHIP AZ3166 IoT DevKit](https://www.seeedstudio.com/AZ3166-IOT-Developer-Kit.html) (MXCHIP DevKit) * Wi-Fi 2.4 GHz * USB 2.0 A male to Micro USB male cable
+* An active Azure subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
## Prepare the development environment
iot-develop Quickstart Devkit Nxp Mimxrt1060 Evk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/quickstart-devkit-nxp-mimxrt1060-evk.md
You'll complete the following tasks:
* USB 2.0 A male to Micro USB male cable * Wired Ethernet access * Ethernet cable
+* An active Azure subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
## Prepare the development environment
iot-develop Quickstart Devkit Renesas Rx65n 2Mb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/quickstart-devkit-renesas-rx65n-2mb.md
You will complete the following tasks:
* The included 5V power supply * Ethernet cable * Wired Ethernet access
+* An active Azure subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
## Prepare the development environment
iot-develop Quickstart Devkit Renesas Rx65n Cloud Kit https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/quickstart-devkit-renesas-rx65n-cloud-kit.md
You'll complete the following tasks:
* The [Renesas RX65N Cloud Kit](https://www.renesas.com/products/microcontrollers-microprocessors/rx-32-bit-performance-efficiency-mcus/rx65n-cloud-kit-renesas-rx65n-cloud-kit) (Renesas RX65N) * two USB 2.0 A male to Mini USB male cables * WiFi 2.4 GHz
+* An active Azure subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
## Prepare the development environment
iot-develop Quickstart Devkit Stm B L475e Freertos https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/quickstart-devkit-stm-b-l475e-freertos.md
Hardware:
- STM [B-L475E-IOT01A](https://www.st.com/en/evaluation-tools/b-l475e-iot01a.html) devkit - USB 2.0 A male to Micro USB male cable - Wi-Fi 2.4 GHz
+- An active Azure subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
## Prepare the development environment
iot-develop Quickstart Devkit Stm B L475e https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/quickstart-devkit-stm-b-l475e.md
You will complete the following tasks:
* The [B-L475E-IOT01A](https://www.st.com/en/evaluation-tools/b-l475e-iot01a.html) (STM DevKit) * Wi-Fi 2.4 GHz * USB 2.0 A male to Micro USB male cable
+* An active Azure subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
## Prepare the development environment
iot-develop Quickstart Devkit Stm B L4s5i https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/quickstart-devkit-stm-b-l4s5i.md
You'll complete the following tasks:
* Use Azure IoT Central to create cloud components, view properties, view device telemetry, and call direct commands :::zone pivot="iot-toolset-cmake"+ ## Prerequisites * A PC running Windows 10
You'll complete the following tasks:
* The [B-L4S5I-IOT01A](https://www.st.com/en/evaluation-tools/b-l4s5i-iot01a.html) (STM DevKit) * Wi-Fi 2.4 GHz * USB 2.0 A male to Micro USB male cable
+* An active Azure subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
## Prepare the development environment
iot-hub-device-update Device Update Configure Repo https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub-device-update/device-update-configure-repo.md
+
+ Title: 'Configure package repository for package updates | Microsoft Docs'
+description: Follow an example to configure package repository for package updates.
++ Last updated : 8/8/2022+++
+# Introduction to configuring a package repository
+
+This article describes how to configure or modify the source package repository used with [Package updates](device-update-ubuntu-agent.md).
+
+For example:
+- You need to deliver over-the-air updates to your devices from a private package repository with approved versions of libraries and components
+- You need devices to get packages from a specific vendor's repository
+
+This article shows you how to configure a package repository by using [OSConfig for IoT](https://docs.microsoft.com/azure/osconfig/overview-osconfig-for-iot) and deploy package-based updates from that repository to your device fleet by using [Device Update for IoT Hub](understand-device-update.md). Package-based updates are targeted updates that alter only a specific component or application on the device. They lead to lower consumption of bandwidth and help reduce the time to download and install the update. Package-based updates also typically allow for less downtime of devices when you apply an update and avoid the overhead of creating images.
+
+## Prerequisites
+
+You need an Azure account with an [IoT Hub](../iot-hub/iot-concepts-and-iot-hub.md), and the Azure portal or Azure CLI to interact with devices via your IoT Hub. Follow these steps to get started:
+- Create a Device Update account and instance in your IoT Hub. See [how to create it](create-device-update-account.md).
+- Install the [IoT Hub Identity Service](https://azure.github.io/iot-identity-service/installation.html) (or skip if [IoT Edge 1.2](https://docs.microsoft.com/azure/iot-edge/how-to-provision-single-device-linux-symmetric?view=iotedge-2020-11&preserve-view=true&tabs=azure-portal%2Cubuntu#install-iot-edge) or higher is already installed on the device).
+- Install the Device Update agent on the device. See [how to](device-update-ubuntu-agent.md#manually-prepare-a-device).
+- Install the OSConfig agent on the device. See [how to](https://docs.microsoft.com/azure/osconfig/howto-install?tabs=package#step-11-connect-a-device-to-packagesmicrosoftcom).
+- Now that both the agent and IoT Hub Identity Service are present on the device, the next step is to configure the device with an identity so it can connect to Azure. See an example [here](https://docs.microsoft.com/azure/osconfig/howto-install?tabs=package#job-2--connect-to-azure).
+
+## How to configure a package repository for package updates
+Follow these steps to update Azure IoT Edge on Ubuntu Server 18.04 x64 by configuring a source repository. The tools and concepts in this tutorial still apply even if you plan to use a different OS platform configuration.
+
+1. Configure the package repository of your choice with OSConfig's package repository configuration module. See [how to](https://docs.microsoft.com/azure/osconfig/howto-pmc?tabs=portal%2Csingle#example-1--specify-desired-package-sources). This repository should be the location where you want to store the packages to be downloaded to the device.
+2. Upload your packages to the above configured repository.
+3. Create an [APT manifest](device-update-apt-manifest.md) to provide the Device Update agent with the information it needs to download and install the packages (and their dependencies) from the repository. A scripted example manifest follows these steps.
+4. Follow the steps [here](device-update-ubuntu-agent.md#prerequisites) to do a package update with Device Update. Device Update is used to deploy package updates to a large number of devices at scale.
+5. Monitor the results of the package update by following these [steps](device-update-ubuntu-agent.md#monitor-the-update-deployment).
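As an illustration of step 3, the following Python sketch writes a sample APT manifest file. The update name, package names, and versions are placeholders, and the field names shown should be verified against the APT manifest article linked above.

```python
# Minimal sketch: write a sample APT manifest that the Device Update agent
# could use to install specific package versions from the configured repository.
import json

apt_manifest = {
    "name": "contoso-iot-edge-update",   # hypothetical update name
    "version": "1.0.0",
    "packages": [
        # Placeholder package names and versions from your configured repository.
        {"name": "aziot-identity-service", "version": "1.2.6-1"},
        {"name": "aziot-edge", "version": "1.2.10-1"},
    ],
}

with open("sample-apt-manifest.json", "w") as manifest_file:
    json.dump(apt_manifest, manifest_file, indent=2)
```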
iot-hub Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/policy-reference.md
Title: Built-in policy definitions for Azure IoT Hub description: Lists Azure Policy built-in policy definitions for Azure IoT Hub. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/08/2022 Last updated : 08/16/2022
key-vault Overview Vnet Service Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/overview-vnet-service-endpoints.md
Here's a list of trusted services that are allowed to access a key vault if the
|Trusted service|Supported usage scenarios| | | |
-|Azure Virtual Machines deployment service|[Deploy certificates to VMs from customer-managed Key Vault](/archive/blogs/kv/updated-deploy-certificates-to-vms-from-customer-managed-key-vault).|
-|Azure Resource Manager template deployment service|[Pass secure values during deployment](../../azure-resource-manager/templates/key-vault-parameter.md).|
-|Azure Disk Encryption volume encryption service|Allow access to BitLocker Key (Windows VM) or DM Passphrase (Linux VM), and Key Encryption Key, during virtual machine deployment. This enables [Azure Disk Encryption](../../security/fundamentals/encryption-overview.md).|
-|Azure Backup|Allow backup and restore of relevant keys and secrets during Azure Virtual Machines backup, by using [Azure Backup](../../backup/backup-overview.md).|
-|Exchange Online & SharePoint Online|Allow access to customer key for Azure Storage Service Encryption with [Customer Key](/microsoft-365/compliance/customer-key-overview).|
-|Azure Information Protection|Allow access to tenant key for [Azure Information Protection.](/azure/information-protection/what-is-information-protection)|
-|Azure App Service|App Service is trusted only for [Deploying Azure Web App Certificate through Key Vault](https://azure.github.io/AppService/2016/05/24/Deploying-Azure-Web-App-Certificate-through-Key-Vault.html), for individual app itself, the outbound IPs can be added in Key Vault's IP-based rules|
-|Azure SQL Database|[Transparent Data Encryption with Bring Your Own Key support for Azure SQL Database and Azure Synapse Analytics](/azure/azure-sql/database/transparent-data-encryption-byok-overview).|
+| Azure API Management|[Deploy certificates for Custom Domain from Key Vault using MSI](../../api-management/api-management-howto-use-managed-service-identity.md#use-ssl-tls-certificate-from-azure-key-vault)|
+| Azure App Service|App Service is trusted only for [Deploying Azure Web App Certificate through Key Vault](https://azure.github.io/AppService/2016/05/24/Deploying-Azure-Web-App-Certificate-through-Key-Vault.html). For the individual app itself, the outbound IPs can be added in Key Vault's IP-based rules.|
+| Azure Application Gateway |[Using Key Vault certificates for HTTPS-enabled listeners](../../application-gateway/key-vault-certs.md)
+| Azure Backup|Allow backup and restore of relevant keys and secrets during Azure Virtual Machines backup, by using [Azure Backup](../../backup/backup-overview.md).|
+| Azure CDN | [Configure HTTPS on an Azure CDN custom domain: Grant Azure CDN access to your key vault](../../cdn/cdn-custom-ssl.md?tabs=option-2-enable-https-with-your-own-certificate#grant-azure-cdn-access-to-your-key-vault)|
+| Azure Container Registry|[Registry encryption using customer-managed keys](../../container-registry/container-registry-customer-managed-keys.md)
+| Azure Data Factory|[Fetch data store credentials in Key Vault from Data Factory](https://go.microsoft.com/fwlink/?linkid=2109491)|
+| Azure Data Lake Store|[Encryption of data in Azure Data Lake Store](../../data-lake-store/data-lake-store-encryption.md) with a customer-managed key.|
| Azure Database for MySQL | [Data encryption for Azure Database for MySQL](../../mysql/howto-data-encryption-cli.md) | | Azure Database for PostgreSQL Single server | [Data encryption for Azure Database for PostgreSQL Single server](../../postgresql/howto-data-encryption-cli.md) |
-|Azure Storage|[Storage Service Encryption using customer-managed keys in Azure Key Vault](../../storage/common/customer-managed-keys-configure-key-vault.md).|
-|Azure Data Lake Store|[Encryption of data in Azure Data Lake Store](../../data-lake-store/data-lake-store-encryption.md) with a customer-managed key.|
-|Azure Synapse Analytics|[Encryption of data using customer-managed keys in Azure Key Vault](../../synapse-analytics/security/workspaces-encryption.md)|
-|Azure Databricks|[Fast, easy, and collaborative Apache Spark–based analytics service](/azure/databricks/scenarios/what-is-azure-databricks)|
-|Azure API Management|[Deploy certificates for Custom Domain from Key Vault using MSI](../../api-management/api-management-howto-use-managed-service-identity.md#use-ssl-tls-certificate-from-azure-key-vault)|
-|Azure Data Factory|[Fetch data store credentials in Key Vault from Data Factory](https://go.microsoft.com/fwlink/?linkid=2109491)|
-|Azure Event Hubs|[Allow access to a key vault for customer-managed keys scenario](../../event-hubs/configure-customer-managed-key.md)|
-|Azure Service Bus|[Allow access to a key vault for customer-managed keys scenario](../../service-bus-messaging/configure-customer-managed-key.md)|
-|Azure Import/Export| [Use customer-managed keys in Azure Key Vault for Import/Export service](../../import-export/storage-import-export-encryption-key-portal.md)
-|Azure Container Registry|[Registry encryption using customer-managed keys](../../container-registry/container-registry-customer-managed-keys.md)
-|Azure Application Gateway |[Using Key Vault certificates for HTTPS-enabled listeners](../../application-gateway/key-vault-certs.md)
-|Azure Front Door Standard/Premium|[Using Key Vault certificates for HTTPS](../../frontdoor/standard-premium/how-to-configure-https-custom-domain.md#prepare-your-key-vault-and-certificate)
-|Azure Front Door Classic|[Using Key Vault certificates for HTTPS](../../frontdoor/front-door-custom-domain-https.md#prepare-your-key-vault-and-certificate)
-|Microsoft Purview|[Using credentials for source authentication in Microsoft Purview](../../purview/manage-credentials.md)
-|Azure Machine Learning|[Secure Azure Machine Learning in a virtual network](../../machine-learning/how-to-secure-workspace-vnet.md)|
+| Azure Databricks|[Fast, easy, and collaborative Apache Spark–based analytics service](/azure/databricks/scenarios/what-is-azure-databricks)|
+| Azure Disk Encryption volume encryption service|Allow access to BitLocker Key (Windows VM) or DM Passphrase (Linux VM), and Key Encryption Key, during virtual machine deployment. This enables [Azure Disk Encryption](../../security/fundamentals/encryption-overview.md).|
+| Azure Event Hubs|[Allow access to a key vault for customer-managed keys scenario](../../event-hubs/configure-customer-managed-key.md)|
+| Azure Front Door Classic|[Using Key Vault certificates for HTTPS](../../frontdoor/front-door-custom-domain-https.md#prepare-your-key-vault-and-certificate)
+| Azure Front Door Standard/Premium|[Using Key Vault certificates for HTTPS](../../frontdoor/standard-premium/how-to-configure-https-custom-domain.md#prepare-your-key-vault-and-certificate)
+| Azure Import/Export| [Use customer-managed keys in Azure Key Vault for Import/Export service](../../import-export/storage-import-export-encryption-key-portal.md)
+| Azure Information Protection|Allow access to tenant key for [Azure Information Protection.](/azure/information-protection/what-is-information-protection)|
+| Azure Machine Learning|[Secure Azure Machine Learning in a virtual network](../../machine-learning/how-to-secure-workspace-vnet.md)|
+| Azure Resource Manager template deployment service|[Pass secure values during deployment](../../azure-resource-manager/templates/key-vault-parameter.md).|
+| Azure Service Bus|[Allow access to a key vault for customer-managed keys scenario](../../service-bus-messaging/configure-customer-managed-key.md)|
+| Azure SQL Database|[Transparent Data Encryption with Bring Your Own Key support for Azure SQL Database and Azure Synapse Analytics](/azure/azure-sql/database/transparent-data-encryption-byok-overview).|
+| Azure Storage|[Storage Service Encryption using customer-managed keys in Azure Key Vault](../../storage/common/customer-managed-keys-configure-key-vault.md).|
+| Azure Synapse Analytics|[Encryption of data using customer-managed keys in Azure Key Vault](../../synapse-analytics/security/workspaces-encryption.md)|
+| Azure Virtual Machines deployment service|[Deploy certificates to VMs from customer-managed Key Vault](/archive/blogs/kv/updated-deploy-certificates-to-vms-from-customer-managed-key-vault).|
+| Exchange Online & SharePoint Online|Allow access to customer key for Azure Storage Service Encryption with [Customer Key](/microsoft-365/compliance/customer-key-overview).|
+| Microsoft Purview|[Using credentials for source authentication in Microsoft Purview](../../purview/manage-credentials.md)
> [!NOTE] > You must set up the relevant Key Vault access policies to allow the corresponding services to get access to Key Vault.
key-vault Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/policy-reference.md
Title: Built-in policy definitions for Key Vault description: Lists Azure Policy built-in policy definitions for Key Vault. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/08/2022 Last updated : 08/16/2022
lighthouse Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lighthouse/samples/policy-reference.md
Title: Built-in policy definitions for Azure Lighthouse description: Lists Azure Policy built-in policy definitions for Azure Lighthouse. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/08/2022 Last updated : 08/16/2022
load-balancer Load Balancer Standard Virtual Machine Scale Sets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/load-balancer-standard-virtual-machine-scale-sets.md
When you use the virtual machine scale set in the back-end pool of the load bala
## Virtual Machine Scale Set Instance-level IPs
-When virtual machine scale sets with [public IPs per instance](../virtual-machine-scale-sets/virtual-machine-scale-sets-networking.md) are created with a load balancer in front, the SKU of the instance IPs is determined by the SKU of the Load Balancer (i.e. Basic or Standard). Note that when using a Standard Load Balancer, the individual instance IPs are all of type Standard "no-zone" (though the Load Balancer frontend could be zonal or zone-redundant).
+When virtual machine scale sets with [public IPs per instance](../virtual-machine-scale-sets/virtual-machine-scale-sets-networking.md) are created with a load balancer in front, the SKU of the instance IPs is determined by the SKU of the Load Balancer (that is, Basic or Standard).
## Outbound rules
logic-apps Create Single Tenant Workflows Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/create-single-tenant-workflows-azure-portal.md
ms.suite: integration Previously updated : 04/15/2022 Last updated : 08/15/2022 #Customer intent: As a developer, I want to create an automated integration workflow that runs in single-tenant Azure Logic Apps using the Azure portal.
As you progress, you'll complete these high-level tasks:
* To deploy your **Logic App (Standard)** resource to an [App Service Environment v3 (ASEv3)](../app-service/environment/overview.md), you have to create this environment resource first. You can then select this environment as the deployment location when you create your logic app resource. For more information, review [Resources types and environments](single-tenant-overview-compare.md#resource-environment-differences) and [Create an App Service Environment](../app-service/environment/creation.md).
+## Best practices and recommendations
+
+For optimal designer responsiveness and performance, review and follow these guidelines:
+
+- Use no more than 50 actions per workflow. Exceeding this number of actions increases the likelihood of slower designer performance.
+
+- Consider splitting business logic into multiple workflows where necessary.
+
+- Have no more than 10-15 workflows per logic app resource.
+ <a name="create-logic-app-resource"></a> ## Create a Standard logic app resource
logic-apps Logic Apps Using Sap Connector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-using-sap-connector.md
Previously updated : 08/02/2022 Last updated : 08/16/2022 tags: connectors
An ISE provides access to resources that are protected by an Azure virtual netwo
The following list describes the prerequisites for the SAP client library that you're using with the connector:
-* Make sure that you install the latest version, [SAP Connector (NCo 3.0) for Microsoft .NET 3.0.24.0 compiled with .NET Framework 4.0 - Windows 64-bit (x64)](https://support.sap.com/en/product/connectors/msnet.html). Earlier versions of SAP NCo might experience the following issues:
+* Make sure that you install the latest version, [SAP Connector (NCo 3.0) for Microsoft .NET 3.0.25.0 compiled with .NET Framework 4.0 - Windows 64-bit (x64)](https://support.sap.com/en/product/connectors/msnet.html). Earlier versions of SAP NCo might experience the following issues:
* When more than one IDoc message is sent at the same time, this condition blocks all later messages that are sent to the SAP destination, causing messages to time out.
The following list describes the prerequisites for the SAP client library that y
* The on-premises data gateway (June 2021 release) depends on the `SAP.Middleware.Connector.RfcConfigParameters.Dispose()` method in SAP NCo to free up resources.
+ * After you upgrade the SAP server environment, you get the following exception message: 'The only destination &lt;some-GUID&gt; available failed when retrieving metadata from &lt;SAP-system-ID&gt; -- see log for details'.
+ * You must have the 64-bit version of the SAP client library installed, because the data gateway only runs on 64-bit systems. Installing the unsupported 32-bit version results in a "bad image" error. * From the client library's default installation folder, copy the assembly (.dll) files to another location, based on your scenario as follows:
The following example is an RFC call with a table parameter. This example call a
<STFC_WRITE_TO_TCPIC xmlns="http://Microsoft.LobServices.Sap/2007/03/Rfc/"> <RESTART_QNAME>exampleQName</RESTART_QNAME> <TCPICDAT>
- <ABAPTEXT xmlns="http://Microsoft.LobServices.Sap/2007/03/Rfc/">
+ <ABAPTEXT xmlns="http://Microsoft.LobServices.Sap/2007/03/Types/Rfc/">
<LINE>exampleFieldInput1</LINE> </ABAPTEXT>
- <ABAPTEXT xmlns="http://Microsoft.LobServices.Sap/2007/03/Rfc/">
+ <ABAPTEXT xmlns="http://Microsoft.LobServices.Sap/2007/03/Types/Rfc/">
<LINE>exampleFieldInput2</LINE> </ABAPTEXT>
- <ABAPTEXT xmlns="http://Microsoft.LobServices.Sap/2007/03/Rfc/">
+ <ABAPTEXT xmlns="http://Microsoft.LobServices.Sap/2007/03/Types/Rfc/">
<LINE>exampleFieldInput3</LINE> </ABAPTEXT> </TCPICDAT>
logic-apps Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/policy-reference.md
Title: Built-in policy definitions for Azure Logic Apps description: Lists Azure Policy built-in policy definitions for Azure Logic Apps. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/08/2022 Last updated : 08/16/2022 ms.suite: integration
logic-apps Update Consumption Workflow Schema https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/update-consumption-workflow-schema.md
+
+ Title: Update Consumption workflows to latest workflow schema
+description: Update Consumption logic app workflows to the latest Workflow Definition Language schema in Azure Logic Apps.
+
+ms.suite: integration
++ Last updated : 08/15/2022++
+# Update Consumption logic app workflows to latest Workflow Definition Language schema version in Azure Logic Apps
+
+If you have a Consumption logic app workflow that uses an older Workflow Definition Language schema, you can update your workflow to use the newest schema. This capability applies only to Consumption logic app workflows.
+
+## Best practices
+
+The following list includes some best practices for updating your logic app workflows to the latest schema:
+
+* Don't overwrite your original workflow until after you finish your testing and confirm that your updated workflow works as expected.
+
+* Copy the updated script to a new logic app workflow.
+
+* Test your workflow *before* you deploy to production.
+
+* After you finish and confirm a successful migration, update your logic app workflows to use the latest [managed connectors for Azure Logic Apps](/connectors/connector-reference/connector-reference-logicapps-connectors) where possible. For example, replace older versions of the Dropbox connector with the latest version.
+
+## Update workflow schema
+
+When you select the option to update the schema, Azure Logic Apps automatically runs the migration steps and provides the code output for you. You can use this output to update your workflow definition. However, before you update your workflow definition using this output, make sure that you review and follow the best practices as described in the [Best practices](#best-practices) section.
+
+1. In the [Azure portal](https://portal.azure.com), open your logic app resource.
+
+1. On your logic app's navigation menu, select **Overview**. On the toolbar, select **Update Schema**.
+
+ > [!NOTE]
+ >
+ > If the **Update Schema** command is unavailable, your workflow already uses the current schema.
+
+ ![Screenshot showing Azure portal, Consumption logic app resource with "Overview" pane open, and "Update Schema" selected.](./media/update-consumption-workflow-schema/update-schema.png)
+
+ The **Update Schema** pane opens to show a link to a document that describes the improvements in the new schema.
+
+## Next steps
+
+* [Review Workflow Definition Language schema updates - June 1, 2016](../logic-apps/logic-apps-schema-2016-04-01.md)
machine-learning Concept Compute Target https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-compute-target.md
When performing inference, Azure Machine Learning creates a Docker container tha
[!INCLUDE [aml-deploy-target](../../includes/aml-compute-target-deploy.md)]
-Learn [where and how to deploy your model to a compute target](how-to-deploy-and-where.md).
+Learn [where and how to deploy your model to a compute target](how-to-deploy-managed-online-endpoints.md).
<a name="amlcompute"></a> ## Azure Machine Learning compute (managed)
For more information, see [set up compute targets for model training and deploym
Learn how to: * [Use a compute target to train your model](how-to-set-up-training-targets.md)
-* [Deploy your model to a compute target](how-to-deploy-and-where.md)
+* [Deploy your model to a compute target](how-to-deploy-managed-online-endpoints.md)
machine-learning Concept Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-endpoints.md
You can use the following options for input data when invoking a batch endpoint:
> [!NOTE] > - If you're using an existing V1 FileDataset for a batch endpoint, we recommend migrating it to a V2 data asset and referring to it directly when invoking the batch endpoint. Currently, only data assets of type `uri_folder` or `uri_file` are supported. Batch endpoints created with GA CLIv2 (2.4.0 and newer) or GA REST API (2022-05-01 and newer) will not support V1 Dataset. > - You can also extract the URI or path on the datastore from a V1 FileDataset by using the `az ml dataset show` command with the `--query` parameter and use that information to invoke the batch endpoint.
-> - While Batch endpoints created with earlier APIs will continue to support V1 FileDataset, we will be adding further V2 data assets support with the latest API versions for even more usability and flexibility. For more information on V2 data assets, see [Work with data using SDK v2 (preview)](how-to-use-data.md). For more information on the new V2 experience, see [What is v2](concept-v2.md).
+> - While Batch endpoints created with earlier APIs will continue to support V1 FileDataset, we will be adding further V2 data assets support with the latest API versions for even more usability and flexibility. For more information on V2 data assets, see [Work with data using SDK v2 (preview)](how-to-read-write-data-v2.md). For more information on the new V2 experience, see [What is v2](concept-v2.md).
For more information on supported input options, see [Batch scoring with batch endpoint](how-to-use-batch-endpoint.md#invoke-the-batch-endpoint-with-different-input-options).
machine-learning Concept Enterprise Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-enterprise-security.md
You can also configure managed identities for use with Azure Machine Learning co
> [!TIP] > There are some exceptions to the use of Azure AD and Azure RBAC within Azure Machine Learning: > * You can optionally enable __SSH__ access to compute resources such as Azure Machine Learning compute instance and compute cluster. SSH access is based on public/private key pairs, not Azure AD. SSH access is not governed by Azure RBAC.
-> * You can authenticate to models deployed as web services (inference endpoints) using __key__ or __token__-based authentication. Keys are static strings, while tokens are retrieved using an Azure AD security object. For more information, see [Configure authentication for models deployed as a web service](how-to-authenticate-web-service.md).
+> * You can authenticate to models deployed as online endpoints using __key__ or __token__-based authentication. Keys are static strings, while tokens are retrieved using an Azure AD security object. For more information, see [How to authenticate online endpoints](how-to-authenticate-online-endpoint.md).
For more information, see the following articles: * [Authentication for Azure Machine Learning workspace](how-to-setup-authentication.md)
When deploying models as web services, you can enable transport-layer security (
* [Azure Machine Learning best practices for enterprise security](/azure/cloud-adoption-framework/ready/azure-best-practices/ai-machine-learning-enterprise-security) * [Secure Azure Machine Learning web services with TLS](./v1/how-to-secure-web-service.md)
-* [Consume a Machine Learning model deployed as a web service](how-to-consume-web-service.md)
* [Use Azure Machine Learning with Azure Firewall](how-to-access-azureml-behind-firewall.md) * [Use Azure Machine Learning with Azure Virtual Network](how-to-network-security-overview.md) * [Data encryption at rest and in transit](concept-data-encryption.md)
machine-learning Concept Model Management And Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-model-management-and-deployment.md
Before you deploy a model into production, it's packaged into a Docker image. In
If you run into problems with the deployment, you can deploy on your local development environment for troubleshooting and debugging.
-For more information, see [Deploy models](how-to-deploy-and-where.md#registermodel) and [Troubleshooting deployments](how-to-troubleshoot-deployment.md).
+For more information, see [How to troubleshoot online endpoints](how-to-troubleshoot-online-endpoints.md).
### Convert and optimize models
To deploy the model to an endpoint, you must provide the following items:
* Dependencies required to use the model. Examples are a script that accepts requests and invokes the model and conda dependencies. * Deployment configuration that describes how and where to deploy the model.
-For more information, see [Deploy models](how-to-deploy-and-where.md).
+For more information, see [Deploy online endpoints](how-to-deploy-managed-online-endpoints.md).
#### Controlled rollout
Monitoring enables you to understand what data is being sent to your model, and
This information helps you understand how your model is being used. The collected input data might also be useful in training future versions of the model.
-For more information, see [Enable model data collection](how-to-enable-data-collection.md).
+For more information, see [Enable model data collection](v1/how-to-enable-data-collection.md).
## Retrain your model on new data
You can also use Azure Data Factory to create a data ingestion pipeline that pre
Learn more by reading and exploring the following resources: + [Learning path: End-to-end MLOps with Azure Machine Learning](/learn/paths/build-first-machine-operations-workflow/)
-+ [How and where to deploy models](how-to-deploy-and-where.md) with Machine Learning
++ [How to deploy a model to an online endpoint](how-to-deploy-managed-online-endpoints.md) with Machine Learning + [Tutorial: Train and deploy a model](tutorial-train-deploy-notebook.md) + [End-to-end MLOps examples repo](https://github.com/microsoft/MLOps) + [CI/CD of machine learning models with Azure Pipelines](/azure/devops/pipelines/targets/azure-machine-learning)
-+ Create clients that [consume a deployed model](how-to-consume-web-service.md)
+ [Machine learning at scale](/azure/architecture/data-guide/big-data/machine-learning-at-scale) + [Azure AI reference architectures and best practices repo](https://github.com/microsoft/AI)
machine-learning Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/data-science-virtual-machine/release-notes.md
Azure portal users will always find the latest image available for provisioning
See the [list of known issues](reference-known-issues.md) to learn about known bugs and workarounds.
+## August 16, 2022
+[Data Science VM – Ubuntu 20.04](https://azuremarketplace.microsoft.com/marketplace/apps/microsoft-dsvm.ubuntu-2004?tab=Overview)
+
+Version `22.08.11`
+
+Main changes:
+
+- JupyterLab upgraded to version `3.4.5`.
+- `matplotlib` and `azureml-mlflow` added to the `sdkv2` environment.
+- JupyterHub spawner reconfigured to use the root environment.
+ ## July 28, 2022 [Data Science VM – Ubuntu 20.04](https://azuremarketplace.microsoft.com/marketplace/apps/microsoft-dsvm.ubuntu-2004?tab=Overview)
machine-learning How To Add Users https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-add-users.md
To add a custom role, you must have `Microsoft.Authorization/roleAssignments/wri
1. Open your workspace in [Azure Machine Learning studio](https://ml.azure.com) 1. Open the menu on the top right and select **View all properties in Azure Portal**. You'll use Azure portal for all the rest of the steps in this article.
-1. Select the **Subscription** link in the middle of the page.
+1. Select the **Resource group** link in the middle of the page.
1. On the left, select **Access control (IAM)**. 1. At the top, select **+ Add > Add custom role**.
-1. For the **Custom role name**, type **Labeler**.
-1. In the **Description** box, add **Labeler access for data labeling projects**.
+1. For the **Custom role name**, type the name you want to use. For example, **Labeler**.
+1. In the **Description** box, add a description. For example, **Labeler access for data labeling projects**.
1. Select **Start from JSON**. 1. At the bottom of the page, select **Next**. 1. Don't do anything for the **Permissions** tab, you'll add permissions in a later step. Select **Next**.
To add a custom role, you must have `Microsoft.Authorization/roleAssignments/wri
:::image type="content" source="media/how-to-add-users/replace-lines.png" alt-text="Create custom role: select lines to replace them in the editor.":::
-1. Replace these two lines with:
-
- ```json
- "actions": [
- "Microsoft.MachineLearningServices/workspaces/read",
- "Microsoft.MachineLearningServices/workspaces/labeling/projects/read",
- "Microsoft.MachineLearningServices/workspaces/labeling/projects/summary/read",
- "Microsoft.MachineLearningServices/workspaces/labeling/labels/read",
- "Microsoft.MachineLearningServices/workspaces/labeling/labels/write"
- ],
- "notActions": [
- ],
- ```
+1. Replace these two lines with the `Actions` and `NotActions` from the appropriate role listed at [Manage access to an Azure Machine Learning workspace](how-to-assign-roles.md#data-labeling). Make sure to copy from `Actions` through the closing bracket, `],`
1. Select **Save** at the top of the edit box to save your changes.
To add a custom role, you must have `Microsoft.Authorization/roleAssignments/wri
1. Select **Create** to create the custom role. 1. Select **OK**.
-### Labeling team lead
-
-You may want to create a second role for a labeling team lead. A labeling team lead can reject the labeled dataset and view labeling insights. In addition, this role also allows you to perform the role of a labeler.
-
-To add this custom role, repeat the above steps. Use the name **Labeling Team Lead** and replace the two lines with:
-
-```json
- "actions": [
- "Microsoft.MachineLearningServices/workspaces/read",
- "Microsoft.MachineLearningServices/workspaces/labeling/labels/read",
- "Microsoft.MachineLearningServices/workspaces/labeling/labels/write",
- "Microsoft.MachineLearningServices/workspaces/labeling/labels/reject/action",
- "Microsoft.MachineLearningServices/workspaces/labeling/projects/read",
- "Microsoft.MachineLearningServices/workspaces/labeling/projects/summary/read"
- ],
- "notActions": [
- "Microsoft.MachineLearningServices/workspaces/labeling/projects/write",
- "Microsoft.MachineLearningServices/workspaces/labeling/projects/delete",
- "Microsoft.MachineLearningServices/workspaces/labeling/export/action"
- ],
-```
## Add guest user
machine-learning How To Create Attach Compute Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-create-attach-compute-studio.md
In this article, learn how to create and manage compute targets in Azure Machine
## What's a compute target?
-With Azure Machine Learning, you can train your model on a variety of resources or environments, collectively referred to as [__compute targets__](v1/concept-azure-machine-learning-architecture.md#compute-targets). A compute target can be a local machine or a cloud resource, such as an Azure Machine Learning Compute, Azure HDInsight, or a remote virtual machine. You can also create compute targets for model deployment as described in ["Where and how to deploy your models"](how-to-deploy-and-where.md).
+With Azure Machine Learning, you can train your model on a variety of resources or environments, collectively referred to as [__compute targets__](v1/concept-azure-machine-learning-architecture.md#compute-targets). A compute target can be a local machine or a cloud resource, such as an Azure Machine Learning Compute, Azure HDInsight, or a remote virtual machine. You can also create compute targets for model deployment as described in ["Where and how to deploy your models"](how-to-deploy-managed-online-endpoints.md).
## <a id="portal-view"></a>View compute targets
myvm = ComputeTarget(workspace=ws, name='my-vm-name')
* Use the compute resource to [submit a training run](how-to-set-up-training-targets.md). * Learn how to [efficiently tune hyperparameters](how-to-tune-hyperparameters.md) to build better models.
-* Once you have a trained model, learn [how and where to deploy models](how-to-deploy-and-where.md).
+* Once you have a trained model, learn [how and where to deploy models](how-to-deploy-managed-online-endpoints.md).
* [Use Azure Machine Learning with Azure Virtual Networks](./how-to-network-security-overview.md)
machine-learning How To Debug Visual Studio Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-debug-visual-studio-code.md
Use the Azure Machine Learning extension to validate, run, and debug your machin
1. Expand your workspace node. 1. Right-click the **Experiments** node and select **Create experiment**. When the prompt appears, provide a name for your experiment. 1. Expand the **Experiments** node, right-click the experiment you want to run and select **Run Experiment**.
-1. From the list of options to run your experiment, select **Locally**.
-1. **First time use on Windows only**. When prompted to allow File Share, select **Yes**. When you enable file share it allows Docker to mount the directory containing your script to the container. Additionally, it also allows Docker to store the logs and outputs from your run in a temporary directory on your system.
+1. From the list of options, select **Locally**.
+1. **First time use on Windows only**. When prompted to allow File Share, select **Yes**. Enabling file share allows Docker to mount the directory containing your script to the container. It also allows Docker to store the logs and outputs from your run in a temporary directory on your system.
1. Select **Yes** to debug your experiment. Otherwise, select **No**. Selecting no will run your experiment locally without attaching to the debugger. 1. Select **Create new Run Configuration** to create your run configuration. The run configuration defines the script you want to run, dependencies, and datasets used. Alternatively, if you already have one, select it from the dropdown. 1. Choose your environment. You can choose from any of the [Azure Machine Learning curated](resource-curated-environments.md) or create your own.
For more information on using an Azure Virtual Network with Azure Machine Learni
Your ML pipeline steps run Python scripts. These scripts are modified to perform the following actions:
-1. Log the IP address of the host that they are running on. You use the IP address to connect the debugger to the script.
+1. Log the IP address of the host that they're running on. You use the IP address to connect the debugger to the script.
2. Start the debugpy debug component, and wait for a debugger to connect.
if not (args.output_train is None):
### Configure ML pipeline To provide the Python packages needed to start debugpy and get the run context, create an environment
-and set `pip_packages=['debugpy', 'azureml-sdk==<SDK-VERSION>']`. Change the SDK version to match the one you are using. The following code snippet demonstrates how to create an environment:
+and set `pip_packages=['debugpy', 'azureml-sdk==<SDK-VERSION>']`. Change the SDK version to match the one you're using. The following code snippet demonstrates how to create an environment:
```python # Use a RunConfiguration to specify some additional requirements for this step.
Timeout for debug connection: 300
ip_address: 10.3.0.5 ```
-Save the `ip_address` value. It is used in the next section.
+Save the `ip_address` value. It's used in the next section.
> [!TIP] > You can also find the IP address from the run logs for the child run for this pipeline step. For more information on viewing this information, see [Monitor Azure ML experiment runs and metrics](how-to-log-view-metrics.md).
Save the `ip_address` value. It is used in the next section.
## Debug and troubleshoot deployments
-In some cases, you may need to interactively debug the Python code contained in your model deployment. For example, if the entry script is failing and the reason cannot be determined by additional logging. By using VS Code and the debugpy, you can attach to the code running inside the Docker container.
+In some cases, you may need to interactively debug the Python code contained in your model deployment, for example, if the entry script is failing and the reason can't be determined by extra logging. By using VS Code and debugpy, you can attach to the code running inside the Docker container.
> [!TIP] > Save time and catch bugs early by debugging managed online endpoints and deployments locally. For more information, see [Debug managed online endpoints locally in Visual Studio Code (preview)](how-to-debug-managed-online-endpoints-visual-studio-code.md).
In some cases, you may need to interactively debug the Python code contained in
> [!IMPORTANT] > This method of debugging does not work when using `Model.deploy()` and `LocalWebservice.deploy_configuration` to deploy a model locally. Instead, you must create an image using the [Model.package()](/python/api/azureml-core/azureml.core.model.model#package-workspace--models--inference-config-none--generate-dockerfile-false-) method.
-Local web service deployments require a working Docker installation on your local system. For more information on using Docker, see the [Docker Documentation](https://docs.docker.com/). Note that when working with compute instances, Docker is already installed.
+Local web service deployments require a working Docker installation on your local system. For more information on using Docker, see the [Docker Documentation](https://docs.docker.com/). When working with compute instances, Docker is already installed.
### Configure development environment
Local web service deployments require a working Docker installation on your loca
package.pull() ```
- Once the image has been created and downloaded (this process may take more than 10 minutes, so please wait patiently), the image path (includes repository, name, and tag, which in this case is also its digest) is finally displayed in a message similar to the following:
+ Once the image has been created and downloaded (this process may take more than 10 minutes), the image path (which includes the repository, name, and tag; in this case, the tag is also its digest) is displayed in a message similar to the following:
```text Status: Downloaded newer image for myregistry.azurecr.io/package@sha256:<image-digest>
Local web service deployments require a working Docker installation on your loca
docker run -it --name debug -p 8000:5001 -p 5678:5678 -v <my_local_path_to_score.py>:/var/azureml-app/score.py debug:1 /bin/bash ```
- This attaches your `score.py` locally to the one in the container. Therefore, any changes made in the editor are automatically reflected in the container
+ This command mounts your local `score.py` over the one in the container. Therefore, any changes made in the editor are automatically reflected in the container.
-2. For a better experience, you can go into the container with a new VS code interface. Select the `Docker` extention from the VS Code side bar, find your local container created, in this documentation it's `debug:1`. Right-click this container and select `"Attach Visual Studio Code"`, then a new VS Code interface will be opened automatically, and this interface shows the inside of your created container.
+2. For a better experience, you can work inside the container with a new VS Code interface. Select the `Docker` extension from the VS Code side bar and find the local container you created; in this documentation, it's `debug:1`. Right-click this container and select `"Attach Visual Studio Code"`. A new VS Code interface opens automatically and shows the inside of your created container.
![The container VS Code interface](./media/how-to-troubleshoot-deployment/container-interface.png)
Local web service deployments require a working Docker installation on your loca
![The container run console output](./media/how-to-troubleshoot-deployment/container-run.png)
-4. To attach VS Code to debugpy inside the container, open VS Code and use the F5 key or select __Debug__. When prompted, select the __Azure Machine Learning Deployment: Docker Debug__ configuration. You can also select the __Run__ extention icon from the side bar, the __Azure Machine Learning Deployment: Docker Debug__ entry from the Debug dropdown menu, and then use the green arrow to attach the debugger.
+4. To attach VS Code to debugpy inside the container, open VS Code and use the F5 key or select __Debug__. When prompted, select the __Azure Machine Learning Deployment: Docker Debug__ configuration. You can also select the __Run__ extension icon from the side bar, the __Azure Machine Learning Deployment: Docker Debug__ entry from the Debug dropdown menu, and then use the green arrow to attach the debugger.
![The debug icon, start debugging button, and configuration selector](./media/how-to-troubleshoot-deployment/start-debugging.png)
- After clicking the green arrow and attaching the debugger, in the container VS Code interface you can see some new information:
+ After you select the green arrow and attach the debugger, in the container VS Code interface you can see some new information:
![The container debugger attached information](./media/how-to-troubleshoot-deployment/debugger-attached.png)
Now that you've set up VS Code Remote, you can use a compute instance as remote
Learn more about troubleshooting:
-* [Local model deployment](how-to-troubleshoot-deployment-local.md)
-* [Remote model deployment](how-to-troubleshoot-deployment.md)
+* [Local model deployment](./v1/how-to-troubleshoot-deployment-local.md)
+* [Remote model deployment](./v1/how-to-troubleshoot-deployment.md)
* [Machine learning pipelines](how-to-debug-pipelines.md) * [ParallelRunStep](how-to-debug-parallel-run-step.md)
machine-learning How To Deploy Batch With Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-batch-with-rest.md
Below are some examples using different types of input data.
> - If you want to use local data, you can upload it to an Azure Machine Learning registered datastore and use the REST API for cloud data. > - If you're using an existing V1 FileDataset for a batch endpoint, we recommend migrating it to a V2 data asset and referring to it directly when invoking the batch endpoint. Currently, only data assets of type `uri_folder` or `uri_file` are supported. Batch endpoints created with GA CLIv2 (2.4.0 and newer) or GA REST API (2022-05-01 and newer) will not support V1 Dataset. > - You can also extract the URI or path on the datastore from a V1 FileDataset by using the `az ml dataset show` command with the `--query` parameter and use that information to invoke the batch endpoint.
-> - While Batch endpoints created with earlier APIs will continue to support V1 FileDataset, we will be adding further V2 data assets support with the latest API versions for even more usability and flexibility. For more information on V2 data assets, see [Work with data using SDK v2 (preview)](how-to-use-data.md). For more information on the new V2 experience, see [What is v2](concept-v2.md).
+> - While Batch endpoints created with earlier APIs will continue to support V1 FileDataset, we will be adding further V2 data assets support with the latest API versions for even more usability and flexibility. For more information on V2 data assets, see [Work with data using SDK v2 (preview)](how-to-read-write-data-v2.md). For more information on the new V2 experience, see [What is v2](concept-v2.md).
#### Configure the output location and overwrite settings
machine-learning How To Generate Automl Training Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-generate-automl-training-code.md
With the generated model's training code you can,
* **Track/version/audit** trained models. Store versioned code to track what specific training code is used with the model that's to be deployed to production. * **Customize** the training code by changing hyperparameters or applying your ML and algorithms skills/experience, and retrain a new model with your customized code.
-You can generate the code for automated ML experiments with task types classification, regression, and time-series forecasting.
-
-> [!WARNING]
-> Computer vision models and natural language processing based models in AutoML do not currently support model's training code generation.
-
-The following diagram illustrates that you can enable code generation for any AutoML created model from the Azure Machine Learning studio UI or with the Azure Machine Learning SDK. First select a model. The model you selected will be highlighted, then Azure Machine Learning copies the code files used to create the model, and displays them into your notebooks shared folder. From here, you can view and customize the code as needed.
+The following diagram illustrates that you can generate the code for automated ML experiments with all task types. First, select a model. The model you select is highlighted, and then Azure Machine Learning copies the code files used to create the model and displays them in your notebooks shared folder. From here, you can view and customize the code as needed.
:::image type="content" source="media/how-to-generate-automl-training-code/code-generation-demonstration.png" alt-text="Screenshot showing models tab, as well as having a model selected, as explained in the above text.":::
machine-learning How To Manage Workspace Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-manage-workspace-cli.md
Previously updated : 01/05/2022 Last updated : 08/12/2022 -+ # Manage Azure Machine Learning workspaces using Azure CLI
-In this article, you learn how to create and manage Azure Machine Learning workspaces using the Azure CLI. The Azure CLI provides commands for managing Azure resources and is designed to get you working quickly with Azure, with an emphasis on automation. The machine learning extension to the CLI provides commands for working with Azure Machine Learning resources.
-> [!NOTE]
-> Examples in this article refer to both CLI v1 and CLI v2 versions. If no version is specified for a command, it will work with either the v1 or CLI v2.
+> [!div class="op_single_selector" title1="Select the version of Azure Machine Learning SDK or CLI extension you are using:"]
+> * [v1](v1/how-to-manage-workspace-cli.md)
+> * [v2 (current version)](how-to-manage-workspace-cli.md)
+
+In this article, you learn how to create and manage Azure Machine Learning workspaces using the Azure CLI. The Azure CLI provides commands for managing Azure resources and is designed to get you working quickly with Azure, with an emphasis on automation. The machine learning extension to the CLI provides commands for working with Azure Machine Learning resources.
## Prerequisites
-* An **Azure subscription**. If you do not have one, try the [free or paid version of Azure Machine Learning](https://azure.microsoft.com/free/).
+* An **Azure subscription**. If you don't have one, try the [free or paid version of Azure Machine Learning](https://azure.microsoft.com/free/).
* To use the CLI commands in this document from your **local environment**, you need the [Azure CLI](/cli/azure/install-azure-cli).
In this article, you learn how to create and manage Azure Machine Learning works
Some of the Azure CLI commands communicate with Azure Resource Manager over the internet. This communication is secured using HTTPS/TLS 1.2.
-# [CLI v1](#tab/vnetpleconfigurationsv1cli)
-
-With the Azure Machine Learning CLI extension v1 (`azure-cli-ml`), only some of the commands communicate with the Azure Resource Manager. Specifically, commands that create, update, delete, list, or show Azure resources. Operations such as submitting a training job communicate directly with the Azure Machine Learning workspace. **If your workspace is [secured with a private endpoint](how-to-configure-private-link.md), that is enough to secure commands provided by the `azure-cli-ml` extension**.
+With the Azure Machine Learning CLI extension v2 ('ml'), all of the commands communicate with the Azure Resource Manager. This includes operational data such as YAML parameters and metadata. If your Azure Machine Learning workspace is public (that is, not behind a virtual network), then there's no extra configuration required. Communications are secured using HTTPS/TLS 1.2.
-# [CLI v2](#tab/vnetpleconfigurationsv2cli)
+If your Azure Machine Learning workspace uses a private endpoint and virtual network and you're using CLI v2, choose one of the following configurations to use:
-With the Azure Machine Learning CLI extension v2 ('ml'), all of the commands communicate with the Azure Resource Manager. This includes operational data such as YAML parameters and metadata. If your Azure Machine Learning workspace is public (that is, not behind a virtual network), then there is no additional configuration required. Communications are secured using HTTPS/TLS 1.2.
-
-If your Azure Machine Learning workspace uses a private endpoint and virtual network and you are using CLI v2, choose one of the following configurations to use:
-
-* If you are __OK__ with the CLI v2 communication over the public internet, use the following `--public-network-access` parameter for the `az ml workspace update` command to enable public network access. For example, the following command updates a workspace for public network access:
+* If you're __OK__ with the CLI v2 communication over the public internet, use the following `--public-network-access` parameter for the `az ml workspace update` command to enable public network access. For example, the following command updates a workspace for public network access:
```azurecli az ml workspace update --name myworkspace --public-network-access enabled
If your Azure Machine Learning workspace uses a private endpoint and virtual net
For more information on CLI v2 communication, see [Install and set up the CLI](how-to-configure-cli.md#secure-communications). -- ## Connect the CLI to your Azure subscription > [!IMPORTANT]
To create a new workspace where the __services are automatically created__, use
az ml workspace create -w <workspace-name> -g <resource-group-name> ```
-# [Bring existing resources (CLI v1)](#tab/bringexistingresources1)
--
-To create a workspace that uses existing resources, you must provide the resource ID for each resource. You can get this ID either via the 'properties' tab on each resource via the Azure portal, or by running the following commands using the Azure CLI.
+# [Bring existing resources](#tab/bringexistingresources)
- * **Azure Storage Account**:
- `az storage account show --name <storage-account-name> --query "id"`
- * **Azure Application Insights**:
- `az monitor app-insights component show --app <application-insight-name> -g <resource-group-name> --query "id"`
- * **Azure Key Vault**:
- `az keyvault show --name <key-vault-name> --query "ID"`
- * **Azure Container Registry**:
- `az acr show --name <acr-name> -g <resource-group-name> --query "id"`
-
- The returned resource ID has the following format: `"/subscriptions/<service-GUID>/resourceGroups/<resource-group-name>/providers/<provider>/<subresource>/<resource-name>"`.
-
-Once you have the IDs for the resource(s) that you want to use with the workspace, use the base `az workspace create -w <workspace-name> -g <resource-group-name>` command and add the parameter(s) and ID(s) for the existing resources. For example, the following command creates a workspace that uses an existing container registry:
-
-```azurecli-interactive
-az ml workspace create -w <workspace-name>
- -g <resource-group-name>
- --container-registry "/subscriptions/<service-GUID>/resourceGroups/<resource-group-name>/providers/Microsoft.ContainerRegistry/registries/<acr-name>"
-```
-
-# [Bring existing resources (CLI v2)](#tab/bringexistingresources2)
--
-To create a new workspace while bringing existing associated resources using the CLI, you will first have to define how your workspace should be configured in a configuration file.
+To create a new workspace while bringing existing associated resources using the CLI, you'll first have to define how your workspace should be configured in a configuration file.
:::code language="YAML" source="~/azureml-examples-main/cli/resources/workspace/with-existing-resources.yml":::
If attaching existing resources, you must provide the ID for the resources. You
* **Azure Container Registry**: `az acr show --name <acr-name> -g <resource-group-name> --query "id"`
-The Resource ID value looks similar to the following: `"/subscriptions/<service-GUID>/resourceGroups/<resource-group-name>/providers/<provider>/<subresource>/<resource-name>"`.
+The Resource ID value looks similar to the following text: `"/subscriptions/<service-GUID>/resourceGroups/<resource-group-name>/providers/<provider>/<subresource>/<resource-name>"`.
The output of the workspace creation command is similar to the following JSON. Y
Dependent on your use case and organizational requirements, you can choose to configure Azure Machine Learning using private network connectivity. You can use the Azure CLI to deploy a workspace and a Private link endpoint for the workspace resource. For more information on using a private endpoint and virtual network (VNet) with your workspace, see [Virtual network isolation and privacy overview](how-to-network-security-overview.md). For complex resource configurations, also refer to template based deployment options including [Azure Resource Manager](how-to-create-workspace-template.md).
-# [CLI v1](#tab/vnetpleconfigurationsv1cli)
--
-If you want to restrict access to your workspace to a virtual network, you can use the following parameters as part of the `az ml workspace create` command or use the `az ml workspace private-endpoint` commands.
-
-```azurecli-interactive
-az ml workspace create -w <workspace-name>
- -g <resource-group-name>
- --pe-name "<pe name>"
- --pe-auto-approval "<pe-autoapproval>"
- --pe-resource-group "<pe name>"
- --pe-vnet-name "<pe name>"
- --pe-subnet-name "<pe name>"
-```
-
-* `--pe-name`: The name of the private endpoint that is created.
-* `--pe-auto-approval`: Whether private endpoint connections to the workspace should be automatically approved.
-* `--pe-resource-group`: The resource group to create the private endpoint in. Must be the same group that contains the virtual network.
-* `--pe-vnet-name`: The existing virtual network to create the private endpoint in.
-* `--pe-subnet-name`: The name of the subnet to create the private endpoint in. The default value is `default`.
-
-For more details on how to use these commands, see the [CLI reference pages](/cli/azure/ml(v1)/workspace).
-
-# [CLI v2](#tab/vnetpleconfigurationsv2cli)
--
-When using private link, your workspace cannot use Azure Container Registry to build docker images. Hence, you must set the image_build_compute property to a CPU compute cluster name to use for Docker image environment building. You can also specify whether the private link workspace should be accessible over the internet using the public_network_access property.
+When using private link, your workspace can't use Azure Container Registry to build Docker images. Hence, you must set the `image_build_compute` property to a CPU compute cluster name to use for Docker image environment building. You can also specify whether the private link workspace should be accessible over the internet using the `public_network_access` property.
:::code language="YAML" source="~/azureml-examples-main/cli/resources/workspace/privatelink.yml":::
az network private-endpoint dns-zone-group add \
--zone-name 'privatelink.notebooks.azure.net' ``` -- ### Customer-managed key and high business impact workspace
-By default, metadata for the workspace is stored in an Azure Cosmos DB instance that Microsoft maintains. This data is encrypted using Microsoft-managed keys. Instead of using the Microsoft-managed key, you can also provide your own key. Doing so creates an additional set of resources in your Azure subscription to store your data.
+By default, metadata for the workspace is stored in an Azure Cosmos DB instance that Microsoft maintains. This data is encrypted using Microsoft-managed keys. Instead of using the Microsoft-managed key, you can also provide your own key. Doing so creates an extra set of resources in your Azure subscription to store your data.
To learn more about the resources that are created when you bring your own key for encryption, see [Data encryption with Azure Machine Learning](./concept-data-encryption.md#azure-cosmos-db).
-Below CLI commands provide examples for creating a workspace that uses customer-managed keys for encryption using the CLI v1 and CLI v2 versions.
-
-# [CLI v1](#tab/vnetpleconfigurationsv1cli)
--
-Use the `--cmk-keyvault` parameter to specify the Azure Key Vault that contains the key, and `--resource-cmk-uri` to specify the resource ID and uri of the key within the vault.
-
-To [limit the data that Microsoft collects](./concept-data-encryption.md#encryption-at-rest) on your workspace, you can additionally specify the `--hbi-workspace` parameter.
-
-```azurecli-interactive
-az ml workspace create -w <workspace-name>
- -g <resource-group-name>
- --cmk-keyvault "<cmk keyvault name>"
- --resource-cmk-uri "<resource cmk uri>"
- --hbi-workspace
-```
-
-# [CLI v2](#tab/vnetpleconfigurationsv2cli)
-- Use the `customer_managed_key` parameter and its `key_vault` and `key_uri` properties to specify the resource ID and URI of the key within the vault. To [limit the data that Microsoft collects](./concept-data-encryption.md#encryption-at-rest) on your workspace, you can additionally specify the `hbi_workspace` property.
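A minimal sketch of what such a configuration file (for example, `cmk.yml`) might contain is shown below. The workspace name, location, resource IDs, and schema URL are placeholders and assumptions; check the workspace YAML schema reference for the authoritative field list.

```yaml
# Illustrative cmk.yml sketch; all names, IDs, and the schema URL are placeholders/assumptions.
$schema: https://azuremlschemas.azureedge.net/latest/workspace.schema.json
name: mlw-cmk-example
location: eastus
display_name: Workspace encrypted with a customer-managed key
customer_managed_key:
  key_vault: /subscriptions/<SUBSCRIPTION_ID>/resourceGroups/<RESOURCE_GROUP>/providers/Microsoft.KeyVault/vaults/<KEY_VAULT_NAME>
  key_uri: https://<KEY_VAULT_NAME>.vault.azure.net/keys/<KEY_NAME>/<KEY_VERSION>
hbi_workspace: true   # optional; limits the data Microsoft collects on the workspace
```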
Then, you can reference this configuration file as part of the workspace creatio
```azurecli-interactive az ml workspace create -g <resource-group-name> --file cmk.yml ```- > [!NOTE] > Authorize the __Machine Learning App__ (in Identity and Access Management) with contributor permissions on your subscription to manage the data encryption additional resources.
For more information on customer-managed keys and high business impact workspace
To get information about a workspace, use the following command:
-# [CLI v1](#tab/workspaceupdatev1)
--
-```azurecli-interactive
-az ml workspace show -w <workspace-name> -g <resource-group-name>
-```
-
-# [CLI v2](#tab/workspaceupdatev2)
-- ```azurecli-interactive az ml workspace show -n <workspace-name> -g <resource-group-name> ``` -- For more information, see the [az ml workspace show](/cli/azure/ml/workspace#az-ml-workspace-show) documentation. ### Update a workspace To update a workspace, use the following command:
-# [CLI v1](#tab/workspaceupdatev1)
--
-```azurecli-interactive
-az ml workspace update -w <workspace-name> -g <resource-group-name>
-```
-
-# [CLI v2](#tab/workspaceupdatev2)
-- ```azurecli-interactive az ml workspace update -n <workspace-name> -g <resource-group-name> ``` --- For more information, see the [az ml workspace update](/cli/azure/ml/workspace#az-ml-workspace-update) documentation. ### Sync keys for dependent resources If you change access keys for one of the resources used by your workspace, it takes around an hour for the workspace to synchronize to the new key. To force the workspace to sync the new keys immediately, use the following command:
-# [CLI v1](#tab/workspacesynckeysv1)
--
-```azurecli-interactive
-az ml workspace sync-keys -w <workspace-name> -g <resource-group-name>
-```
-
-# [CLI v2](#tab/workspacesynckeysv2)
-- ```azurecli-interactive az ml workspace sync-keys -n <workspace-name> -g <resource-group-name> ``` -- For more information on changing keys, see [Regenerate storage access keys](how-to-change-storage-access-key.md). For more information on the sync-keys command, see [az ml workspace sync-keys](/cli/azure/ml/workspace#az-ml-workspace-sync-keys).
For more information on the sync-keys command, see [az ml workspace sync-keys](/
[!INCLUDE [machine-learning-delete-workspace](../../includes/machine-learning-delete-workspace.md)]
-To delete a workspace after it is no longer needed, use the following command:
-
-# [CLI v1](#tab/workspacedeletev1)
---
-```azurecli-interactive
-az ml workspace delete -w <workspace-name> -g <resource-group-name>
-```
-
-# [CLI v2](#tab/workspacedeletev2)
-
+To delete a workspace after it's no longer needed, use the following command:
```azurecli-interactive az ml workspace delete -n <workspace-name> -g <resource-group-name> ``` --- > [!IMPORTANT] > Deleting a workspace does not delete the application insight, storage account, key vault, or container registry used by the workspace.
az group delete -g <resource-group-name>
For more information, see the [az ml workspace delete](/cli/azure/ml/workspace#az-ml-workspace-delete) documentation.
-If you accidentally deleted your workspace, are still able to retrieve your notebooks. Please refer to [this documentation](./how-to-high-availability-machine-learning.md#workspace-deletion).
+If you accidentally delete your workspace, you can still retrieve your notebooks. For more information, see the [workspace deletion](./how-to-high-availability-machine-learning.md#workspace-deletion) section of the disaster recovery article.
## Troubleshooting
machine-learning How To Monitor Datasets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-monitor-datasets.md
Learn how to monitor data drift and set alerts when drift is high.
With Azure Machine Learning dataset monitors (preview), you can: * **Analyze drift in your data** to understand how it changes over time.
-* **Monitor model data** for differences between training and serving datasets. Start by [collecting model data from deployed models](how-to-enable-data-collection.md).
+* **Monitor model data** for differences between training and serving datasets. Start by [collecting model data from deployed models](v1/how-to-enable-data-collection.md).
* **Monitor new data** for differences between any baseline and target dataset. * **Profile features in data** to track how statistical properties change over time. * **Set up alerts on data drift** for early warnings to potential issues.
Limitations and known issues for data drift monitors:
## Next steps * Head to the [Azure Machine Learning studio](https://ml.azure.com) or the [Python notebook](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/work-with-data/datadrift-tutorial/datadrift-tutorial.ipynb) to set up a dataset monitor.
-* See how to set up data drift on [models deployed to Azure Kubernetes Service](./how-to-enable-data-collection.md).
-* Set up dataset drift monitors with [event grid](how-to-use-event-grid.md).
+* See how to set up data drift on [models deployed to Azure Kubernetes Service](v1/how-to-enable-data-collection.md).
+* Set up dataset drift monitors with [Azure Event Grid](how-to-use-event-grid.md).
machine-learning How To Prebuilt Docker Images Inference Python Extensibility https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-prebuilt-docker-images-inference-python-extensibility.md
Previously updated : 10/21/2021 Last updated : 08/15/2022
Here are some things that may cause this problem:
## Best Practices
-* Refer to the [Load registered model](how-to-deploy-advanced-entry-script.md#load-registered-models) docs. When you register a model directory, don't include your scoring script, your mounted dependencies directory, or `requirements.txt` within that directory.
+* Refer to the [Load registered model](./v1/how-to-deploy-advanced-entry-script.md#load-registered-models) docs. When you register a model directory, don't include your scoring script, your mounted dependencies directory, or `requirements.txt` within that directory.
* For more information on how to load a registered or local model, see [Where and how to deploy](how-to-deploy-and-where.md?tabs=azcli#define-a-dummy-entry-script).
Here are some things that may cause this problem:
### 2021-07-26 * `AZUREML_EXTRA_REQUIREMENTS_TXT` and `AZUREML_EXTRA_PYTHON_LIB_PATH` are now always relative to the directory of the score script.
-For example, if the both the requirements.txt and score script is in **my_folder**, then `AZUREML_EXTRA_REQUIREMENTS_TXT` will need to be set to requirements.txt. No longer will `AZUREML_EXTRA_REQUIREMENTS_TXT` be set to **my_folder/requirements.txt**.
+For example, if both the requirements.txt file and the score script are in **my_folder**, then `AZUREML_EXTRA_REQUIREMENTS_TXT` will need to be set to requirements.txt. It will no longer be set to **my_folder/requirements.txt**.
## Next steps
-To learn more about deploying a model, see [How to deploy a model](how-to-deploy-and-where.md).
+To learn more about deploying a model, see [How to deploy a model](./v1/how-to-deploy-and-where.md).
To learn how to troubleshoot prebuilt docker image deployments, see [how to troubleshoot prebuilt Docker image deployments](how-to-troubleshoot-prebuilt-docker-image-inference.md).
machine-learning How To Schedule Pipeline Job https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-schedule-pipeline-job.md
+
+ Title: Schedule Azure Machine Learning pipeline jobs (preview)
+
+description: Learn how to schedule pipeline jobs that allow you to automate routine, time-consuming tasks such as data processing, training, and monitoring.
+++++ Last updated : 08/15/2022+++++
+# Schedule machine learning pipeline jobs (preview)
++
+> [!IMPORTANT]
+> SDK v2 is currently in public preview.
+> The preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
+> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+In this article, you'll learn how to programmatically schedule a pipeline to run on Azure. You can create a schedule based on elapsed time. Time-based schedules can be used to take care of routine tasks, such as retraining models or running batch predictions regularly to keep them up to date. After learning how to create schedules, you'll learn how to retrieve, update, and deactivate them via CLI and SDK.
+
+## Prerequisites
+
+- You must have an Azure subscription to use Azure Machine Learning. If you don't have an Azure subscription, create a free account before you begin. Try the [free or paid version of Azure Machine Learning](https://azure.microsoft.com/free/) today.
+
+# [Azure CLI](#tab/cliv2)
+
+- Install the Azure CLI and the `ml` extension. Follow the installation steps in [Install, set up, and use the CLI (v2)](how-to-configure-cli.md).
+
+- Create an Azure Machine Learning workspace if you don't have one. For workspace creation, see [Install, set up, and use the CLI (v2)](how-to-configure-cli.md).
+
+# [Python](#tab/python)
+
+- Create an Azure Machine Learning workspace if you don't have one.
+- The [Azure Machine Learning SDK v2 for Python](/python/api/overview/azure/ml/installv2).
+++
+## Schedule a pipeline job
+
+To run a pipeline job on a recurring basis, you'll need to create a schedule. A `Schedule` associates a job with a trigger. The trigger can either be `cron`, which uses a cron expression to describe the interval between runs, or `recurrence`, which specifies the frequency at which to trigger the job. In each case, you need to define a pipeline job first. It can be an existing pipeline job or a pipeline job defined inline; refer to [Create a pipeline job in CLI](how-to-create-component-pipelines-cli.md) and [Create a pipeline job in SDK](how-to-create-component-pipeline-python.md).
+
+You can schedule a local pipeline job YAML file or an existing pipeline job in the workspace.
+
+## Create a schedule
+
+### Create a time-based schedule with recurrence pattern
+
+# [Azure CLI](#tab/cliv2)
+++
+`trigger` contains the following properties:
+
+- **(Required)** `type` specifies that the schedule type is `recurrence`. It can also be `cron`; see details in the next section.
+
+# [Python](#tab/python)
++
+[!notebook-python[] (~/azureml-examples-main/sdk/schedules/job-schedule.ipynb?name=create_schedule_recurrence)]
+
+`RecurrenceTrigger` contains the following properties:
+
+- **(Required)** To provide a better coding experience, we use the `RecurrenceTrigger` class for the recurrence schedule.
+++
+- **(Required)** `frequency` specifies the unit of time that describes how often the schedule fires. Can be `minute`, `hour`, `day`, `week`, `month`.
+
+- **(Required)** `interval` specifies how often the schedule fires based on the frequency, which is the number of time units to wait until the schedule fires again.
+
+- (Optional) `schedule` defines the recurrence pattern, containing `hours`, `minutes`, and `weekdays`.
+ - When `frequency` is `day`, pattern can specify `hours` and `minutes`.
+ - When `frequency` is `week` and `month`, pattern can specify `hours`, `minutes` and `weekdays`.
+ - `hours` should be an integer or a list, from 0 to 23.
+ - `minutes` should be an integer or a list, from 0 to 59.
+ - `weekdays` can be a string or list from `monday` to `sunday`.
+ - If `schedule` is omitted, the job(s) will be triggered according to the logic of `start_time`, `frequency` and `interval`.
+
+- (Optional) `start_time` describes the start date and time with timezone. If `start_time` is omitted, start_time will be equal to the job created time. If the start time is in the past, the first job will run at the next calculated run time.
+
+- (Optional) `end_time` describes the end date and time with timezone. If `end_time` is omitted, the schedule will continue to trigger jobs until the schedule is manually disabled.
+
+- (Optional) `time_zone` specifies the time zone of the recurrence. If omitted, the default is UTC. To learn more about timezone values, see [appendix for timezone values](reference-yaml-schedule.md#appendix).
+
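+Putting these properties together, a minimal recurrence schedule YAML might look like the following sketch. The schema URL, schedule name, and the `create_job` path to a local pipeline job YAML are illustrative assumptions; see the linked schedule YAML schema reference for the authoritative field list.
+
+```yaml
+# Illustrative recurrence schedule sketch; names, paths, and the schema URL are assumptions.
+$schema: https://azuremlschemas.azureedge.net/latest/schedule.schema.json
+name: daily_pipeline_schedule
+display_name: Daily pipeline schedule
+description: Runs the pipeline job twice a day
+
+trigger:
+  type: recurrence
+  frequency: day      # fire based on days
+  interval: 1         # every 1 day
+  schedule:
+    hours: [6, 18]    # at 06:15 and 18:15
+    minutes: [15]
+  time_zone: "UTC"    # optional; defaults to UTC
+
+create_job: ./pipeline-job.yml   # path to an existing pipeline job definition (assumed)
+```
+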
+### Create a time-based schedule with cron expression
+
+# [Azure CLI](#tab/cliv2)
+++
+The `trigger` section defines the schedule details and contains the following properties:
+
+- **(Required)** `type` specifies that the schedule type is `cron`.
+
+# [Python](#tab/python)
++
+[!notebook-python[] (~/azureml-examples-main/sdk/schedules/job-schedule.ipynb?name=create_schedule_cron)]
+
+The `CronTrigger` section defines the schedule details and contains the following properties:
+
+- **(Required)** To provide a better coding experience, we use the `CronTrigger` class for the cron schedule.
+++
+- **(Required)** `expression` uses a standard crontab expression to express a recurring schedule. A single expression is composed of five space-delimited fields:
+
+ `MINUTES HOURS DAYS MONTHS DAYS-OF-WEEK`
+
+ - A single wildcard (`*`), which covers all values for the field. So a `*` in days means all days of a month (which varies with month and year).
+ - The `expression: "15 16 * * 1"` in the sample above means 16:15 (4:15 PM) every Monday.
+ - The table below lists the valid values for each field:
+
+ | Field | Range | Comment |
+ |-|-|--|
+ | `MINUTES` | 0-59 | - |
+ | `HOURS` | 0-23 | - |
+ | `DAYS` | - | Not supported. The value will be ignored and treated as `*`. |
+ | `MONTHS` | - | Not supported. The value will be ignored and treated as `*`. |
+ | `DAYS-OF-WEEK` | 0-6 | Zero (0) means Sunday. Names of days also accepted. |
+
+ - To learn more about how to use crontab expressions, see [Crontab Expression wiki on GitHub](https://github.com/atifaziz/NCrontab/wiki/Crontab-Expression).
+
+ > [!IMPORTANT]
+ > `DAYS` and `MONTHS` are not supported. If you pass a value, it will be ignored and treated as `*`.
+
+- (Optional) `start_time` specifies the start date and time with timezone of the schedule. For example, `start_time: "2022-05-10T10:15:00-04:00"` means the schedule starts at 10:15:00 AM on 2022-05-10 in the UTC-4 timezone. If `start_time` is omitted, the `start_time` will be equal to the schedule creation time. If the start time is in the past, the first job will run at the next calculated run time.
+
+- (Optional) `end_time` describes the end date and time with timezone. If `end_time` is omitted, the schedule will continue to trigger jobs until the schedule is manually disabled.
+
+- (Optional) `time_zone` specifies the time zone of the expression. If omitted, the default is UTC. See [appendix for timezone values](reference-yaml-schedule.md#appendix).
+
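+For reference, a cron-based schedule YAML that follows the properties above might look like the following sketch. The schema URL, schedule name, and the `create_job` path are illustrative assumptions.
+
+```yaml
+# Illustrative cron schedule sketch; names, paths, and the schema URL are assumptions.
+$schema: https://azuremlschemas.azureedge.net/latest/schedule.schema.json
+name: weekly_retrain_schedule
+display_name: Weekly retrain schedule
+description: Runs the pipeline job at 16:15 every Monday
+
+trigger:
+  type: cron
+  expression: "15 16 * * 1"   # MINUTES HOURS DAYS MONTHS DAYS-OF-WEEK
+  time_zone: "UTC"            # optional; defaults to UTC
+
+create_job: ./pipeline-job.yml   # path to an existing pipeline job definition (assumed)
+```
+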
+### Change runtime settings when defining schedule
+
+When defining a schedule using an existing job, you can change the runtime settings of the job. Using this approach, you can define multiple schedules that use the same job with different inputs, as shown in the sketch after the properties table below.
+
+# [Azure CLI](#tab/cliv2)
+++
+# [Python](#tab/python)
++
+[!notebook-python[] (~/azureml-examples-main/sdk/schedules/job-schedule.ipynb?name=change_run_settings)]
+++
+The following properties can be changed when defining a schedule:
+
+| Property | Description |
+| | |
+|settings| A dictionary of settings to be used when running the pipeline job. |
+|inputs| A dictionary of inputs to be used when running the pipeline job. |
+|outputs| A dictionary of outputs to be used when running the pipeline job. |
+|experiment_name|Experiment name of triggered job.|
+
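+As a sketch of what these overrides can look like in YAML, the following example references an existing pipeline job and overrides its inputs and settings. The nesting under `create_job`, the `azureml:` job reference syntax, and all names are assumptions; check the schedule YAML schema reference for the supported layout.
+
+```yaml
+# Illustrative sketch only; the field nesting, job reference syntax, and names are assumptions.
+$schema: https://azuremlschemas.azureedge.net/latest/schedule.schema.json
+name: schedule_with_overrides
+trigger:
+  type: cron
+  expression: "0 6 * * *"   # daily at 06:00
+
+create_job:
+  job: azureml:my_pipeline_job@latest          # reference an existing pipeline job (assumed syntax)
+  experiment_name: scheduled-runs
+  settings:
+    continue_on_step_failure: false
+  inputs:
+    input_string: ${{create_context.trigger_time}}   # string-type input resolved at trigger time
+```
+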
+### Expressions supported in schedule
+
+When defining a schedule, the following expressions are supported. They're resolved to real values during job runtime.
+
+| Expression | Description |Supported properties|
+|-|-|-|
+|`${{create_context.trigger_time}}`|The time when the schedule is triggered.|String type inputs of pipeline job|
+|`${{name}}`|The name of job.|outputs.path of pipeline job|
+
+## Manage schedule
+
+### Create schedule
+
+# [Azure CLI](#tab/cliv2)
++
+After you create the schedule YAML, you can use the following command to create the schedule via the CLI.
++
+# [Python](#tab/python)
++
+[!notebook-python[] (~/azureml-examples-main/sdk/schedules/job-schedule.ipynb?name=create_schedule)]
+++
+### Check schedule detail
+
+# [Azure CLI](#tab/cliv2)
+++
+# [Python](#tab/python)
++
+[!notebook-python[] (~/azureml-examples-main/sdk/schedules/job-schedule.ipynb?name=show_schedule)]
+++
+### List schedules in a workspace
+
+# [Azure CLI](#tab/cliv2)
+++
+# [Python](#tab/python)
++
+[!notebook-python[] (~/azureml-examples-main/sdk/schedules/job-schedule.ipynb?name=list_schedule)]
+++
+### Update a schedule
+
+# [Azure CLI](#tab/cliv2)
+++
+# [Python](#tab/python)
++
+[!notebook-python[] (~/azureml-examples-main/sdk/schedules/job-schedule.ipynb?name=create_schedule)]
+++
+### Disable a schedule
+
+# [Azure CLI](#tab/cliv2)
+++
+# [Python](#tab/python)
+
+[!notebook-python[] (~/azureml-examples-main/sdk/schedules/job-schedule.ipynb?name=disable_schedule)]
+++
+### Enable a schedule
+
+# [Azure CLI](#tab/cliv2)
+++
+# [Python](#tab/python)
++
+[!notebook-python[] (~/azureml-examples-main/sdk/schedules/job-schedule.ipynb?name=enable_schedule)]
+++
+## Query triggered jobs from a schedule
+
+All jobs triggered by a schedule have a display name in the form <schedule_name>-YYYYMMDDThhmmssZ. For example, if a schedule named named-schedule is created with a scheduled run every 12 hours starting at 6 AM on Jan 1, 2021, then the display names of the jobs created will be as follows:
+
+- named-schedule-20210101T060000Z
+- named-schedule-20210101T180000Z
+- named-schedule-20210102T060000Z
+- named-schedule-20210102T180000Z, and so on
++
+You can also apply [Azure CLI JMESPath query](/cli/azure/query-azure-cli) to query the jobs triggered by a schedule name.
++++
+## Delete a schedule
+
+> [!IMPORTANT]
+> A schedule must be disabled to be deleted.
+
+# [Azure CLI](#tab/cliv2)
+++
+# [Python](#tab/python)
++
+[!notebook-python[] (~/azureml-examples-main/sdk/schedules/job-schedule.ipynb?name=delete_schedule)]
+++
+## Next steps
+
+* Learn more about the [CLI (v2) schedule YAML schema](./reference-yaml-schedule.md).
+* Learn how to [create pipeline job in CLI v2](how-to-create-component-pipelines-cli.md).
+* Learn how to [create pipeline job in SDK v2](how-to-create-component-pipeline-python.md).
+* Learn more about [CLI (v2) core YAML syntax](reference-yaml-core-syntax.md).
+* Learn more about [Pipelines](concept-ml-pipelines.md).
+* Learn more about [Component](concept-component.md).
++
machine-learning How To Set Up Training Targets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-set-up-training-targets.md
method, or from the Experiment tab view in Azure Machine Learning studio client
* [Tutorial: Train and deploy a model](tutorial-train-deploy-notebook.md) uses a managed compute target to train a model. * See how to train models with specific ML frameworks, such as [Scikit-learn](how-to-train-scikit-learn.md), [TensorFlow](how-to-train-tensorflow.md), and [PyTorch](how-to-train-pytorch.md). * Learn how to [efficiently tune hyperparameters](how-to-tune-hyperparameters.md) to build better models.
-* Once you have a trained model, learn [how and where to deploy models](how-to-deploy-and-where.md).
+* Once you have a trained model, learn [how and where to deploy models](how-to-deploy-managed-online-endpoints.md).
* View the [ScriptRunConfig class](/python/api/azureml-core/azureml.core.scriptrunconfig) SDK reference. * [Use Azure Machine Learning with Azure Virtual Networks](./how-to-network-security-overview.md)
machine-learning How To Setup Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-setup-authentication.md
Learn how to set up authentication to your Azure Machine Learning workspace from
* __Service principal__: You create a service principal account in Azure Active Directory, and use it to authenticate or get a token. A service principal is used when you need an _automated process to authenticate_ to the service without requiring user interaction. For example, a continuous integration and deployment script that trains and tests a model every time the training code changes.
-* __Azure CLI session__: You use an active Azure CLI session to authenticate. The Azure CLI extension for Machine Learning (the `ml` extension or CLI v2) is a command line tool for working with Azure Machine Learning. You can log in to Azure via the Azure CLI on your local workstation, without storing credentials in Python code or prompting the user to authenticate. Similarly, you can reuse the same scripts as part of continuous integration and deployment pipelines, while authenticating the Azure CLI with a service principal identity.
+* __Azure CLI session__: You use an active Azure CLI session to authenticate. The Azure CLI extension for Machine Learning (the `ml` extension or CLI v2) is a command line tool for working with Azure Machine Learning. You can sign in to Azure via the Azure CLI on your local workstation, without storing credentials in Python code or prompting the user to authenticate. Similarly, you can reuse the same scripts as part of continuous integration and deployment pipelines, while authenticating the Azure CLI with a service principal identity.
* __Managed identity__: When using the Azure Machine Learning SDK v2 _on a compute instance_ or _on an Azure Virtual Machine_, you can use a managed identity for Azure. This workflow allows the VM to connect to the workspace using the managed identity, without storing credentials in Python code or prompting the user to authenticate. Azure Machine Learning compute clusters can also be configured to use a managed identity to access the workspace when _training models_.
The service principal can also be used to authenticate to the Azure Machine Lear
For information and samples on authenticating with MSAL, see the following articles: * JavaScript - [How to migrate a JavaScript app from ADAL.js to MSAL.js](../active-directory/develop/msal-compare-msal-js-and-adal-js.md).
-* Node.js - [How to migrate a Node.js app from ADAL to MSAL](../active-directory/develop/msal-node-migration.md).
-* Python - [ADAL to MSAL migration guide for Python](../active-directory/develop/migrate-python-adal-msal.md).
+* Node.js - [How to migrate a Node.js app from Active Directory Authentication Library (ADAL) to MSAL](../active-directory/develop/msal-node-migration.md).
+* Python - [Active Directory Authentication Library (ADAL) to MSAL migration guide for Python](../active-directory/develop/migrate-python-adal-msal.md).
## Use managed identity authentication
can require two-factor authentication, or allow sign in only from managed device
## Next steps * [How to use secrets in training](how-to-use-secrets-in-runs.md).
-* [How to configure authentication for models deployed as a web service](how-to-authenticate-web-service.md).
-* [Consume an Azure Machine Learning model deployed as a web service](how-to-consume-web-service.md).
+* [How to authenticate to online endpoints](how-to-authenticate-online-endpoint.md).
machine-learning How To Train Mlflow Projects https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-train-mlflow-projects.md
The [MLflow with Azure ML notebooks](https://github.com/Azure/MachineLearningNot
## Next steps * [Deploy models with MLflow](how-to-deploy-mlflow-models.md).
-* Monitor your production models for [data drift](./how-to-enable-data-collection.md).
+* Monitor your production models for [data drift](v1/how-to-enable-data-collection.md).
* [Track Azure Databricks runs with MLflow](how-to-use-mlflow-azure-databricks.md). * [Manage your models](concept-model-management-and-deployment.md).
machine-learning How To Troubleshoot Prebuilt Docker Image Inference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-troubleshoot-prebuilt-docker-image-inference.md
Previously updated : 10/21/2021 Last updated : 08/15/2022
Learn how to troubleshoot problems you may see when using prebuilt docker images
## Model deployment failed
-If model deployment fails, you won't see logs in [Azure Machine Learning Studio](https://ml.azure.com/) and `service.get_logs()` will return None.
+If model deployment fails, you won't see logs in [Azure Machine Learning studio](https://ml.azure.com/) and `service.get_logs()` will return None.
If there's a problem in the `init()` function of `score.py`, `service.get_logs()` will return logs for that failure, so you'll need to run the container locally using one of the commands shown below and replace `<MCR-path>` with an image path. For a list of the images and paths, see [Prebuilt Docker images for inference](concept-prebuilt-docker-images-inference.md).
The local inference server allows you to quickly debug your entry script (`score
## For common model deployment issues
-For problems when deploying a model from Azure Machine Learning to Azure Container Instances (ACI) or Azure Kubernetes Service (AKS), see [Troubleshoot model deployment](how-to-troubleshoot-deployment.md).
+For problems when deploying a model from Azure Machine Learning to Azure Container Instances (ACI) or Azure Kubernetes Service (AKS), see [Troubleshoot model deployment](./v1/how-to-troubleshoot-deployment.md).
## init() or run() failing to write a file
GPU base images can't be used for local deployment, unless the local deployment
/var/azureml-app ```
-* If the `ENTRYPOINT` has been changed in the new built image, then the HTTP server and related components needs to be loaded by `runsvdir /var/runit`
+* If the `ENTRYPOINT` has been changed in the new built image, then the HTTP server and related components need to be loaded by `runsvdir /var/runit`
## Next steps
machine-learning How To Use Automated Ml For Ml Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-automated-ml-for-ml-models.md
Now you have an operational web service to generate predictions! You can test th
## Next steps
-* [Learn how to consume a web service](how-to-consume-web-service.md).
+* [Learn how to consume a web service](v1/how-to-consume-web-service.md).
* [Understand automated machine learning results](how-to-understand-automated-ml.md). * [Learn more about automated machine learning](concept-automated-ml.md) and Azure Machine Learning.
machine-learning How To Use Batch Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-batch-endpoint.md
There are several options to specify the data inputs in CLI `invoke`.
> [!NOTE] > - If you are using existing V1 FileDataset for batch endpoint, we recommend migrating them to V2 data assets and refer to them directly when invoking batch endpoints. Currently only data assets of type `uri_folder` or `uri_file` are supported. Batch endpoints created with GA CLIv2 (2.4.0 and newer) or GA REST API (2022-05-01 and newer) will not support V1 Dataset. > - You can also extract the URI or path on datastore extracted from V1 FileDataset by using `az ml dataset show` command with `--query` parameter and use that information for invoke.
-> - While Batch endpoints created with earlier APIs will continue to support V1 FileDataset, we will be adding further V2 data assets support with the latest API versions for even more usability and flexibility. For more information on V2 data assets, see [Work with data using SDK v2 (preview)](how-to-use-data.md). For more information on the new V2 experience, see [What is v2](concept-v2.md).
+> - While Batch endpoints created with earlier APIs will continue to support V1 FileDataset, we will be adding further V2 data assets support with the latest API versions for even more usability and flexibility. For more information on V2 data assets, see [Work with data using SDK v2 (preview)](how-to-read-write-data-v2.md). For more information on the new V2 experience, see [What is v2](concept-v2.md).
#### Configure the output location and overwrite settings
machine-learning How To Use Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-data.md
- Title: Work with data using SDK v2 (preview)-
-description: 'Learn to how work with data using the Python SDK v2 preview for Azure Machine Learning.'
----- Previously updated : 05/10/2022----
-# Work with data using SDK v2 preview
--
-Azure Machine Learning allows you to work with different types of data. In this article, you'll learn about using the Python SDK v2 to work with _URIs_ and _Tables_. URIs reference a location either local to your development environment or in the cloud. Tables are a tabular data abstraction.
-
-For most scenarios, you'll use URIs (`uri_folder` and `uri_file`). A URI references a location in storage that can be easily mapped to the filesystem of a compute node when you run a job. The data is accessed by either mounting or downloading the storage to the node.
-
-When using tables, you'll use `mltable`. It's an abstraction for tabular data that is used for AutoML jobs, parallel jobs, and some advanced scenarios. If you're just starting to use Azure Machine Learning, and aren't using AutoML, we strongly encourage you to begin with URIs.
-
-> [!TIP]
-> If you have dataset assets created using the SDK v1, you can still use those with SDK v2. For more information, see the [Consuming V1 Dataset Assets in V2](#consuming-v1-dataset-assets-in-v2) section.
---
-## Prerequisites
-
-* An Azure subscription - If you don't have an Azure subscription, create a free account before you begin. Try the [free or paid version of Azure Machine Learning](https://azure.microsoft.com/free/) today.
-* An Azure Machine Learning workspace.
-* The Azure Machine Learning SDK v2 for Python
---
-## URIs
-
-The code snippets in this section cover the following scenarios:
-
-* Reading data in a job
-* Reading *and* writing data in a job
-* Registering data as an asset in Azure Machine Learning
-* Reading registered data assets from Azure Machine Learning in a job
-
-These snippets use `uri_file` and `uri_folder`.
--- `uri_file` is a type that refers to a specific file. For example, `'https://<account_name>.blob.core.windows.net/<container_name>/path/file.csv'`.-- `uri_folder` is a type that refers to a specific folder. For example, `'https://<account_name>.blob.core.windows.net/<container_name>/path'`. -
-> [!TIP]
-> We recommend using an argument parser to pass folder information into _data-plane_ code. By data-plane code, we mean your data processing and/or training code that you run in the cloud. The code that runs in your development environment and submits code to the data-plane is _control-plane_ code.
->
-> Data-plane code is typically a Python script, but can be any programming language. Passing the folder as part of job submission allows you to easily adjust the path from training locally using local data, to training in the cloud. For example, the following example uses `argparse` to get a `uri_folder`, which is joined with the file name to form a path:
->
-> ```python
-> # train.py
-> import argparse
-> import os
-> import pandas as pd
->
-> parser = argparse.ArgumentParser()
-> parser.add_argument("--input_folder", type=str)
-> args = parser.parse_args()
->
-> file_name = os.path.join(args.input_folder, "MY_CSV_FILE.csv")
-> df = pd.read_csv(file_name)
-> print(df.head(10))
-> # process data
-> # train a model
-> # etc
-> ```
->
-> If you wanted to pass in just an individual file rather than the entire folder you can use the `uri_file` type.
-
-Below are some common data access patterns that you can use in your *control-plane* code to submit a job to Azure Machine Learning:
-
-### Use data with a training job
-
-Use the tabs below to select where your data is located.
-
-# [Local data](#tab/use-local)
-
-When you pass local data, the data is automatically uploaded to cloud storage as part of the job submission.
-
-```python
-from azure.ai.ml import Input, command
-from azure.ai.ml.entities import Data
-from azure.ai.ml.constants import AssetTypes
-
-my_job_inputs = {
- "input_data": Input(
- path='./sample_data', # change to be your local directory
- type=AssetTypes.URI_FOLDER
- )
-}
-
-job = command(
- code="./src", # local path where the code is stored
- command='python train.py --input_folder ${{inputs.input_data}}',
- inputs=my_job_inputs,
- environment="AzureML-sklearn-0.24-ubuntu18.04-py37-cpu:9",
- compute="cpu-cluster"
-)
-
-#submit the command job
-returned_job = ml_client.create_or_update(job)
-#get a URL for the status of the job
-returned_job.services["Studio"].endpoint
-```
-
-# [ADLS Gen2](#tab/use-adls)
-
-```python
-from azure.ai.ml import Input, command
-from azure.ai.ml.entities import Data, CommandJob
-from azure.ai.ml.constants import AssetTypes
-
-# in this example we
-my_job_inputs = {
- "input_data": Input(
- path='abfss://<file_system>@<account_name>.dfs.core.windows.net/<path>',
- type=AssetTypes.URI_FOLDER
- )
-}
-
-job = command(
- code="./src", # local path where the code is stored
- command='python train.py --input_folder ${{inputs.input_data}}',
- inputs=my_job_inputs,
- environment="AzureML-sklearn-0.24-ubuntu18.04-py37-cpu:9",
- compute="cpu-cluster"
-)
-
-#submit the command job
-returned_job = ml_client.create_or_update(job)
-#get a URL for the status of the job
-returned_job.services["Studio"].endpoint
-```
-
-# [Blob](#tab/use-blob)
-
-```python
-from azure.ai.ml import Input, command
-from azure.ai.ml.entities import Data, CommandJob
-from azure.ai.ml.constants import AssetTypes
-
-# in this example we
-my_job_inputs = {
- "input_data": Input(
- path='https://<account_name>.blob.core.windows.net/<container_name>/path',
- type=AssetTypes.URI_FOLDER
- )
-}
-
-job = command(
- code="./src", # local path where the code is stored
- command='python train.py --input_folder ${{inputs.input_data}}',
- inputs=my_job_inputs,
- environment="AzureML-sklearn-0.24-ubuntu18.04-py37-cpu:9",
- compute="cpu-cluster"
-)
-
-#submit the command job
-returned_job = ml_client.create_or_update(job)
-#get a URL for the status of the job
-returned_job.services["Studio"].endpoint
-```
---
-### Read and write data in a job
-
-Use the tabs below to select where your data is located.
-
-# [Blob](#tab/rw-blob)
-
-```python
-from azure.ai.ml import Input, command
-from azure.ai.ml.entities import Data, CommandJob, JobOutput
-from azure.ai.ml.constants import AssetTypes
-
-my_job_inputs = {
- "input_data": Input(
- path='https://<account_name>.blob.core.windows.net/<container_name>/path',
- type=AssetTypes.URI_FOLDER
- )
-}
-
-my_job_outputs = {
- "output_folder": JobOutput(
- path='https://<account_name>.blob.core.windows.net/<container_name>/path',
- type=AssetTypes.URI_FOLDER
- )
-}
-
-job = command(
- code="./src", #local path where the code is stored
- command='python pre-process.py --input_folder ${{inputs.input_data}} --output_folder ${{outputs.output_folder}}',
- inputs=my_job_inputs,
- outputs=my_job_outputs,
- environment="AzureML-sklearn-0.24-ubuntu18.04-py37-cpu:9",
- compute="cpu-cluster"
-)
-
-#submit the command job
-returned_job = ml_client.create_or_update(job)
-#get a URL for the status of the job
-returned_job.services["Studio"].endpoint
-```
-
-# [ADLS Gen2](#tab/rw-adls)
-
-```python
-from azure.ai.ml import Input, command
-from azure.ai.ml.entities import Data, CommandJob, JobOutput
-from azure.ai.ml.constants import AssetTypes
-
-my_job_inputs = {
- "input_data": Input(
- path='abfss://<file_system>@<account_name>.dfs.core.windows.net/<path>',
- type=AssetTypes.URI_FOLDER
- )
-}
-
-my_job_outputs = {
- "output_folder": JobOutput(
- path='abfss://<file_system>@<account_name>.dfs.core.windows.net/<path>',
- type=AssetTypes.URI_FOLDER
- )
-}
-
-job = command(
- code="./src", #local path where the code is stored
- command='python pre-process.py --input_folder ${{inputs.input_data}} --output_folder ${{outputs.output_folder}}',
- inputs=my_job_inputs,
- outputs=my_job_outputs,
- environment="AzureML-sklearn-0.24-ubuntu18.04-py37-cpu:9",
- compute="cpu-cluster"
-)
-
-#submit the command job
-returned_job = ml_client.create_or_update(job)
-#get a URL for the status of the job
-returned_job.services["Studio"].endpoint
-```
--
-### Register data assets
-
-```python
-from azure.ai.ml.entities import Data
-from azure.ai.ml.constants import AssetTypes
-
-# select one from:
-my_path = 'abfss://<file_system>@<account_name>.dfs.core.windows.net/<path>' # adls gen2
-my_path = 'https://<account_name>.blob.core.windows.net/<container_name>/path' # blob
-
-my_data = Data(
- path=my_path,
- type=AssetTypes.URI_FOLDER,
- description="description here",
- name="a_name",
- version='1'
-)
-
-ml_client.data.create_or_update(my_data)
-```
-
-### Consume registered data assets in job
-
-```python
-from azure.ai.ml import Input, command
-from azure.ai.ml.entities import Data, Input, CommandJob
-from azure.ai.ml.constants import AssetTypes
-
-registered_data_asset = ml_client.data.get(name='titanic', version='1')
-
-my_job_inputs = {
- "input_data": Input(
- type=AssetTypes.URI_FOLDER,
- path=registered_data_asset.id
- )
-}
-
-job = command(
- code="./src",
- command='python read_data_asset.py --input_folder ${{inputs.input_data}}',
- inputs=my_job_inputs,
- environment="AzureML-sklearn-0.24-ubuntu18.04-py37-cpu:9",
- compute="cpu-cluster"
-)
-
-#submit the command job
-returned_job = ml_client.create_or_update(job)
-#get a URL for the status of the job
-returned_job.services["Studio"].endpoint
-```
-
-## Table
-
-An [MLTable](concept-data.md#mltable) is primarily an abstraction over tabular data, but it can also be used for some advanced scenarios involving multiple paths. The following YAML describes an MLTable:
-
-```yaml
-paths:
- - file: ./titanic.csv
-transformations:
- - read_delimited:
- delimiter: ','
- encoding: 'ascii'
- empty_as_string: false
- header: from_first_file
-```
-
-The contents of the MLTable file specify the underlying data location (here a local path) and also the transforms to perform on the underlying data before materializing into a pandas/spark/dask data frame. The important part here's that the MLTable-artifact doesn't have any absolute paths, making it *self-contained*. All the information stored in one folder; regardless of whether that folder is stored on your local drive or in your cloud drive or on a public http server.
-
-To consume the data in a job or interactive session, use `mltable`:
-
-```python
-import mltable
-
-tbl = mltable.load("./sample_data")
-df = tbl.to_pandas_dataframe()
-```
-
-For more information on the YAML file format, see [the MLTable file](how-to-create-register-data-assets.md#the-mltable-file).
-
-<!-- Commenting until notebook is published. For a full example of using an MLTable, see the [Working with MLTable notebook]. -->
-
-## Consuming V1 dataset assets in V2
-
-> [!NOTE]
-> While full backward compatibility is provided, if your intention with your V1 `FileDataset` assets was to have a single path to a file or folder with no loading transforms (sample, take, filter, etc.), then we recommend that you re-create them as a `uri_file`/`uri_folder` using the v2 CLI:
->
-> ```cli
-> az ml data create --file my-data-asset.yaml
-> ```
-
-Registered v1 `FileDataset` and `TabularDataset` data assets can be consumed in an v2 job using `mltable`. To use the v1 assets, add the following definition in the `inputs` section of your job yaml:
-
-```yaml
-inputs:
- my_v1_dataset:
- type: mltable
- path: azureml:myv1ds:1
- mode: eval_mount
-```
-
-The following example shows how to do this using the v2 SDK:
-
-```python
-from azure.ai.ml import Input, command
-from azure.ai.ml.entities import Data, CommandJob
-from azure.ai.ml.constants import AssetTypes
-
-registered_v1_data_asset = ml_client.data.get(name='<ASSET NAME>', version='<VERSION NUMBER>')
-
-my_job_inputs = {
- "input_data": Input(
- type=AssetTypes.MLTABLE,
- path=registered_v1_data_asset.id,
- mode="eval_mount"
- )
-}
-
-job = command(
- code="./src", #local path where the code is stored
- command='python train.py --input_data ${{inputs.input_data}}',
- inputs=my_job_inputs,
- environment="AzureML-sklearn-0.24-ubuntu18.04-py37-cpu:9",
- compute="cpu-cluster"
-)
-
-#submit the command job
-returned_job = ml_client.jobs.create_or_update(job)
-#get a URL for the status of the job
-returned_job.services["Studio"].endpoint
-```
-
-## Next steps
-
-* [Install and set up Python SDK v2 (preview)](https://aka.ms/sdk-v2-install)
-* [Train models with the Python SDK v2 (preview)](how-to-train-sdk.md)
-* [Tutorial: Create production ML pipelines with Python SDK v2 (preview)](tutorial-pipeline-python-sdk.md)
machine-learning Migrate Rebuild Integrate With Client App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/migrate-rebuild-integrate-with-client-app.md
Title: 'Migrate to Azure Machine Learning - Consume pipeline endpoints'
-description: Learn how to integrate pipeline endpoints with client applications in Azure Machine Learning as part of migrating from Machine Learning Studio (Classic).
+description: Learn how to integrate pipeline endpoints with client applications in Azure Machine Learning as part of migrating from Machine Learning Studio (classic).
Last updated 05/31/2022
[!INCLUDE [ML Studio (classic) retirement](../../includes/machine-learning-studio-classic-deprecation.md)]
-In this article, you learn how to integrate client applications with Azure Machine Learning endpoints. For more information on writing application code, see [Consume an Azure Machine Learning endpoint](how-to-consume-web-service.md).
+In this article, you learn how to integrate client applications with Azure Machine Learning endpoints. For more information on writing application code, see [Consume an Azure Machine Learning endpoint](v1/how-to-consume-web-service.md).
This article is part of the ML Studio (classic) to Azure Machine Learning migration series. For more information on migrating to Azure Machine Learning, see [the migration overview article](migrate-overview.md).
You can call your Azure Machine Learning pipeline as a step in an Azure Data Fac
## Next steps
-In this article, you learned how to find schema and sample code for your pipeline endpoints. For more information on consuming endpoints from the client application, see [Consume an Azure Machine Learning endpoint](how-to-consume-web-service.md).
+In this article, you learned how to find schema and sample code for your pipeline endpoints. For more information on consuming endpoints from the client application, see [Consume an Azure Machine Learning endpoint](v1/how-to-consume-web-service.md).
See the rest of the articles in the Azure Machine Learning migration series:
machine-learning Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/policy-reference.md
Title: Built-in policy definitions for Azure Machine Learning description: Lists Azure Policy built-in policy definitions for Azure Machine Learning. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/08/2022 Last updated : 08/16/2022
machine-learning Reference Managed Online Endpoints Vm Sku List https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-managed-online-endpoints-vm-sku-list.md
This table shows the VM SKUs that are supported for Azure Machine Learning manag
> If you use a Windows-based image for your deployment, we recommend using a VM SKU that provides a minimum of 4 cores. | Size | General Purpose | Compute Optimized | Memory Optimized | GPU |
-| | | | | | |
-| V.Small | DS2 v2 | F2s v2 | E2s v3 | NC4as_T4_v3 |
+| | | | | |
+| V.Small | DS1 v2 <br/> DS2 v2 | F2s v2 | E2s v3 | NC4as_T4_v3 |
| Small | DS3 v2 | F4s v2 | E4s v3 | NC6s v2 <br/> NC6s v3 <br/> NC8as_T4_v3 | | Medium | DS4 v2 | F8s v2 | E8s v3 | NC12s v2 <br/> NC12s v3 <br/> NC16as_T4_v3 | | Large | DS5 v2 | F16s v2 | E16s v3 | NC24s v2 <br/> NC24s v3 <br/> NC64as_T4_v3 |
machine-learning Reference Yaml Schedule https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-schedule.md
+
+ Title: 'CLI (v2) schedule YAML schema'
+
+description: Reference documentation for the CLI (v2) schedule YAML schema.
+++++++ Last updated : 08/15/2022+++
+# CLI (v2) schedule YAML schema
++
+The source JSON schema can be found at https://azuremlschemas.azureedge.net/latest/schedule.schema.json.
+++
+## YAML syntax
+
+| Key | Type | Description | Allowed values |
+| | - | -- | -- |
+| `$schema` | string | The YAML schema. | |
+| `name` | string | **Required.** Name of the schedule. | |
+| `version` | string | Version of the schedule. If omitted, Azure ML will autogenerate a version. | |
+| `description` | string | Description of the schedule. | |
+| `tags` | object | Dictionary of tags for the schedule. | |
+| `trigger` | object | The trigger configuration that defines the rule for when to trigger the job. **One of `RecurrenceTrigger` or `CronTrigger` is required.** | |
+| `create_job` | object or string | **Required.** The definition of the job that will be triggered by a schedule. **One of `string` or `JobDefinition` is required.**| |
+
+### Trigger configuration
+
+#### Recurrence trigger
+
+| Key | Type | Description | Allowed values |
+| | - | -- | -- |
+| `type` | string | **Required.** Specifies the schedule type. |recurrence|
+|`frequency`| string | **Required.** Specifies the unit of time that describes how often the schedule fires.|`minute`, `hour`, `day`, `week`, `month`|
+|`interval`| integer | **Required.** Specifies the interval at which the schedule fires.| |
+|`start_time`| string |Describes the start date and time with timezone. If `start_time` is omitted, the first job will run instantly and future jobs will be triggered based on the schedule; that is, `start_time` will be equal to the job creation time. If the start time is in the past, the first job will run at the next calculated run time.| |
+|`end_time`| string |Describes the end date and time with timezone. If `end_time` is omitted, the schedule will continue to run until it's explicitly disabled.| |
+|`timezone`| string |Specifies the time zone of the recurrence. If omitted, UTC is used by default. |See [appendix for timezone values](#timezone)|
+|`pattern`|object|Specifies the pattern of the recurrence. If pattern is omitted, the job(s) will be triggered according to the logic of start_time, frequency and interval.| |
+
+#### Recurrence schedule
+
+Recurrence schedule defines the recurrence pattern, containing `hours`, `minutes`, and `week_days`.
+
+- When frequency is `day`, pattern can specify `hours` and `minutes`.
+- When frequency is `week` or `month`, pattern can specify `hours`, `minutes`, and `week_days`.
+
+| Key | Type | Allowed values |
+| | - | -- |
+|`hours`|integer or array of integer|`0-23`|
+|`minutes`|integer or array of integer|`0-59`|
+|`week_days`|string or array of string|`monday`, `tuesday`, `wednesday`, `thursday`, `friday`, `saturday`, `sunday`|
++
+#### CronTrigger
+
+| Key | Type | Description | Allowed values |
+| | - | -- | -- |
+| `type` | string | **Required.** Specifies the schedule type. |cron|
+| `expression` | string | **Required.** Specifies the cron expression that defines how to trigger jobs. The expression uses standard crontab syntax to express a recurring schedule. A single expression is composed of five space-delimited fields: `MINUTES HOURS DAYS MONTHS DAYS-OF-WEEK`||
+|`start_time`| string |Describes the start date and time with timezone. If `start_time` is omitted, the first job will run instantly and future jobs will be triggered based on the schedule; that is, `start_time` will be equal to the job creation time. If the start time is in the past, the first job will run at the next calculated run time.| |
+|`end_time`| string |Describes the end date and time with timezone. If `end_time` is omitted, the schedule will continue to run until it's explicitly disabled.| |
+|`timezone`| string |Specifies the time zone of the recurrence. If omitted, UTC is used by default. |See [appendix for timezone values](#timezone)|
+
+### Job definition
+
+You can use `create_job: azureml:<job_name>` directly, or use the following properties to define the job.
+
+| Key | Type | Description | Allowed values |
+| | - | -- | -- |
+|`type`| string | **Required.** Specifies the job type. Only pipeline job is supported.|`pipeline`|
+|`job`| string | **Required.** Defines how to reference the job. It can be `azureml:<job_name>` or a local pipeline job YAML such as `file:hello-pipeline.yml`.| |
+| `experiment_name` | string | Experiment name to organize the job under. Each job's run record will be organized under the corresponding experiment in the studio's "Experiments" tab. If omitted, the schedule name is used as the default value. | |
+|`inputs`| object | Dictionary of inputs to the job. The key is a name for the input within the context of the job and the value is the input value.| |
+|`outputs`|object | Dictionary of output configurations of the job. The key is a name for the output within the context of the job and the value is the output configuration.| |
+| `settings` | object | Default settings for the pipeline job. See [Attributes of the `settings` key](#attributes-of-the-settings-key) for the set of configurable properties. | |
+
+### Attributes of the `settings` key
+
+| Key | Type | Description | Default value |
+| | - | -- | - |
+| `default_datastore` | string | Name of the datastore to use as the default datastore for the pipeline job. This value must be a reference to an existing datastore in the workspace using the `azureml:<datastore-name>` syntax. Any outputs defined in the `outputs` property of the parent pipeline job or child step jobs will be stored in this datastore. If omitted, outputs will be stored in the workspace blob datastore. | |
+| `default_compute` | string | Name of the compute target to use as the default compute for all steps in the pipeline. If compute is defined at the step level, it will override this default compute for that specific step. This value must be a reference to an existing compute in the workspace using the `azureml:<compute-name>` syntax. | |
+| `continue_on_step_failure` | boolean | Whether the execution of steps in the pipeline should continue if one step fails. The default value is `False`, which means that if one step fails, the pipeline execution will be stopped, canceling any running steps. | `False` |
+
+### Job inputs
+
+| Key | Type | Description | Allowed values | Default value |
+| | - | -- | -- | - |
+| `type` | string | The type of job input. Specify `uri_file` for input data that points to a single file source, or `uri_folder` for input data that points to a folder source. | `uri_file`, `uri_folder` | `uri_folder` |
+| `path` | string | The path to the data to use as input. This can be specified in a few ways: <br><br> - A local path to the data source file or folder, for example, `path: ./iris.csv`. The data will get uploaded during job submission. <br><br> - A URI of a cloud path to the file or folder to use as the input. Supported URI types are `azureml`, `https`, `wasbs`, `abfss`, `adl`. For more information on how to use the `azureml://` URI format, see [Core yaml syntax](reference-yaml-core-syntax.md). <br><br> - An existing registered Azure ML data asset to use as the input. To reference a registered data asset, use the `azureml:<data_name>:<data_version>` syntax or `azureml:<data_name>@latest` (to reference the latest version of that data asset), for example, `path: azureml:cifar10-data:1` or `path: azureml:cifar10-data@latest`. | | |
+| `mode` | string | Mode of how the data should be delivered to the compute target. <br><br> For read-only mount (`ro_mount`), the data will be consumed as a mount path. A folder will be mounted as a folder and a file will be mounted as a file. Azure ML will resolve the input to the mount path. <br><br> For `download` mode the data will be downloaded to the compute target. Azure ML will resolve the input to the downloaded path. <br><br> If you only want the URL of the storage location of the data artifact(s) rather than mounting or downloading the data itself, you can use the `direct` mode. This will pass in the URL of the storage location as the job input. In this case, you're fully responsible for handling credentials to access the storage. | `ro_mount`, `download`, `direct` | `ro_mount` |
+
+### Job outputs
+
+| Key | Type | Description | Allowed values | Default value |
+| | - | -- | -- | - |
+| `type` | string | The type of job output. For the default `uri_folder` type, the output will correspond to a folder. | `uri_folder` | `uri_folder` |
+| `path` | string | The path for the job output. This can be specified in a few ways: <br><br> - A URI of a cloud path to the file or folder to use as the output. Supported URI types are `azureml`, `https`, `wasbs`, `abfss`, `adl`. For more information on how to use the `azureml://` URI format, see [Core yaml syntax](reference-yaml-core-syntax.md). <br><br> - An existing registered Azure ML data asset to use as the output. To reference a registered data asset, use the `azureml:<data_name>:<data_version>` syntax or `azureml:<data_name>@latest` (to reference the latest version of that data asset), for example, `path: azureml:cifar10-data:1` or `path: azureml:cifar10-data@latest`. | | |
+| `mode` | string | Mode of how output file(s) will get delivered to the destination storage. For read-write mount mode (`rw_mount`) the output directory will be a mounted directory. For upload mode the file(s) written will get uploaded at the end of the job. | `rw_mount`, `upload` | `rw_mount` |
+
+## Remarks
+
+The `az ml schedule` command can be used for managing Azure Machine Learning schedules.
+
+## Examples
+
+Examples are available in the [examples GitHub repository](https://github.com/Azure/azureml-examples/tree/main/cli/schedules). A couple are shown below.
+
+## YAML: Schedule with recurrence pattern
+++
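
A minimal sketch of a recurrence-based schedule, with placeholder names and times:

```yaml
# Sketch only: placeholder names, times, and job reference.
$schema: https://azuremlschemas.azureedge.net/latest/schedule.schema.json
name: simple_recurrence_job_schedule
description: a daily recurrence schedule sketch
trigger:
  type: recurrence
  frequency: day        # minute, hour, day, week, or month
  interval: 1           # every 1 day
  start_time: "2022-08-16T06:00:00"
  timezone: "Pacific Standard Time"
create_job: azureml:simple_pipeline_job
```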
+## YAML: Schedule with cron expression
+++
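
A minimal sketch of a cron-based schedule, with placeholder names:

```yaml
# Sketch only: placeholder names and job reference.
$schema: https://azuremlschemas.azureedge.net/latest/schedule.schema.json
name: simple_cron_job_schedule
description: a cron schedule sketch that triggers at 06:00 UTC every day
trigger:
  type: cron
  expression: "0 6 * * *"   # MINUTES HOURS DAYS MONTHS DAYS-OF-WEEK
create_job: azureml:simple_pipeline_job
```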
+## Appendix
+
+### Timezone
+
+Schedules currently support the following time zones. The key can be used directly in the Python SDK, while the value can be used in the YAML job. The table is organized by UTC (Coordinated Universal Time) offset.
+
+| UTC | Key | Value |
+|-||--|
+| UTC -12:00 | DATELINE_STANDARD_TIME | "Dateline Standard Time" |
+| UTC -11:00 | UTC_11 | "UTC-11" |
+| UTC -10:00 | ALEUTIAN_STANDARD_TIME | "Aleutian Standard Time" |
+| UTC -10:00 | HAWAIIAN_STANDARD_TIME | "Hawaiian Standard Time" |
+| UTC -09:30 | MARQUESAS_STANDARD_TIME | "Marquesas Standard Time" |
+| UTC -09:00 | ALASKAN_STANDARD_TIME | "Alaskan Standard Time" |
+| UTC -09:00 | UTC_09 | "UTC-09" |
+| UTC -08:00 | PACIFIC_STANDARD_TIME_MEXICO | "Pacific Standard Time (Mexico)" |
+| UTC -08:00 | UTC_08 | "UTC-08" |
+| UTC -08:00 | PACIFIC_STANDARD_TIME | "Pacific Standard Time" |
+| UTC -07:00 | US_MOUNTAIN_STANDARD_TIME | "US Mountain Standard Time" |
+| UTC -07:00 | MOUNTAIN_STANDARD_TIME_MEXICO | "Mountain Standard Time (Mexico)" |
+| UTC -07:00 | MOUNTAIN_STANDARD_TIME | "Mountain Standard Time" |
+| UTC -06:00 | CENTRAL_AMERICA_STANDARD_TIME | "Central America Standard Time" |
+| UTC -06:00 | CENTRAL_STANDARD_TIME | "Central Standard Time" |
+| UTC -06:00 | EASTER_ISLAND_STANDARD_TIME | "Easter Island Standard Time" |
+| UTC -06:00 | CENTRAL_STANDARD_TIME_MEXICO | "Central Standard Time (Mexico)" |
+| UTC -06:00 | CANADA_CENTRAL_STANDARD_TIME | "Canada Central Standard Time" |
+| UTC -05:00 | SA_PACIFIC_STANDARD_TIME | "SA Pacific Standard Time" |
+| UTC -05:00 | EASTERN_STANDARD_TIME_MEXICO | "Eastern Standard Time (Mexico)" |
+| UTC -05:00 | EASTERN_STANDARD_TIME | "Eastern Standard Time" |
+| UTC -05:00 | HAITI_STANDARD_TIME | "Haiti Standard Time" |
+| UTC -05:00 | CUBA_STANDARD_TIME | "Cuba Standard Time" |
+| UTC -05:00 | US_EASTERN_STANDARD_TIME | "US Eastern Standard Time" |
+| UTC -05:00 | TURKS_AND_CAICOS_STANDARD_TIME | "Turks And Caicos Standard Time" |
+| UTC -04:00 | PARAGUAY_STANDARD_TIME | "Paraguay Standard Time" |
+| UTC -04:00 | ATLANTIC_STANDARD_TIME | "Atlantic Standard Time" |
+| UTC -04:00 | VENEZUELA_STANDARD_TIME | "Venezuela Standard Time" |
+| UTC -04:00 | CENTRAL_BRAZILIAN_STANDARD_TIME | "Central Brazilian Standard Time" |
+| UTC -04:00 | SA_WESTERN_STANDARD_TIME | "SA Western Standard Time" |
+| UTC -04:00 | PACIFIC_SA_STANDARD_TIME | "Pacific SA Standard Time" |
+| UTC -03:30 | NEWFOUNDLAND_STANDARD_TIME | "Newfoundland Standard Time" |
+| UTC -03:00 | TOCANTINS_STANDARD_TIME | "Tocantins Standard Time" |
+| UTC -03:00 | E_SOUTH_AMERICAN_STANDARD_TIME | "E. South America Standard Time" |
+| UTC -03:00 | SA_EASTERN_STANDARD_TIME | "SA Eastern Standard Time" |
+| UTC -03:00 | ARGENTINA_STANDARD_TIME | "Argentina Standard Time" |
+| UTC -03:00 | GREENLAND_STANDARD_TIME | "Greenland Standard Time" |
+| UTC -03:00 | MONTEVIDEO_STANDARD_TIME | "Montevideo Standard Time" |
+| UTC -03:00 | SAINT_PIERRE_STANDARD_TIME | "Saint Pierre Standard Time" |
+| UTC -03:00 | BAHIA_STANDARD_TIME | "Bahia Standard Time" |
+| UTC -02:00 | UTC_02 | "UTC-02" |
+| UTC -02:00 | MID_ATLANTIC_STANDARD_TIME | "Mid-Atlantic Standard Time" |
+| UTC -01:00 | AZORES_STANDARD_TIME | "Azores Standard Time" |
+| UTC -01:00 | CAPE_VERDE_STANDARD_TIME | "Cape Verde Standard Time" |
+| UTC | UTC | UTC |
+| UTC +00:00 | GMT_STANDARD_TIME | "GMT Standard Time" |
+| UTC +00:00 | GREENWICH_STANDARD_TIME | "Greenwich Standard Time" |
+| UTC +01:00 | MOROCCO_STANDARD_TIME | "Morocco Standard Time" |
+| UTC +01:00 | W_EUROPE_STANDARD_TIME | "W. Europe Standard Time" |
+| UTC +01:00 | CENTRAL_EUROPE_STANDARD_TIME | "Central Europe Standard Time" |
+| UTC +01:00 | ROMANCE_STANDARD_TIME | "Romance Standard Time" |
+| UTC +01:00 | CENTRAL_EUROPEAN_STANDARD_TIME | "Central European Standard Time" |
+| UTC +01:00 | W_CENTRAL_AFRICA_STANDARD_TIME | "W. Central Africa Standard Time" |
+| UTC +02:00 | NAMIBIA_STANDARD_TIME | "Namibia Standard Time" |
+| UTC +02:00 | JORDAN_STANDARD_TIME | "Jordan Standard Time" |
+| UTC +02:00 | GTB_STANDARD_TIME | "GTB Standard Time" |
+| UTC +02:00 | MIDDLE_EAST_STANDARD_TIME | "Middle East Standard Time" |
+| UTC +02:00 | EGYPT_STANDARD_TIME | "Egypt Standard Time" |
+| UTC +02:00 | E_EUROPE_STANDARD_TIME | "E. Europe Standard Time" |
+| UTC +02:00 | SYRIA_STANDARD_TIME | "Syria Standard Time" |
+| UTC +02:00 | WEST_BANK_STANDARD_TIME | "West Bank Standard Time" |
+| UTC +02:00 | SOUTH_AFRICA_STANDARD_TIME | "South Africa Standard Time" |
+| UTC +02:00 | FLE_STANDARD_TIME | "FLE Standard Time" |
+| UTC +02:00 | ISRAEL_STANDARD_TIME | "Israel Standard Time" |
+| UTC +02:00 | KALININGRAD_STANDARD_TIME | "Kaliningrad Standard Time" |
+| UTC +02:00 | LIBYA_STANDARD_TIME | "Libya Standard Time" |
+| UTC +03:00 | TURKEY_STANDARD_TIME | "Turkey Standard Time" |
+| UTC +03:00 | ARABIC_STANDARD_TIME | "Arabic Standard Time" |
+| UTC +03:00 | ARAB_STANDARD_TIME | "Arab Standard Time" |
+| UTC +03:00 | BELARUS_STANDARD_TIME | "Belarus Standard Time" |
+| UTC +03:00 | RUSSIAN_STANDARD_TIME | "Russian Standard Time" |
+| UTC +03:00 | E_AFRICA_STANDARD_TIME | "E. Africa Standard Time" |
+| UTC +03:30 | IRAN_STANDARD_TIME | "Iran Standard Time" |
+| UTC +04:00 | ARABIAN_STANDARD_TIME | "Arabian Standard Time" |
+| UTC +04:00 | ASTRAKHAN_STANDARD_TIME | "Astrakhan Standard Time" |
+| UTC +04:00 | AZERBAIJAN_STANDARD_TIME | "Azerbaijan Standard Time" |
+| UTC +04:00 | RUSSIA_TIME_ZONE_3 | "Russia Time Zone 3" |
+| UTC +04:00 | MAURITIUS_STANDARD_TIME | "Mauritius Standard Time" |
+| UTC +04:00 | GEORGIAN_STANDARD_TIME | "Georgian Standard Time" |
+| UTC +04:00 | CAUCASUS_STANDARD_TIME | "Caucasus Standard Time" |
+| UTC +04:30 | AFGHANISTAN_STANDARD_TIME | "Afghanistan Standard Time" |
+| UTC +05:00 | WEST_ASIA_STANDARD_TIME | "West Asia Standard Time" |
+| UTC +05:00 | EKATERINBURG_STANDARD_TIME | "Ekaterinburg Standard Time" |
+| UTC +05:00 | PAKISTAN_STANDARD_TIME | "Pakistan Standard Time" |
+| UTC +05:30 | INDIA_STANDARD_TIME | "India Standard Time" |
+| UTC +05:30 | SRI_LANKA_STANDARD_TIME | "Sri Lanka Standard Time" |
+| UTC +05:45 | NEPAL_STANDARD_TIME | "Nepal Standard Time" |
+| UTC +06:00 | CENTRAL_ASIA_STANDARD_TIME | "Central Asia Standard Time" |
+| UTC +06:00 | BANGLADESH_STANDARD_TIME | "Bangladesh Standard Time" |
+| UTC +06:30 | MYANMAR_STANDARD_TIME | "Myanmar Standard Time" |
+| UTC +07:00 | N_CENTRAL_ASIA_STANDARD_TIME | "N. Central Asia Standard Time" |
+| UTC +07:00 | SE_ASIA_STANDARD_TIME | "SE Asia Standard Time" |
+| UTC +07:00 | ALTAI_STANDARD_TIME | "Altai Standard Time" |
+| UTC +07:00 | W_MONGOLIA_STANDARD_TIME | "W. Mongolia Standard Time" |
+| UTC +07:00 | NORTH_ASIA_STANDARD_TIME | "North Asia Standard Time" |
+| UTC +07:00 | TOMSK_STANDARD_TIME | "Tomsk Standard Time" |
+| UTC +08:00 | CHINA_STANDARD_TIME | "China Standard Time" |
+| UTC +08:00 | NORTH_ASIA_EAST_STANDARD_TIME | "North Asia East Standard Time" |
+| UTC +08:00 | SINGAPORE_STANDARD_TIME | "Singapore Standard Time" |
+| UTC +08:00 | W_AUSTRALIA_STANDARD_TIME | "W. Australia Standard Time" |
+| UTC +08:00 | TAIPEI_STANDARD_TIME | "Taipei Standard Time" |
+| UTC +08:00 | ULAANBAATAR_STANDARD_TIME | "Ulaanbaatar Standard Time" |
+| UTC +08:45 | AUS_CENTRAL_W_STANDARD_TIME | "Aus Central W. Standard Time" |
+| UTC +09:00 | NORTH_KOREA_STANDARD_TIME | "North Korea Standard Time" |
+| UTC +09:00 | TRANSBAIKAL_STANDARD_TIME | "Transbaikal Standard Time" |
+| UTC +09:00 | TOKYO_STANDARD_TIME | "Tokyo Standard Time" |
+| UTC +09:00 | KOREA_STANDARD_TIME | "Korea Standard Time" |
+| UTC +09:00 | YAKUTSK_STANDARD_TIME | "Yakutsk Standard Time" |
+| UTC +09:30 | CEN_AUSTRALIA_STANDARD_TIME | "Cen. Australia Standard Time" |
+| UTC +09:30 | AUS_CENTRAL_STANDARD_TIME | "AUS Central Standard Time" |
+| UTC +10:00 | E_AUSTRALIAN_STANDARD_TIME | "E. Australia Standard Time" |
+| UTC +10:00 | AUS_EASTERN_STANDARD_TIME | "AUS Eastern Standard Time" |
+| UTC +10:00 | WEST_PACIFIC_STANDARD_TIME | "West Pacific Standard Time" |
+| UTC +10:00 | TASMANIA_STANDARD_TIME | "Tasmania Standard Time" |
+| UTC +10:00 | VLADIVOSTOK_STANDARD_TIME | "Vladivostok Standard Time" |
+| UTC +10:30 | LORD_HOWE_STANDARD_TIME | "Lord Howe Standard Time" |
+| UTC +11:00 | BOUGAINVILLE_STANDARD_TIME | "Bougainville Standard Time" |
+| UTC +11:00 | RUSSIA_TIME_ZONE_10 | "Russia Time Zone 10" |
+| UTC +11:00 | MAGADAN_STANDARD_TIME | "Magadan Standard Time" |
+| UTC +11:00 | NORFOLK_STANDARD_TIME | "Norfolk Standard Time" |
+| UTC +11:00 | SAKHALIN_STANDARD_TIME | "Sakhalin Standard Time" |
+| UTC +11:00 | CENTRAL_PACIFIC_STANDARD_TIME | "Central Pacific Standard Time" |
+| UTC +12:00 | RUSSIA_TIME_ZONE_11 | "Russia Time Zone 11" |
+| UTC +12:00 | NEW_ZEALAND_STANDARD_TIME | "New Zealand Standard Time" |
+| UTC +12:00 | UTC_12 | "UTC+12" |
+| UTC +12:00 | FIJI_STANDARD_TIME | "Fiji Standard Time" |
+| UTC +12:00 | KAMCHATKA_STANDARD_TIME | "Kamchatka Standard Time" |
+| UTC +12:45 | CHATHAM_ISLANDS_STANDARD_TIME | "Chatham Islands Standard Time" |
+| UTC +13:00 | TONGA_STANDARD_TIME | "Tonga Standard Time" |
+| UTC +13:00 | SAMOA_STANDARD_TIME | "Samoa Standard Time" |
+| UTC +14:00 | LINE_ISLANDS_STANDARD_TIME | "Line Islands Standard Time" |
+
machine-learning Resource Curated Environments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/resource-curated-environments.md
Other available PyTorch environments:
### Sklearn
-**Name**: AzureML-sklearn-1.0-ubuntu20.04-py38-cpu
+**Name**: AzureML-sklearn-1.0-ubuntu20.04-py38-cpu
**Description**: An environment for tasks such as regression, clustering, and classification with Scikit-learn. Contains the AzureML Python SDK and other Python packages. * OS: Ubuntu20.04 * Scikit-learn: 1.0
Azure ML pipeline training workflows that use AutoML automatically selects a cur
For more information on AutoML and Azure ML pipelines, see [use automated ML in an Azure Machine Learning pipeline in Python](how-to-use-automlstep-in-pipelines.md). ## Support
-Version updates for supported environments, including the base images they reference, are released every two weeks to address vulnerabilities no older than 30 days. Based on usage, some environments may be deprecated (hidden from the product but usable) to support more common machine learning scenarios.
+Version updates for supported environments, including the base images they reference, are released every two weeks to address vulnerabilities no older than 30 days. Based on usage, some environments may be deprecated (hidden from the product but usable) to support more common machine learning scenarios.
machine-learning Tutorial 1St Experiment Bring Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/tutorial-1st-experiment-bring-data.md
You saw how to modify your training script to accept a data path via the command
Now that you have a model, learn: > [!div class="nextstepaction"]
-> [How to deploy models with Azure Machine Learning](how-to-deploy-and-where.md).
+> [How to deploy models with Azure Machine Learning](how-to-deploy-managed-online-endpoints.md).
machine-learning Tutorial Designer Automobile Price Deploy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/tutorial-designer-automobile-price-deploy.md
After deployment finishes, you can view your real-time endpoint by going to the
1. To test your endpoint, go to the **Test** tab. From here, you can enter test data and select **Test** to verify the output of your endpoint.
-For more information on consuming your web service, see [Consume a model deployed as a webservice](how-to-consume-web-service.md).
- ## Update the real-time endpoint You can update the online endpoint with new model trained in the designer. On the online endpoint detail page, find your previous training pipeline job and inference pipeline job.
machine-learning Tutorial Train Deploy Notebook https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/tutorial-train-deploy-notebook.md
Use these steps to delete your Azure Machine Learning workspace and all compute
## Next steps
-+ Learn about all of the [deployment options for Azure Machine Learning](how-to-deploy-and-where.md).
-+ Learn how to [create clients for the web service](how-to-consume-web-service.md).
++ Learn about all of the [deployment options for Azure Machine Learning](how-to-deploy-managed-online-endpoints.md).++ Learn how to [authenticate to the deployed model](how-to-authenticate-online-endpoint.md). + [Make predictions on large quantities of data](./tutorial-pipeline-batch-scoring-classification.md) asynchronously. + Monitor your Azure Machine Learning models with [Application Insights](./v1/how-to-enable-app-insights.md). + Try out the [automatic algorithm selection](tutorial-auto-train-models.md) tutorial.
machine-learning Concept Azure Machine Learning Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/concept-azure-machine-learning-architecture.md
You deploy a [registered model](#model-registry) as a service endpoint. You need
* **Scoring code**. This script accepts requests, scores the requests by using the model, and returns the results. * **Inference configuration**. The inference configuration specifies the environment, entry script, and other components needed to run the model as a service.
-For more information about these components, see [Deploy models with Azure Machine Learning](../how-to-deploy-and-where.md).
+For more information about these components, see [Deploy models with Azure Machine Learning](how-to-deploy-and-where.md).
### Endpoints
Azure Machine Learning provides the following monitoring and logging capabilitie
* [Track experiments with MLflow](../how-to-use-mlflow.md) * [Visualize runs with TensorBoard](../how-to-monitor-tensorboard.md) * For **Administrators**, you can monitor information about the workspace, related Azure resources, and events such as resource creation and deletion by using Azure Monitor. For more information, see [How to monitor Azure Machine Learning](../monitor-azure-machine-learning.md).
-* For **DevOps** or **MLOps**, you can monitor information generated by models deployed as web services to identify problems with the deployments and gather data submitted to the service. For more information, see [Collect model data](../how-to-enable-data-collection.md) and [Monitor with Application Insights](../how-to-enable-app-insights.md).
+* For **DevOps** or **MLOps**, you can monitor information generated by models deployed as web services to identify problems with the deployments and gather data submitted to the service. For more information, see [Collect model data](how-to-enable-data-collection.md) and [Monitor with Application Insights](../how-to-enable-app-insights.md).
## Interacting with your workspace
To get started with Azure Machine Learning, see:
* [What is Azure Machine Learning?](../overview-what-is-azure-machine-learning.md) * [Create an Azure Machine Learning workspace](../quickstart-create-resources.md)
-* [Tutorial: Train and deploy a model](../tutorial-train-deploy-notebook.md)
+* [Tutorial: Train and deploy a model](../tutorial-train-deploy-notebook.md)
machine-learning Concept Model Management And Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/concept-model-management-and-deployment.md
Previously updated : 11/04/2021 Last updated : 08/15/2022 # MLOps: Model management, deployment, lineage, and monitoring with Azure Machine Learning v1
Before deploying a model into production, it is packaged into a Docker image. In
If you run into problems with the deployment, you can deploy on your local development environment for troubleshooting and debugging.
-For more information, see [Deploy models](../how-to-deploy-and-where.md#registermodel) and [Troubleshooting deployments](../how-to-troubleshoot-deployment.md).
+For more information, see [Deploy models](how-to-deploy-and-where.md#registermodel) and [Troubleshooting deployments](how-to-troubleshoot-deployment.md).
### Convert and optimize models
Monitoring enables you to understand what data is being sent to your model, and
This information helps you understand how your model is being used. The collected input data may also be useful in training future versions of the model.
-For more information, see [How to enable model data collection](../how-to-enable-data-collection.md).
+For more information, see [How to enable model data collection](how-to-enable-data-collection.md).
## Retrain your model on new data
Learn more by reading and exploring the following resources:
+ [CI/CD of ML models with Azure Pipelines](/azure/devops/pipelines/targets/azure-machine-learning)
-+ Create clients that [consume a deployed model](../how-to-consume-web-service.md)
++ Create clients that [consume a deployed model](how-to-consume-web-service.md) + [Machine learning at scale](/azure/architecture/data-guide/big-data/machine-learning-at-scale)
machine-learning How To Attach Compute Targets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-attach-compute-targets.md
To use compute targets managed by Azure Machine Learning, see:
## What's a compute target?
-With Azure Machine Learning, you can train your model on various resources or environments, collectively referred to as [__compute targets__](concept-azure-machine-learning-architecture.md#compute-targets). A compute target can be a local machine or a cloud resource, such as an Azure Machine Learning Compute, Azure HDInsight, or a remote virtual machine. You also use compute targets for model deployment as described in ["Where and how to deploy your models"](../how-to-deploy-and-where.md).
+With Azure Machine Learning, you can train your model on various resources or environments, collectively referred to as [__compute targets__](concept-azure-machine-learning-architecture.md#compute-targets). A compute target can be a local machine or a cloud resource, such as an Azure Machine Learning Compute, Azure HDInsight, or a remote virtual machine. You also use compute targets for model deployment as described in ["Where and how to deploy your models"](how-to-deploy-and-where.md).
## Local computer When you use your local computer for **training**, there is no need to create a compute target. Just [submit the training run](../how-to-set-up-training-targets.md) from your local machine.
-When you use your local computer for **inference**, you must have Docker installed. To perform the deployment, use [LocalWebservice.deploy_configuration()](/python/api/azureml-core/azureml.core.webservice.local.localwebservice#deploy-configuration-port-none-) to define the port that the web service will use. Then use the normal deployment process as described in [Deploy models with Azure Machine Learning](../how-to-deploy-and-where.md).
+When you use your local computer for **inference**, you must have Docker installed. To perform the deployment, use [LocalWebservice.deploy_configuration()](/python/api/azureml-core/azureml.core.webservice.local.localwebservice#deploy-configuration-port-none-) to define the port that the web service will use. Then use the normal deployment process as described in [Deploy models with Azure Machine Learning](how-to-deploy-and-where.md).
## Remote virtual machines
See these notebooks for examples of training with various compute targets:
* Use the compute resource to [configure and submit a training run](../how-to-set-up-training-targets.md). * [Tutorial: Train and deploy a model](../tutorial-train-deploy-notebook.md) uses a managed compute target to train a model. * Learn how to [efficiently tune hyperparameters](../how-to-tune-hyperparameters.md) to build better models.
-* Once you have a trained model, learn [how and where to deploy models](../how-to-deploy-and-where.md).
+* Once you have a trained model, learn [how and where to deploy models](../how-to-deploy-managed-online-endpoints.md).
* [Use Azure Machine Learning with Azure Virtual Networks](../how-to-network-security-overview.md)
machine-learning How To Authenticate Web Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-authenticate-web-service.md
+
+ Title: Configure authentication for models deployed as web services
+
+description: Learn how to configure authentication for machine learning models deployed to web services in Azure Machine Learning.
+++++ Last updated : 08/15/2022++++
+# Configure authentication for models deployed as web services
++
+Azure Machine Learning allows you to deploy your trained machine learning models as web services. In this article, learn how to configure authentication for these deployments.
+
+The model deployments created by Azure Machine Learning can be configured to use one of two authentication methods:
+
+* **key-based**: A static key is used to authenticate to the web service.
+* **token-based**: A temporary token must be obtained from the Azure Machine Learning workspace (using Azure Active Directory) and used to authenticate to the web service. This token expires after a period of time, and must be refreshed to continue working with the web service.
+
+ > [!NOTE]
+ > Token-based authentication is only available when deploying to Azure Kubernetes Service.
+
+## Key-based authentication
+
+Web services deployed on Azure Kubernetes Service (AKS) have key-based auth *enabled* by default.
+
+Azure Container Instances (ACI) deployed services have key-based auth *disabled* by default, but you can enable it by setting `auth_enabled=True` when creating the ACI web service. The following code is an example of creating an ACI deployment configuration with key-based auth enabled.
+
+```python
+from azureml.core.webservice import AciWebservice
+
+aci_config = AciWebservice.deploy_configuration(cpu_cores = 1,
+ memory_gb = 1,
+ auth_enabled=True)
+```
+
+Then you can use the custom ACI configuration in deployment using the `Model` class.
+
+```python
+from azureml.core.model import Model, InferenceConfig
++
+inference_config = InferenceConfig(entry_script="score.py",
+ environment=myenv)
+aci_service = Model.deploy(workspace=ws,
+ name="aci_service_sample",
+ models=[model],
+ inference_config=inference_config,
+ deployment_config=aci_config)
+aci_service.wait_for_deployment(True)
+```
+
+To fetch the auth keys, use `aci_service.get_keys()`. To regenerate a key, use the `regen_key()` function and pass either **Primary** or **Secondary**.
+
+```python
+aci_service.regen_key("Primary")
+# or
+aci_service.regen_key("Secondary")
+```
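+
+A client can then pass the primary key as a bearer value in the `Authorization` header when calling the service. The following is a minimal sketch; `aci_service` comes from the example above, and the input payload is hypothetical (your scoring script defines the real request format).
+
+```python
+import json
+import requests
+
+primary_key, secondary_key = aci_service.get_keys()
+headers = {
+    "Content-Type": "application/json",
+    # Key-based auth expects the key as a bearer value.
+    "Authorization": f"Bearer {primary_key}",
+}
+sample_payload = json.dumps({"data": [[1.0, 2.0, 3.0]]})  # hypothetical input
+response = requests.post(aci_service.scoring_uri, data=sample_payload, headers=headers)
+print(response.status_code, response.text)
+```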
+
+## Token-based authentication
+
+When you enable token authentication for a web service, users must present an Azure Machine Learning JSON Web Token to the web service to access it. The token expires after a specified time frame and must be refreshed to continue making calls.
+
+* Token authentication is **disabled by default** when you deploy to Azure Kubernetes Service.
+* Token authentication **isn't supported** when you deploy to Azure Container Instances.
+* Token authentication **can't be used at the same time as key-based authentication**.
+
+To control token authentication, use the `token_auth_enabled` parameter when you create or update a deployment:
+
+```python
+from azureml.core.webservice import AksWebservice
+from azureml.core.model import Model, InferenceConfig
+
+# Create the config
+aks_config = AksWebservice.deploy_configuration()
+
+# Enable token auth and disable (key) auth on the webservice
+aks_config = AksWebservice.deploy_configuration(token_auth_enabled=True, auth_enabled=False)
+
+aks_service_name ='aks-service-1'
+
+# deploy the model
+aks_service = Model.deploy(workspace=ws,
+ name=aks_service_name,
+ models=[model],
+ inference_config=inference_config,
+ deployment_config=aks_config,
+ deployment_target=aks_target)
+
+aks_service.wait_for_deployment(show_output = True)
+```
+
+If token authentication is enabled, you can use the `get_token` method to retrieve a JSON Web Token (JWT) and that token's expiration time:
+
+> [!TIP]
+> If you use a service principal to get the token, and want it to have the minimum required access to retrieve a token, assign it to the **reader** role for the workspace.
+
+```python
+token, refresh_by = aks_service.get_token()
+print(token)
+```
+
+> [!IMPORTANT]
+> You'll need to request a new token after the token's `refresh_by` time. If you need to refresh tokens outside of the Python SDK, one option is to use the REST API with service-principal authentication to periodically make the `service.get_token()` call, as discussed previously.
+>
+> We strongly recommend that you create your Azure Machine Learning workspace in the same region as your Azure Kubernetes Service cluster.
+>
+> To authenticate with a token, the web service will make a call to the region in which your Azure Machine Learning workspace is created. If your workspace region is unavailable, you won't be able to fetch a token for your web service, even if your cluster is in a different region from your workspace. The result is that Azure AD Authentication is unavailable until your workspace region is available again.
+>
+> Also, the greater the distance between your cluster's region and your workspace region, the longer it will take to fetch a token.
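+
+As a rough sketch (not an SDK feature), a client might refresh the token lazily before each call. This assumes the `aks_service` object from the example above and that `refresh_by` is a timezone-naive UTC `datetime`; the one-minute safety margin is an arbitrary choice.
+
+```python
+from datetime import datetime, timedelta
+
+token, refresh_by = aks_service.get_token()
+
+def get_valid_token():
+    """Return a token, fetching a new one shortly before the current one expires."""
+    global token, refresh_by
+    if datetime.utcnow() >= refresh_by - timedelta(minutes=1):
+        token, refresh_by = aks_service.get_token()
+    return token
+```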
+
+## Next steps
+
+For more information on authenticating to a deployed model, see [Create a client for a model deployed as a web service](how-to-consume-web-service.md).
machine-learning How To Consume Web Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-consume-web-service.md
+
+ Title: Create client for model deployed as web service
+
+description: Learn how to call a web service endpoint that was generated when a model was deployed from Azure Machine Learning.
++++++ Last updated : 08/15/2022+
+ms.devlang: csharp, golang, java, python
+
+#Customer intent: As a developer, I need to understand how to create a client application that consumes the web service of a deployed ML model.
++
+# Consume an Azure Machine Learning model deployed as a web service
++
+Deploying an Azure Machine Learning model as a web service creates a REST API endpoint. You can send data to this endpoint and receive the prediction returned by the model. In this document, learn how to create clients for the web service by using C#, Go, Java, and Python.
+
+You create a web service when you deploy a model to your local environment, Azure Container Instances, Azure Kubernetes Service, or field-programmable gate arrays (FPGA). You retrieve the URI used to access the web service by using the [Azure Machine Learning SDK](/python/api/overview/azure/ml/intro). If authentication is enabled, you can also use the SDK to get the authentication keys or tokens.
+
+The general workflow for creating a client that uses a machine learning web service is:
+
+1. Use the SDK to get the connection information.
+1. Determine the type of request data used by the model.
+1. Create an application that calls the web service.
+
+> [!TIP]
+> The examples in this document are manually created without the use of OpenAPI (Swagger) specifications. If you've enabled an OpenAPI specification for your deployment, you can use tools such as [swagger-codegen](https://github.com/swagger-api/swagger-codegen) to create client libraries for your service.
+
+## Connection information
+
+> [!NOTE]
+> Use the Azure Machine Learning SDK (a Python SDK) to get the web service information. You can use any language to create a client for the service.
+
+The [azureml.core.Webservice](/python/api/azureml-core/azureml.core.webservice%28class%29) class provides the information you need to create a client. The following `Webservice` properties are useful for creating a client application:
+
+* `auth_enabled` - If key authentication is enabled, `True`; otherwise, `False`.
+* `token_auth_enabled` - If token authentication is enabled, `True`; otherwise, `False`.
+* `scoring_uri` - The REST API address.
+* `swagger_uri` - The address of the OpenAPI specification. This URI is available if you enabled automatic schema generation. For more information, see [Deploy models with Azure Machine Learning](how-to-deploy-and-where.md).
+
+There are several ways to retrieve this information for deployed web services:
+
+# [Python](#tab/python)
++
+* When you deploy a model, a `Webservice` object is returned with information about the service:
+
+ ```python
+ service = Model.deploy(ws, "myservice", [model], inference_config, deployment_config)
+ service.wait_for_deployment(show_output = True)
+ print(service.scoring_uri)
+ print(service.swagger_uri)
+ ```
+
+* You can use `Webservice.list` to retrieve a list of deployed web services for models in your workspace. You can add filters to narrow the list of information returned. For more information about what can be filtered on, see the [Webservice.list](/python/api/azureml-core/azureml.core.webservice.webservice.webservice) reference documentation.
+
+ ```python
+ services = Webservice.list(ws)
+ print(services[0].scoring_uri)
+ print(services[0].swagger_uri)
+ ```
+
+* If you know the name of the deployed service, you can create a new instance of `Webservice`, and provide the workspace and service name as parameters. The new object contains information about the deployed service.
+
+ ```python
+ service = Webservice(workspace=ws, name='myservice')
+ print(service.scoring_uri)
+ print(service.swagger_uri)
+ ```
+
+# [Azure CLI](#tab/azure-cli)
++
+If you know the name of the deployed service, use the [az ml service show](/cli/azure/ml(v1)/service#az-ml-service-show) command:
+
+```azurecli
+az ml service show -n <service-name>
+```
+
+# [Portal](#tab/azure-portal)
+
+From Azure Machine Learning studio, select __Endpoints__, __Real-time endpoints__, and then the endpoint name. In the details for the endpoint, the __REST endpoint__ field contains the scoring URI, and the __Swagger URI__ field contains the Swagger URI.
+++
+The following table shows what these URIs look like:
+
+| URI type | Example |
+| -- | -- |
+| Scoring URI | `http://104.214.29.152:80/api/v1/service/<service-name>/score` |
+| Swagger URI | `http://104.214.29.152/api/v1/service/<service-name>/swagger.json` |
+
+> [!TIP]
+> The IP address will be different for your deployment. Each AKS cluster will have its own IP address that is shared by deployments to that cluster.
+
+### Secured web service
+
+If you secured the deployed web service using a TLS/SSL certificate, you can use [HTTPS](https://en.wikipedia.org/wiki/HTTPS) to connect to the service using the scoring or swagger URI. HTTPS helps secure communications between a client and a web service by encrypting communications between the two. Encryption uses [Transport Layer Security (TLS)](https://en.wikipedia.org/wiki/Transport_Layer_Security). TLS is sometimes still referred to as *Secure Sockets Layer* (SSL), which was the predecessor of TLS.
+
+> [!IMPORTANT]
+> Web services deployed by Azure Machine Learning only support TLS version 1.2. When creating a client application, make sure that it supports this version.
+
+For more information, see [Use TLS to secure a web service through Azure Machine Learning](how-to-secure-web-service.md).
+
+### Authentication for services
+
+Azure Machine Learning provides two ways to control access to your web services.
+
+|Authentication Method|ACI|AKS|
+||||
+|Key|Disabled by default| Enabled by default|
+|Token| Not Available| Disabled by default |
+
+When sending a request to a service that is secured with a key or token, use the __Authorization__ header to pass the key or token. The key or token must be formatted as `Bearer <key-or-token>`, where `<key-or-token>` is your key or token value.
+
+The primary difference between keys and tokens is that **keys are static and can be regenerated manually**, while **tokens need to be refreshed upon expiration**. Key-based auth is supported for web services deployed to Azure Container Instances and Azure Kubernetes Service, and token-based auth is **only** available for Azure Kubernetes Service deployments. For more information on configuring authentication, see [Configure authentication for models deployed as web services](how-to-authenticate-web-service.md).
++
+#### Authentication with keys
+
+When you enable authentication for a deployment, you automatically create authentication keys.
+
+* Authentication is enabled by default when you're deploying to Azure Kubernetes Service.
+* Authentication is disabled by default when you're deploying to Azure Container Instances.
+
+To control authentication, use the `auth_enabled` parameter when you're creating or updating a deployment.
+
+If authentication is enabled, you can use the `get_keys` method to retrieve a primary and secondary authentication key:
+
+```python
+primary, secondary = service.get_keys()
+print(primary)
+```
+
+> [!IMPORTANT]
+> If you need to regenerate a key, use [`service.regen_key`](/python/api/azureml-core/azureml.core.webservice%28class%29).
+
+#### Authentication with tokens
+
+When you enable token authentication for a web service, a user must provide an Azure Machine Learning JWT token to the web service to access it.
+
+* Token authentication is disabled by default when you're deploying to Azure Kubernetes Service.
+* Token authentication isn't supported when you're deploying to Azure Container Instances.
+
+To control token authentication, use the `token_auth_enabled` parameter when you're creating or updating a deployment.
+
+If token authentication is enabled, you can use the `get_token` method to retrieve a bearer token and that token's expiration time:
+
+```python
+token, refresh_by = service.get_token()
+print(token)
+```
+
+If you have the [Azure CLI and the machine learning extension](reference-azure-machine-learning-cli.md), you can use the following command to get a token:
++
+```azurecli
+az ml service get-access-token -n <service-name>
+```
+
+> [!IMPORTANT]
+> Currently the only way to retrieve the token is by using the Azure Machine Learning SDK or the Azure CLI machine learning extension.
+
+You'll need to request a new token after the token's `refresh_by` time.
+
+## Request data
+
+The REST API expects the body of the request to be a JSON document with the following structure:
+
+```json
+{
+ "data":
+ [
+ <model-specific-data-structure>
+ ]
+}
+```
+
+> [!IMPORTANT]
+> The structure of the data needs to match what the scoring script and model in the service expect. The scoring script might modify the data before passing it to the model.
+
+### Binary data
+
+For information on how to enable support for binary data in your service, see [Binary data](how-to-deploy-advanced-entry-script.md#binary-data).
+
+> [!TIP]
+> Enabling support for binary data happens in the score.py file used by the deployed model. From the client, use the HTTP functionality of your programming language. For example, the following snippet sends the contents of a JPG file to a web service:
+>
+> ```python
+> import requests
+> # Load image data
+> data = open('example.jpg', 'rb').read()
+> # Post raw data to scoring URI
+> res = requests.post(url='<scoring-uri>', data=data, headers={'Content-Type': 'application/octet-stream'})
+> ```
+
+### Cross-origin resource sharing (CORS)
+
+For information on enabling CORS support in your service, see [Cross-origin resource sharing](how-to-deploy-advanced-entry-script.md#cors).
+
+## Call the service (C#)
+
+This example demonstrates how to use C# to call the web service created from the [Train within notebook](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/machine-learning-pipelines/intro-to-pipelines/notebook_runner/training_notebook.ipynb) example:
+
+```csharp
+using System;
+using System.Collections.Generic;
+using System.IO;
+using System.Net.Http;
+using System.Net.Http.Headers;
+using Newtonsoft.Json;
+
+namespace MLWebServiceClient
+{
+ // The data structure expected by the service
+ internal class InputData
+ {
+ [JsonProperty("data")]
+ // The service used by this example expects an array containing
+ // one or more arrays of doubles
+ internal double[,] data;
+ }
+ class Program
+ {
+ static void Main(string[] args)
+ {
+ // Set the scoring URI and authentication key or token
+ string scoringUri = "<your web service URI>";
+ string authKey = "<your key or token>";
+
+ // Set the data to be sent to the service.
+ // In this case, we are sending two sets of data to be scored.
+ InputData payload = new InputData();
+ payload.data = new double[,] {
+ {
+ 0.0199132141783263,
+ 0.0506801187398187,
+ 0.104808689473925,
+ 0.0700725447072635,
+ -0.0359677812752396,
+ -0.0266789028311707,
+ -0.0249926566315915,
+ -0.00259226199818282,
+ 0.00371173823343597,
+ 0.0403433716478807
+ },
+ {
+ -0.0127796318808497,
+ -0.044641636506989,
+ 0.0606183944448076,
+ 0.0528581912385822,
+ 0.0479653430750293,
+ 0.0293746718291555,
+ -0.0176293810234174,
+ 0.0343088588777263,
+ 0.0702112981933102,
+ 0.00720651632920303
+ }
+ };
+
+ // Create the HTTP client
+ HttpClient client = new HttpClient();
+ // Set the auth header. Only needed if the web service requires authentication.
+ client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", authKey);
+
+ // Make the request
+ try {
+ var request = new HttpRequestMessage(HttpMethod.Post, new Uri(scoringUri));
+ request.Content = new StringContent(JsonConvert.SerializeObject(payload));
+ request.Content.Headers.ContentType = new MediaTypeHeaderValue("application/json");
+ var response = client.SendAsync(request).Result;
+ // Display the response from the web service
+ Console.WriteLine(response.Content.ReadAsStringAsync().Result);
+ }
+ catch (Exception e)
+ {
+ Console.Out.WriteLine(e.Message);
+ }
+ }
+ }
+}
+```
+
+The results returned are similar to the following JSON document:
+
+```json
+[217.67978776218715, 224.78937091757172]
+```
+
+## Call the service (Go)
+
+This example demonstrates how to use Go to call the web service created from the [Train within notebook](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/machine-learning-pipelines/intro-to-pipelines/notebook_runner/training_notebook.ipynb) example:
+
+```go
+package main
+
+import (
+ "bytes"
+ "encoding/json"
+ "fmt"
+ "io/ioutil"
+ "net/http"
+)
+
+// Features for this model are an array of decimal values
+type Features []float64
+
+// The web service input can accept multiple sets of values for scoring
+type InputData struct {
+	Data []Features `json:"data,omitempty"`
+}
+
+// Define some example data
+var exampleData = []Features{
+ []float64{
+ 0.0199132141783263,
+ 0.0506801187398187,
+ 0.104808689473925,
+ 0.0700725447072635,
+ -0.0359677812752396,
+ -0.0266789028311707,
+ -0.0249926566315915,
+ -0.00259226199818282,
+ 0.00371173823343597,
+ 0.0403433716478807,
+ },
+ []float64{
+ -0.0127796318808497,
+ -0.044641636506989,
+ 0.0606183944448076,
+ 0.0528581912385822,
+ 0.0479653430750293,
+ 0.0293746718291555,
+ -0.0176293810234174,
+ 0.0343088588777263,
+ 0.0702112981933102,
+ 0.00720651632920303,
+ },
+}
+
+// Set to the URI for your service
+var serviceUri string = "<your web service URI>"
+// Set to the authentication key or token (if any) for your service
+var authKey string = "<your key or token>"
+
+func main() {
+ // Create the input data from example data
+ jsonData := InputData{
+ Data: exampleData,
+ }
+ // Create JSON from it and create the body for the HTTP request
+ jsonValue, _ := json.Marshal(jsonData)
+ body := bytes.NewBuffer(jsonValue)
+
+ // Create the HTTP request
+ client := &http.Client{}
+ request, err := http.NewRequest("POST", serviceUri, body)
+ request.Header.Add("Content-Type", "application/json")
+
+ // These next two are only needed if using an authentication key
+ bearer := fmt.Sprintf("Bearer %v", authKey)
+ request.Header.Add("Authorization", bearer)
+
+ // Send the request to the web service
+ resp, err := client.Do(request)
+ if err != nil {
+ fmt.Println("Failure: ", err)
+ }
+
+ // Display the response received
+ respBody, _ := ioutil.ReadAll(resp.Body)
+ fmt.Println(string(respBody))
+}
+```
+
+The results returned are similar to the following JSON document:
+
+```json
+[217.67978776218715, 224.78937091757172]
+```
+
+## Call the service (Java)
+
+This example demonstrates how to use Java to call the web service created from the [Train within notebook](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/machine-learning-pipelines/intro-to-pipelines/notebook_runner/training_notebook.ipynb) example:
+
+```java
+import java.io.IOException;
+import org.apache.http.client.fluent.*;
+import org.apache.http.entity.ContentType;
+import org.json.simple.JSONArray;
+import org.json.simple.JSONObject;
+
+public class App {
+ // Handle making the request
+ public static void sendRequest(String data) {
+ // Replace with the scoring_uri of your service
+ String uri = "<your web service URI>";
+ // If using authentication, replace with the auth key or token
+ String key = "<your key or token>";
+ try {
+ // Create the request
+ Content content = Request.Post(uri)
+ .addHeader("Content-Type", "application/json")
+ // Only needed if using authentication
+ .addHeader("Authorization", "Bearer " + key)
+ // Set the JSON data as the body
+ .bodyString(data, ContentType.APPLICATION_JSON)
+ // Make the request and display the response.
+ .execute().returnContent();
+ System.out.println(content);
+ }
+ catch (IOException e) {
+ System.out.println(e);
+ }
+ }
+ public static void main(String[] args) {
+ // Create the data to send to the service
+ JSONObject obj = new JSONObject();
+ // In this case, it's an array of arrays
+ JSONArray dataItems = new JSONArray();
+ // Inner array has 10 elements
+ JSONArray item1 = new JSONArray();
+ item1.add(0.0199132141783263);
+ item1.add(0.0506801187398187);
+ item1.add(0.104808689473925);
+ item1.add(0.0700725447072635);
+ item1.add(-0.0359677812752396);
+ item1.add(-0.0266789028311707);
+ item1.add(-0.0249926566315915);
+ item1.add(-0.00259226199818282);
+ item1.add(0.00371173823343597);
+ item1.add(0.0403433716478807);
+ // Add the first set of data to be scored
+ dataItems.add(item1);
+ // Create and add the second set
+ JSONArray item2 = new JSONArray();
+ item2.add(-0.0127796318808497);
+ item2.add(-0.044641636506989);
+ item2.add(0.0606183944448076);
+ item2.add(0.0528581912385822);
+ item2.add(0.0479653430750293);
+ item2.add(0.0293746718291555);
+ item2.add(-0.0176293810234174);
+ item2.add(0.0343088588777263);
+ item2.add(0.0702112981933102);
+ item2.add(0.00720651632920303);
+ dataItems.add(item2);
+ obj.put("data", dataItems);
+
+ // Make the request using the JSON document string
+ sendRequest(obj.toJSONString());
+ }
+}
+```
+
+The results returned are similar to the following JSON document:
+
+```json
+[217.67978776218715, 224.78937091757172]
+```
+
+## Call the service (Python)
+
+This example demonstrates how to use Python to call the web service created from the [Train within notebook](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/machine-learning-pipelines/intro-to-pipelines/notebook_runner/training_notebook.ipynb) example:
+
+```python
+import requests
+import json
+
+# URL for the web service
+scoring_uri = '<your web service URI>'
+# If the service is authenticated, set the key or token
+key = '<your key or token>'
+
+# Two sets of data to score, so we get two results back
+data = {"data":
+ [
+ [
+ 0.0199132141783263,
+ 0.0506801187398187,
+ 0.104808689473925,
+ 0.0700725447072635,
+ -0.0359677812752396,
+ -0.0266789028311707,
+ -0.0249926566315915,
+ -0.00259226199818282,
+ 0.00371173823343597,
+ 0.0403433716478807
+ ],
+ [
+ -0.0127796318808497,
+ -0.044641636506989,
+ 0.0606183944448076,
+ 0.0528581912385822,
+ 0.0479653430750293,
+ 0.0293746718291555,
+ -0.0176293810234174,
+ 0.0343088588777263,
+ 0.0702112981933102,
+ 0.00720651632920303]
+ ]
+ }
+# Convert to JSON string
+input_data = json.dumps(data)
+
+# Set the content type
+headers = {'Content-Type': 'application/json'}
+# If authentication is enabled, set the authorization header
+headers['Authorization'] = f'Bearer {key}'
+
+# Make the request and display the response
+resp = requests.post(scoring_uri, input_data, headers=headers)
+print(resp.text)
+```
+
+The results returned are similar to the following JSON document:
+
+```JSON
+[217.67978776218715, 224.78937091757172]
+```
++
+## Web service schema (OpenAPI specification)
+
+If you used automatic schema generation with your deployment, you can get the address of the OpenAPI specification for the service by using the [swagger_uri property](/python/api/azureml-core/azureml.core.webservice.local.localwebservice#swagger-uri). (For example, `print(service.swagger_uri)`.) Use a GET request or open the URI in a browser to retrieve the specification.
+
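+As a minimal sketch, the specification can be retrieved with a GET request. This assumes `service` is a `Webservice` object; include the `Authorization` header only if your deployment has key or token authentication enabled.
+
+```python
+import requests
+
+headers = {"Authorization": f"Bearer {service.get_keys()[0]}"}  # omit if auth is disabled
+spec = requests.get(service.swagger_uri, headers=headers).json()
+print(spec["info"]["title"], spec["info"]["version"])
+```
+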
+The following JSON document is an example of a schema (OpenAPI specification) generated for a deployment:
+
+```json
+{
+ "swagger": "2.0",
+ "info": {
+ "title": "myservice",
+ "description": "API specification for Azure Machine Learning myservice",
+ "version": "1.0"
+ },
+ "schemes": [
+ "https"
+ ],
+ "consumes": [
+ "application/json"
+ ],
+ "produces": [
+ "application/json"
+ ],
+ "securityDefinitions": {
+ "Bearer": {
+ "type": "apiKey",
+ "name": "Authorization",
+ "in": "header",
+ "description": "For example: Bearer abc123"
+ }
+ },
+ "paths": {
+ "/": {
+ "get": {
+ "operationId": "ServiceHealthCheck",
+ "description": "Simple health check endpoint to ensure the service is up at any given point.",
+ "responses": {
+ "200": {
+ "description": "If service is up and running, this response will be returned with the content 'Healthy'",
+ "schema": {
+ "type": "string"
+ },
+ "examples": {
+ "application/json": "Healthy"
+ }
+ },
+ "default": {
+ "description": "The service failed to execute due to an error.",
+ "schema": {
+ "$ref": "#/definitions/ErrorResponse"
+ }
+ }
+ }
+ }
+ },
+ "/score": {
+ "post": {
+ "operationId": "RunMLService",
+ "description": "Run web service's model and get the prediction output",
+ "security": [
+ {
+ "Bearer": []
+ }
+ ],
+ "parameters": [
+ {
+ "name": "serviceInputPayload",
+ "in": "body",
+ "description": "The input payload for executing the real-time machine learning service.",
+ "schema": {
+ "$ref": "#/definitions/ServiceInput"
+ }
+ }
+ ],
+ "responses": {
+ "200": {
+ "description": "The service processed the input correctly and provided a result prediction, if applicable.",
+ "schema": {
+ "$ref": "#/definitions/ServiceOutput"
+ }
+ },
+ "default": {
+ "description": "The service failed to execute due to an error.",
+ "schema": {
+ "$ref": "#/definitions/ErrorResponse"
+ }
+ }
+ }
+ }
+ }
+ },
+ "definitions": {
+ "ServiceInput": {
+ "type": "object",
+ "properties": {
+ "data": {
+ "type": "array",
+ "items": {
+ "type": "array",
+ "items": {
+ "type": "integer",
+ "format": "int64"
+ }
+ }
+ }
+ },
+ "example": {
+ "data": [
+ [ 10, 9, 8, 7, 6, 5, 4, 3, 2, 1 ]
+ ]
+ }
+ },
+ "ServiceOutput": {
+ "type": "array",
+ "items": {
+ "type": "number",
+ "format": "double"
+ },
+ "example": [
+ 3726.995
+ ]
+ },
+ "ErrorResponse": {
+ "type": "object",
+ "properties": {
+ "status_code": {
+ "type": "integer",
+ "format": "int32"
+ },
+ "message": {
+ "type": "string"
+ }
+ }
+ }
+ }
+}
+```
+
+For more information, see [OpenAPI specification](https://swagger.io/specification/).
+
+For a utility that can create client libraries from the specification, see [swagger-codegen](https://github.com/swagger-api/swagger-codegen).
++
+> [!TIP]
+> You can retrieve the schema JSON document after you deploy the service. Use the [swagger_uri property](/python/api/azureml-core/azureml.core.webservice.local.localwebservice#swagger-uri) from the deployed web service (for example, `service.swagger_uri`) to get the URI to the local web service's Swagger file.
+
+## Consume the service from Power BI
+
+Power BI supports consumption of Azure Machine Learning web services to enrich the data in Power BI with predictions.
+
+To generate a web service that's supported for consumption in Power BI, the schema must support the format that's required by Power BI. [Learn how to create a Power BI-supported schema](./how-to-deploy-advanced-entry-script.md#power-bi-compatible-endpoint).
+
+Once the web service is deployed, it's consumable from Power BI dataflows. [Learn how to consume an Azure Machine Learning web service from Power BI](/power-bi/service-machine-learning-integration).
+
+## Next steps
+
+To view a reference architecture for real-time scoring of Python and deep learning models, go to the [Azure architecture center](/azure/architecture/reference-architectures/ai/realtime-scoring-python).
machine-learning How To Deploy Advanced Entry Script https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-deploy-advanced-entry-script.md
+
+ Title: Author entry script for advanced scenarios
+
+description: Learn how to write Azure Machine Learning entry scripts for pre- and post-processing during deployment.
++++ Last updated : 08/15/2022+++++
+# Advanced entry script authoring
++
+This article shows how to write entry scripts for specialized use cases.
+
+## Prerequisites
+
+This article assumes you already have a trained machine learning model that you intend to deploy with Azure Machine Learning. To learn more about model deployment, see [How to deploy and where](how-to-deploy-and-where.md).
+
+## Automatically generate a Swagger schema
+
+To automatically generate a schema for your web service, provide a sample of the input and/or output in the constructor for one of the defined type objects. The type and sample are used to automatically create the schema. Azure Machine Learning then creates an [OpenAPI](https://swagger.io/docs/specification/about/) (Swagger) specification for the web service during deployment.
+
+> [!WARNING]
+> You must not use sensitive or private data for sample input or output. The Swagger page for AML-hosted inferencing exposes the sample data.
+
+These types are currently supported:
+
+* `pandas`
+* `numpy`
+* `pyspark`
+* Standard Python object
+
+To use schema generation, include the open-source `inference-schema` package version 1.1.0 or above in your dependencies file. For more information on this package, see [https://github.com/Azure/InferenceSchema](https://github.com/Azure/InferenceSchema). To generate conforming Swagger for automated web service consumption, the scoring script's `run()` function must have an API shape of:
+* A first parameter of type "StandardPythonParameterType", named **Inputs** and nested.
+* An optional second parameter of type "StandardPythonParameterType", named **GlobalParameters**.
+* A return value that is a dictionary of type "StandardPythonParameterType", named **Results** and nested.
+
+Define the input and output sample formats in the `input_sample` and `output_sample` variables, which represent the request and response formats for the web service. Use these samples in the input and output function decorators on the `run()` function. The following scikit-learn example uses schema generation.
+++
+## Power BI compatible endpoint
+
+The following example demonstrates how to define the API shape according to the preceding instructions. This method is supported for consuming the deployed web service from Power BI. ([Learn more about how to consume the web service from Power BI](/power-bi/service-machine-learning-integration).)
+
+```python
+import json
+import os
+import pickle
+import numpy as np
+import pandas as pd
+import azureml.train.automl
+from sklearn.externals import joblib
+from sklearn.linear_model import Ridge
+
+from inference_schema.schema_decorators import input_schema, output_schema
+from inference_schema.parameter_types.standard_py_parameter_type import StandardPythonParameterType
+from inference_schema.parameter_types.numpy_parameter_type import NumpyParameterType
+from inference_schema.parameter_types.pandas_parameter_type import PandasParameterType
++
+def init():
+ global model
+ # Replace filename if needed.
+ model_path = os.path.join(os.getenv('AZUREML_MODEL_DIR'), 'sklearn_regression_model.pkl')
+ # Deserialize the model file back into a sklearn model.
+ model = joblib.load(model_path)
++
+# providing 3 sample inputs for schema generation
+numpy_sample_input = NumpyParameterType(np.array([[1,2,3,4,5,6,7,8,9,10],[10,9,8,7,6,5,4,3,2,1]],dtype='float64'))
+pandas_sample_input = PandasParameterType(pd.DataFrame({'name': ['Sarah', 'John'], 'age': [25, 26]}))
+standard_sample_input = StandardPythonParameterType(0.0)
+
+# This is a nested input sample, any item wrapped by `ParameterType` will be described by schema
+sample_input = StandardPythonParameterType({'input1': numpy_sample_input,
+ 'input2': pandas_sample_input,
+ 'input3': standard_sample_input})
+
+sample_global_parameters = StandardPythonParameterType(1.0) # this is optional
+sample_output = StandardPythonParameterType([1.0, 1.0])
+outputs = StandardPythonParameterType({'Results':sample_output}) # 'Results' is case sensitive
+
+@input_schema('Inputs', sample_input)
+# 'Inputs' is case sensitive
+
+@input_schema('GlobalParameters', sample_global_parameters)
+# this is optional, 'GlobalParameters' is case sensitive
+
+@output_schema(outputs)
+
+def run(Inputs, GlobalParameters):
+ # the parameters here have to match those in decorator, both 'Inputs' and
+ # 'GlobalParameters' here are case sensitive
+ try:
+ data = Inputs['input1']
+ # data will be convert to target format
+ assert isinstance(data, np.ndarray)
+ result = model.predict(data)
+ return result.tolist()
+ except Exception as e:
+ error = str(e)
+ return error
+```
+
+> [!TIP]
+> The return value from the script can be any Python object that is serializable to JSON. For example, if your model returns a Pandas dataframe that contains multiple columns, you might use an output decorator similar to the following code:
+>
+> ```python
+> output_sample = pd.DataFrame(data=[{"a1": 5, "a2": 6}])
+> @output_schema(PandasParameterType(output_sample))
+> ...
+> result = model.predict(data)
+> return result
+> ```
+
+## <a id="binary-data"></a> Binary (that is, image) data
+
+If your model accepts binary data, like an image, you must modify the `score.py` file used for your deployment to accept raw HTTP requests. To accept raw data, use the `AMLRequest` class in your entry script and add the `@rawhttp` decorator to the `run()` function.
+
+Here's an example of a `score.py` that accepts binary data:
+
+```python
+from azureml.contrib.services.aml_request import AMLRequest, rawhttp
+from azureml.contrib.services.aml_response import AMLResponse
+from PIL import Image
+import json
++
+def init():
+ print("This is init()")
+
+
+@rawhttp
+def run(request):
+ print("This is run()")
+
+ if request.method == 'GET':
+ # For this example, just return the URL for GETs.
+ respBody = str.encode(request.full_path)
+ return AMLResponse(respBody, 200)
+ elif request.method == 'POST':
+ file_bytes = request.files["image"]
+ image = Image.open(file_bytes).convert('RGB')
+ # For a real-world solution, you would load the data from reqBody
+ # and send it to the model. Then return the response.
+
+ # For demonstration purposes, this example just returns the size of the image as the response..
+ return AMLResponse(json.dumps(image.size), 200)
+ else:
+ return AMLResponse("bad request", 500)
+```
++
+> [!IMPORTANT]
+> The `AMLRequest` class is in the `azureml.contrib` namespace. Entities in this namespace change frequently as we work to improve the service. Anything in this namespace should be considered a preview that's not fully supported by Microsoft.
+>
+> If you need to test this in your local development environment, you can install the components by using the following command:
+>
+> ```shell
+> pip install azureml-contrib-services
+> ```
+
+The `AMLRequest` class only allows you to access the raw posted data in score.py; there's no client-side component. From a client, you post data as normal. For example, the following Python code reads an image file and posts the data:
+
+```python
+import requests
+
+uri = service.scoring_uri
+image_path = 'test.jpg'
+files = {'image': open(image_path, 'rb').read()}
+response = requests.post(uri, files=files)
+
+print(response.json())
+```
+
+<a id="cors"></a>
+
+## Cross-origin resource sharing (CORS)
+
+Cross-origin resource sharing is a way to allow resources on a webpage to be requested from another domain. CORS works via HTTP headers sent with the client request and returned with the service response. For more information on CORS and valid headers, see [Cross-origin resource sharing](https://en.wikipedia.org/wiki/Cross-origin_resource_sharing) in Wikipedia.
+
+To configure your model deployment to support CORS, use the `AMLResponse` class in your entry script. This class allows you to set the headers on the response object.
+
+The following example sets the `Access-Control-Allow-Origin` header for the response from the entry script:
+
+```python
+from azureml.contrib.services.aml_request import AMLRequest, rawhttp
+from azureml.contrib.services.aml_response import AMLResponse
++
+def init():
+ print("This is init()")
+
+@rawhttp
+def run(request):
+ print("This is run()")
+ print("Request: [{0}]".format(request))
+ if request.method == 'GET':
+ # For this example, just return the URL for GET.
+ # For a real-world solution, you would load the data from URL params or headers
+ # and send it to the model. Then return the response.
+ respBody = str.encode(request.full_path)
+ resp = AMLResponse(respBody, 200)
+ resp.headers["Allow"] = "OPTIONS, GET, POST"
+ resp.headers["Access-Control-Allow-Methods"] = "OPTIONS, GET, POST"
+ resp.headers['Access-Control-Allow-Origin'] = "http://www.example.com"
+ resp.headers['Access-Control-Allow-Headers'] = "*"
+ return resp
+ elif request.method == 'POST':
+ reqBody = request.get_data(False)
+ # For a real-world solution, you would load the data from reqBody
+ # and send it to the model. Then return the response.
+ resp = AMLResponse(reqBody, 200)
+ resp.headers["Allow"] = "OPTIONS, GET, POST"
+ resp.headers["Access-Control-Allow-Methods"] = "OPTIONS, GET, POST"
+ resp.headers['Access-Control-Allow-Origin'] = "http://www.example.com"
+ resp.headers['Access-Control-Allow-Headers'] = "*"
+ return resp
+ elif request.method == 'OPTIONS':
+ resp = AMLResponse("", 200)
+ resp.headers["Allow"] = "OPTIONS, GET, POST"
+ resp.headers["Access-Control-Allow-Methods"] = "OPTIONS, GET, POST"
+ resp.headers['Access-Control-Allow-Origin'] = "http://www.example.com"
+ resp.headers['Access-Control-Allow-Headers'] = "*"
+ return resp
+ else:
+ return AMLResponse("bad request", 400)
+```
+
+> [!IMPORTANT]
+> The `AMLResponse` class is in the `azureml.contrib` namespace. Entities in this namespace change frequently as we work to improve the service. Anything in this namespace should be considered a preview that's not fully supported by Microsoft.
+>
+> If you need to test this in your local development environment, you can install the components by using the following command:
+>
+> ```shell
+> pip install azureml-contrib-services
+> ```
++
+> [!WARNING]
+> Azure Machine Learning will route only POST and GET requests to the containers running the scoring service. This can cause errors due to browsers using OPTIONS requests to pre-flight CORS requests.
+>
++
+## Load registered models
+
+There are two ways to locate models in your entry script:
+* `AZUREML_MODEL_DIR`: An environment variable containing the path to the model location.
+* `Model.get_model_path`: An API that returns the path to the model file, using the registered model name.
+
+### AZUREML_MODEL_DIR
+
+`AZUREML_MODEL_DIR` is an environment variable created during service deployment. You can use this environment variable to find the location of the deployed model(s).
+
+The following table describes the value of `AZUREML_MODEL_DIR` depending on the number of models deployed:
+
+| Deployment | Environment variable value |
+| -- | -- |
+| Single model | The path to the folder containing the model. |
+| Multiple models | The path to the folder containing all models. Models are located by name and version in this folder (`$MODEL_NAME/$VERSION`) |
+
+During model registration and deployment, models are placed in the `AZUREML_MODEL_DIR` path, and their original filenames are preserved.
+
+To get the path to a model file in your entry script, combine the environment variable with the file path you're looking for.
+
+**Single model example**
+```python
+# Example when the model is a file
+model_path = os.path.join(os.getenv('AZUREML_MODEL_DIR'), 'sklearn_regression_model.pkl')
+
+# Example when the model is a folder containing a file
+file_path = os.path.join(os.getenv('AZUREML_MODEL_DIR'), 'my_model_folder', 'sklearn_regression_model.pkl')
+```
+
+**Multiple model example**
+
+In this scenario, two models are registered with the workspace:
+
+* `my_first_model`: Contains one file (`my_first_model.pkl`) and there's only one version (`1`).
+* `my_second_model`: Contains one file (`my_second_model.pkl`) and there are two versions; `1` and `2`.
+
+When the service was deployed, both models are provided in the deploy operation:
+
+```python
+first_model = Model(ws, name="my_first_model", version=1)
+second_model = Model(ws, name="my_second_model", version=2)
+service = Model.deploy(ws, "myservice", [first_model, second_model], inference_config, deployment_config)
+```
+
+In the Docker image that hosts the service, the `AZUREML_MODEL_DIR` environment variable contains the directory where the models are located.
+In this directory, each of the models is located in a directory path of `MODEL_NAME/VERSION`. Where `MODEL_NAME` is the name of the registered model, and `VERSION` is the version of the model. The files that make up the registered model are stored in these directories.
+
+In this example, the paths would be `$AZUREML_MODEL_DIR/my_first_model/1/my_first_model.pkl` and `$AZUREML_MODEL_DIR/my_second_model/2/my_second_model.pkl`.
++
+```python
+# Example when the model is a file, and the deployment contains multiple models
+first_model_name = 'my_first_model'
+first_model_version = '1'
+first_model_path = os.path.join(os.getenv('AZUREML_MODEL_DIR'), first_model_name, first_model_version, 'my_first_model.pkl')
+second_model_name = 'my_second_model'
+second_model_version = '2'
+second_model_path = os.path.join(os.getenv('AZUREML_MODEL_DIR'), second_model_name, second_model_version, 'my_second_model.pkl')
+```
+
+### get_model_path
+
+When you register a model, you provide a model name that's used for managing the model in the registry. You use this name with the [Model.get_model_path()](/python/api/azureml-core/azureml.core.model.model#get-model-path-model-name--version-noneworkspace-none-) method to retrieve the path of the model file or files on the local file system. If you register a folder or a collection of files, this API returns the path of the directory that contains those files.
+
+When you register a model, you give it a name. The name corresponds to where the model is placed, either locally or during service deployment.
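+
+As a rough sketch, an entry script's `init()` might load a model by its registered name. The model name and file format here are illustrative, and `joblib` must be available in the deployment environment.
+
+```python
+import joblib
+from azureml.core.model import Model
+
+def init():
+    global model
+    # Resolves to the local path of the registered model's files inside the deployment.
+    model_path = Model.get_model_path(model_name="sklearn_regression_model")
+    model = joblib.load(model_path)
+```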
+
+## Framework-specific examples
+
+More entry script examples for specific machine learning use cases can be found below:
+
+* [PyTorch](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/ml-frameworks/pytorch)
+* [TensorFlow](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/ml-frameworks/tensorflow)
+* [Keras](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/ml-frameworks/keras/train-hyperparameter-tune-deploy-with-keras/train-hyperparameter-tune-deploy-with-keras.ipynb)
+* [AutoML](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/automated-machine-learning/classification-bank-marketing-all-features)
+* [ONNX](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/deployment/onnx/)
+
+## Next steps
+
+* [Troubleshoot a failed deployment](how-to-troubleshoot-deployment.md)
+* [Deploy to Azure Kubernetes Service](how-to-deploy-azure-kubernetes-service.md)
+* [Create client applications to consume web services](how-to-consume-web-service.md)
+* [Update web service](how-to-deploy-update-web-service.md)
+* [How to deploy a model using a custom Docker image](../how-to-deploy-custom-container.md)
+* [Use TLS to secure a web service through Azure Machine Learning](how-to-secure-web-service.md)
+* [Monitor your Azure Machine Learning models with Application Insights](how-to-enable-app-insights.md)
+* [Collect data for models in production](how-to-enable-data-collection.md)
+* [Create event alerts and triggers for model deployments](../how-to-use-event-grid.md)
machine-learning How To Deploy And Where https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-deploy-and-where.md
--++ Last updated 07/28/2022
When you deploy remotely, you may have key authentication enabled. The example b
-See the article on [client applications to consume web services](../how-to-consume-web-service.md) for more example clients in other languages.
+See the article on [client applications to consume web services](how-to-consume-web-service.md) for more example clients in other languages.
[!INCLUDE [Email Notification Include](../../../includes/machine-learning-email-notifications.md)]
For more information, see the documentation for [WebService.delete()](/python/ap
## Next steps
-* [Troubleshoot a failed deployment](../how-to-troubleshoot-deployment.md)
+* [Troubleshoot a failed deployment](how-to-troubleshoot-deployment.md)
* [Update web service](../how-to-deploy-update-web-service.md) * [One click deployment for automated ML runs in the Azure Machine Learning studio](../how-to-use-automated-ml-for-ml-models.md#deploy-your-model) * [Use TLS to secure a web service through Azure Machine Learning](../how-to-secure-web-service.md)
machine-learning How To Deploy Azure Container Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-deploy-azure-container-instance.md
Learn how to use Azure Machine Learning to deploy a model as a web service on Az
For information on quota and region availability for ACI, see [Quotas and region availability for Azure Container Instances](../../container-instances/container-instances-quotas.md) article. > [!IMPORTANT]
-> It is highly advised to debug locally before deploying to the web service, for more information see [Debug Locally](../how-to-troubleshoot-deployment-local.md)
+> It is highly advised to debug locally before deploying to the web service, for more information see [Debug Locally](how-to-troubleshoot-deployment-local.md)
> > You can also refer to Azure Machine Learning - [Deploy to Local Notebook](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/deployment/deploy-to-local)
See [how to manage resources in VS Code](../how-to-manage-resources-vscode.md).
## Next steps * [How to deploy a model using a custom Docker image](../how-to-deploy-custom-container.md)
-* [Deployment troubleshooting](../how-to-troubleshoot-deployment.md)
-* [Update the web service](../how-to-deploy-update-web-service.md)
-* [Use TLS to secure a web service through Azure Machine Learning](../how-to-secure-web-service.md)
-* [Consume a ML Model deployed as a web service](../how-to-consume-web-service.md)
+* [Deployment troubleshooting](how-to-troubleshoot-deployment.md)
+* [Update the web service](how-to-deploy-update-web-service.md)
+* [Use TLS to secure a web service through Azure Machine Learning](how-to-secure-web-service.md)
+* [Consume a ML Model deployed as a web service](how-to-consume-web-service.md)
* [Monitor your Azure Machine Learning models with Application Insights](../how-to-enable-app-insights.md)
-* [Collect data for models in production](../how-to-enable-data-collection.md)
+* [Collect data for models in production](how-to-enable-data-collection.md)
machine-learning How To Deploy Azure Kubernetes Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-deploy-azure-kubernetes-service.md
Previously updated : 10/21/2021 Last updated : 08/15/2022 # Deploy a model to an Azure Kubernetes Service cluster with v1
Learn how to use Azure Machine Learning to deploy a model as a web service on Az
When deploying to Azure Kubernetes Service, you deploy to an AKS cluster that is __connected to your workspace__. For information on connecting an AKS cluster to your workspace, see [Create and attach an Azure Kubernetes Service cluster](../how-to-create-attach-kubernetes.md). > [!IMPORTANT]
-> We recommend that you debug locally before deploying to the web service. For more information, see [Debug Locally](../how-to-troubleshoot-deployment-local.md)
+> We recommend that you debug locally before deploying to the web service. For more information, see [Debug Locally](how-to-troubleshoot-deployment-local.md)
> > You can also refer to Azure Machine Learning - [Deploy to Local Notebook](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/deployment/deploy-to-local)
When deploying to Azure Kubernetes Service, you deploy to an AKS cluster that is
- An Azure Kubernetes Service cluster connected to your workspace. For more information, see [Create and attach an Azure Kubernetes Service cluster](../how-to-create-attach-kubernetes.md).
- - If you want to deploy models to GPU nodes or FPGA nodes (or any specific SKU), then you must create a cluster with the specific SKU. There is no support for creating a secondary node pool in an existing cluster and deploying models in the secondary node pool.
+ - If you want to deploy models to GPU nodes or FPGA nodes (or any specific SKU), then you must create a cluster with the specific SKU. There's no support for creating a secondary node pool in an existing cluster and deploying models in the secondary node pool.
## Understand the deployment processes
In Azure Machine Learning, "deployment" is used in the more general sense of mak
- Custom docker steps (see [Deploy a model using a custom Docker base image](../how-to-deploy-custom-container.md)) - The conda definition YAML (see [Create & use software environments in Azure Machine Learning](../how-to-use-environments.md)) 1. The system uses this hash as the key in a lookup of the workspace Azure Container Registry (ACR)
- 1. If it is not found, it looks for a match in the global ACR
- 1. If it is not found, the system builds a new image (which will be cached and pushed to the workspace ACR)
+ 1. If it isn't found, it looks for a match in the global ACR
+ 1. If it isn't found, the system builds a new image (which will be cached and pushed to the workspace ACR)
1. Downloading your zipped project file to temporary storage on the compute node 1. Unzipping the project file 1. The compute node executing `python <entry script> <arguments>`
In Azure Machine Learning, "deployment" is used in the more general sense of mak
### Azure ML router
-The front-end component (azureml-fe) that routes incoming inference requests to deployed services automatically scales as needed. Scaling of azureml-fe is based on the AKS cluster purpose and size (number of nodes). The cluster purpose and nodes are configured when you [create or attach an AKS cluster](../how-to-create-attach-kubernetes.md). There is one azureml-fe service per cluster, which may be running on multiple pods.
+The front-end component (azureml-fe) that routes incoming inference requests to deployed services automatically scales as needed. Scaling of azureml-fe is based on the AKS cluster purpose and size (number of nodes). The cluster purpose and nodes are configured when you [create or attach an AKS cluster](../how-to-create-attach-kubernetes.md). There's one azureml-fe service per cluster, which may be running on multiple pods.
> [!IMPORTANT] > When using a cluster configured as __dev-test__, the self-scaler is **disabled**. Even for FastProd/DenseProd clusters, Self-Scaler is only enabled when telemetry shows that it's needed.
Azureml-fe scales both up (vertically) to use more cores, and out (horizontally)
When scaling down and in, CPU usage is used. If the CPU usage threshold is met, the front end will first be scaled down. If the CPU usage drops to the scale-in threshold, a scale-in operation happens. Scaling up and out will only occur if there are enough cluster resources available.
-When scale-up or scale-down, azureml-fe pods will be restarted to apply the cpu/memory changes. Inferencing requests are not affected by the restarts.
+When scaling up or down, azureml-fe pods are restarted to apply the CPU/memory changes. Inferencing requests aren't affected by the restarts.
<a id="connectivity"></a>
DNS resolution within an existing VNet is under your control. For example, a fir
| `<account>.blob.core.windows.net` | Azure Storage Account (blob storage) | | `api.azureml.ms` | Azure Active Directory (Azure AD) authentication | | `ingest-vienna<region>.kusto.windows.net` | Kusto endpoint for uploading telemetry |
-| `<leaf-domain-label + auto-generated suffix>.<region>.cloudapp.azure.com` | Endpoint domain name, if you autogenerated by Azure Machine Learning. If you used a custom domain name, you do not need this entry. |
+| `<leaf-domain-label + auto-generated suffix>.<region>.cloudapp.azure.com` | Endpoint domain name, if autogenerated by Azure Machine Learning. If you used a custom domain name, you don't need this entry. |
### Connectivity requirements in chronological order: from cluster creation to model deployment
In the process of AKS create or attach, Azure ML router (azureml-fe) is deployed
Right after azureml-fe is deployed, it will attempt to start and this requires to: * Resolve DNS for AKS API server
-* Query AKS API server to discover other instances of itself (it is a multi-pod service)
+* Query AKS API server to discover other instances of itself (it's a multi-pod service)
* Connect to other instances of itself Once azureml-fe is started, it requires the following connectivity to function properly:
At model deployment time, for a successful model deployment AKS node should be a
* Resolve DNS for Azure BLOBs where model is stored * Download models from Azure BLOBs
-After the model is deployed and service starts, azureml-fe will automatically discover it using AKS API and will be ready to route request to it. It must be able to communicate to model PODs.
+After the model is deployed and the service starts, azureml-fe will automatically discover it by using the AKS API, and will be ready to route requests to it. It must be able to communicate with the model pods.
>[!Note] >If the deployed model requires any connectivity (e.g. querying external database or other REST service, downloading a BLOB etc), then both DNS resolution and outbound communication for these services should be enabled.
aks_config = AksWebservice.deploy_configuration(autoscale_enabled=True,
autoscale_max_replicas=4) ```
-Decisions to scale up/down is based off of utilization of the current container replicas. The number of replicas that are busy (processing a request) divided by the total number of current replicas is the current utilization. If this number exceeds `autoscale_target_utilization`, then more replicas are created. If it is lower, then replicas are reduced. By default, the target utilization is 70%.
+Decisions to scale up or down are based on the utilization of the current container replicas. The number of replicas that are busy (processing a request) divided by the total number of current replicas is the current utilization. If this number exceeds `autoscale_target_utilization`, more replicas are created. If it's lower, replicas are reduced. By default, the target utilization is 70%.
Decisions to add replicas are eager and fast (around 1 second). Decisions to remove replicas are conservative (around 1 minute).
To __disable__ authentication, set the `auth_enabled=False` parameter when creat
deployment_config = AksWebservice.deploy_configuration(cpu_cores=1, memory_gb=1, auth_enabled=False) ```
-For information on authenticating from a client application, see the [Consume an Azure Machine Learning model deployed as a web service](../how-to-consume-web-service.md).
+For information on authenticating from a client application, see the [Consume an Azure Machine Learning model deployed as a web service](how-to-consume-web-service.md).
### Authentication with keys
print(primary)
### Authentication with tokens
-To enable token authentication, set the `token_auth_enabled=True` parameter when you are creating or updating a deployment. The following example enables token authentication using the SDK:
+To enable token authentication, set the `token_auth_enabled=True` parameter when you're creating or updating a deployment. The following example enables token authentication using the SDK:
```python deployment_config = AksWebservice.deploy_configuration(cpu_cores=1, memory_gb=1, token_auth_enabled=True)
Microsoft Defender for Cloud provides unified security management and advanced t
* [Use Azure RBAC for Kubernetes authorization](../../aks/manage-azure-rbac.md) * [Secure inferencing environment with Azure Virtual Network](how-to-secure-inferencing-vnet.md) * [How to deploy a model using a custom Docker image](../how-to-deploy-custom-container.md)
-* [Deployment troubleshooting](../how-to-troubleshoot-deployment.md)
+* [Deployment troubleshooting](how-to-troubleshoot-deployment.md)
* [Update web service](../how-to-deploy-update-web-service.md) * [Use TLS to secure a web service through Azure Machine Learning](how-to-secure-web-service.md)
-* [Consume a ML Model deployed as a web service](../how-to-consume-web-service.md)
+* [Consume a ML Model deployed as a web service](how-to-consume-web-service.md)
* [Monitor your Azure Machine Learning models with Application Insights](../how-to-enable-app-insights.md)
-* [Collect data for models in production](../how-to-enable-data-collection.md)
+* [Collect data for models in production](how-to-enable-data-collection.md)
machine-learning How To Deploy Inferencing Gpus https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-deploy-inferencing-gpus.md
print("label:", y_test[random_index])
print("prediction:", resp.text) ```
-For more information on creating a client application, see [Create client to consume deployed web service](../how-to-consume-web-service.md).
+For more information on creating a client application, see [Create client to consume deployed web service](how-to-consume-web-service.md).
## Clean up the resources
machine-learning How To Deploy Local Container Notebook Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-deploy-local-container-notebook-vm.md
+
+ Title: Deploy models to compute instances
+
+description: 'Learn how to deploy your Azure Machine Learning models as a web service using compute instances.'
+ Last updated : 04/22/2021
+# Deploy a model locally
+
+Learn how to use Azure Machine Learning to deploy a model as a web service on your Azure Machine Learning compute instance. Use compute instances if one of the following conditions is true:
+
+- You need to quickly deploy and validate your model.
+- You are testing a model that is under development.
+
+> [!TIP]
+> Deploying a model from a Jupyter Notebook on a compute instance to a web service on the same VM is a _local deployment_. In this case, the 'local' computer is the compute instance.
++
+## Prerequisites
+
+- An Azure Machine Learning workspace with a compute instance running. For more information, see [Quickstart: Get started with Azure Machine Learning](../quickstart-create-resources.md).
+
+## Deploy to the compute instances
+
+An example notebook that demonstrates local deployments is included on your compute instance. Use the following steps to load the notebook and deploy the model as a web service on the VM:
+
+1. From [Azure Machine Learning studio](https://ml.azure.com), select "Notebooks", and then select how-to-use-azureml/deployment/deploy-to-local/register-model-deploy-local.ipynb under "Sample notebooks". Clone this notebook to your user folder.
+
+1. Find the notebook cloned in step 1, and choose or create a compute instance to run the notebook.
+
+ ![Screenshot of the running local service on notebook](./media/how-to-deploy-local-container-notebook-vm/deploy-local-service.png)
++
+1. The notebook displays the URL and port that the service is running on. For example, `https://localhost:6789`. You can also run the cell containing `print('Local service port: {}'.format(local_service.port))` to display the port.
+
+ ![Screenshot of the running local service port](./media/how-to-deploy-local-container-notebook-vm/deploy-local-service-port.png)
+
+1. To test the service from a compute instance, use the `https://localhost:<local_service.port>` URL. To test from a remote client, get the public URL of the service running on the compute instance. The public URL can be determined by using the following formula (a Python sketch follows the examples):
+ * Notebook VM: `https://<vm_name>-<local_service_port>.<azure_region_of_workspace>.notebooks.azureml.net/score`.
+ * Compute instance: `https://<vm_name>-<local_service_port>.<azure_region_of_workspace>.instances.azureml.net/score`.
+
+ For example,
+ * Notebook VM: `https://vm-name-6789.northcentralus.notebooks.azureml.net/score`
+ * Compute instance: `https://vm-name-6789.northcentralus.instances.azureml.net/score`
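+
+A minimal sketch of composing the public URL in Python; the VM name, port, and region are placeholder values:
+
+```python
+# Compose the public scoring URL for a compute instance.
+# Use "notebooks" instead of "instances" for a notebook VM.
+vm_name = "vm-name"
+local_service_port = 6789
+region = "northcentralus"
+
+service_url = f"https://{vm_name}-{local_service_port}.{region}.instances.azureml.net/score"
+print(service_url)
+```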
+
+## Test the service
+
+To submit sample data to the running service, use the following code. Replace the value of `service_url` with the URL from the previous step:
+
+> [!NOTE]
+> When authenticating to a deployment on the compute instance, the authentication is made using Azure Active Directory. The call to `interactive_auth.get_authentication_header()` in the example code authenticates you using AAD, and returns a header that can then be used to authenticate to the service on the compute instance. For more information, see [Set up authentication for Azure Machine Learning resources and workflows](how-to-setup-authentication.md#use-interactive-authentication).
+>
+> When authenticating to a deployment on Azure Kubernetes Service or Azure Container Instances, a different authentication method is used. For more information, see [Configure authentication for Azure Machine Learning models deployed as web services](how-to-authenticate-web-service.md).
+
+```python
+import requests
+import json
+from azureml.core.authentication import InteractiveLoginAuthentication
+
+# Get a token to authenticate to the compute instance from remote
+interactive_auth = InteractiveLoginAuthentication()
+auth_header = interactive_auth.get_authentication_header()
+
+# Create and submit a request using the auth header
+headers = auth_header
+# Add content type header
+headers.update({'Content-Type':'application/json'})
+
+# Sample data to send to the service
+test_sample = json.dumps({'data': [
+ [1,2,3,4,5,6,7,8,9,10],
+ [10,9,8,7,6,5,4,3,2,1]
+]})
+test_sample = bytes(test_sample,encoding = 'utf8')
+
+# Replace with the URL for your compute instance, as determined from the previous section
+service_url = "https://vm-name-6789.northcentralus.notebooks.azureml.net/score"
+# for a compute instance, the url would be https://vm-name-6789.northcentralus.instances.azureml.net/score
+resp = requests.post(service_url, test_sample, headers=headers)
+print("prediction:", resp.text)
+```
+
+## Next steps
+
+* [How to deploy a model using a custom Docker image](../how-to-deploy-custom-container.md)
+* [Deployment troubleshooting](how-to-troubleshoot-deployment.md)
+* [Use TLS to secure a web service through Azure Machine Learning](how-to-secure-web-service.md)
+* [Consume a ML Model deployed as a web service](how-to-consume-web-service.md)
+* [Monitor your Azure Machine Learning models with Application Insights](how-to-enable-app-insights.md)
+* [Collect data for models in production](how-to-enable-data-collection.md)
machine-learning How To Deploy Local https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-deploy-local.md
+
+ Title: How to run and deploy locally
+
+description: 'This article describes how to use your local computer as a target for training, debugging, or deploying models created in Azure Machine Learning.'
+ Last updated : 08/15/2022
+# Deploy models trained with Azure Machine Learning on your local machines
++
+This article describes how to use your local computer as a target for training or deploying models created in Azure Machine Learning. Azure Machine Learning is flexible enough to work with most Python machine learning frameworks. Machine learning solutions generally have complex dependencies that can be difficult to duplicate. This article will show you how to balance total control with ease of use.
+
+Scenarios for local deployment include:
+
+* Quickly iterating data, scripts, and models early in a project.
+* Debugging and troubleshooting in later stages.
+* Final deployment on user-managed hardware.
+
+## Prerequisites
+
+- An Azure Machine Learning workspace. For more information, see [Create workspace resources](../quickstart-create-resources.md).
+- A model and an environment. If you don't have a trained model, you can use the model and dependency files provided in [this tutorial](../tutorial-train-deploy-notebook.md).
+- The [Azure Machine Learning SDK for Python](/python/api/overview/azure/ml/intro).
+- A conda manager, like Anaconda or Miniconda, if you want to mirror Azure Machine Learning package dependencies.
+- Docker, if you want to use a containerized version of the Azure Machine Learning environment.
+
+## Prepare your local machine
+
+The most reliable way to locally run an Azure Machine Learning model is with a Docker image. A Docker image provides an isolated, containerized experience that duplicates, except for hardware issues, the Azure execution environment. For more information on installing and configuring Docker for development scenarios, see [Overview of Docker remote development on Windows](/windows/dev-environment/docker/overview).
+
+It's possible to attach a debugger to a process running in Docker. (See [Attach to a running container](https://code.visualstudio.com/docs/remote/attach-container).) But you might prefer to debug and iterate your Python code without involving Docker. In this scenario, it's important that your local machine uses the same libraries that are used when you run your experiment in Azure Machine Learning. To manage Python dependencies, Azure uses [conda](https://docs.conda.io/). You can re-create the environment by using other package managers, but installing and configuring conda on your local machine is the easiest way to synchronize.
+
+> [!IMPORTANT]
+> GPU base images can't be used for local deployment, unless the local deployment is on an Azure Machine Learning compute instance. GPU base images are supported only on Microsoft Azure Services such as Azure Machine Learning compute clusters and instances, Azure Container Instance (ACI), Azure VMs, or Azure Kubernetes Service (AKS).
+
+## Prepare your entry script
+
+Even if you use Docker to manage the model and dependencies, the Python scoring script must be local. The script must have two methods:
+
+- An `init()` method that takes no arguments and returns nothing
+- A `run()` method that takes a JSON-formatted string and returns a JSON-serializable object
+
+The argument to the `run()` method will be in this form:
+
+```json
+{
+ "data": <model-specific-data-structure>
+}
+```
+
+The object you return from the `run()` method must implement `toJSON() -> string`.
+
+The following example demonstrates how to load a registered scikit-learn model and score it by using NumPy data. This example is based on the model and dependencies of [this tutorial](../tutorial-train-deploy-notebook.md).
+
+```python
+import json
+import numpy as np
+import os
+import pickle
+import joblib
+
+def init():
+ global model
+ # AZUREML_MODEL_DIR is an environment variable created during deployment.
+ # It's the path to the model folder (./azureml-models/$MODEL_NAME/$VERSION).
+ # For multiple models, it points to the folder containing all deployed models (./azureml-models).
+ model_path = os.path.join(os.getenv('AZUREML_MODEL_DIR'), 'sklearn_mnist_model.pkl')
+ model = joblib.load(model_path)
+
+def run(raw_data):
+ data = np.array(json.loads(raw_data)['data'])
+ # Make prediction.
+ y_hat = model.predict(data)
+ # You can return any data type as long as it's JSON-serializable.
+ return y_hat.tolist()
+```
+
+For more advanced examples, including automatic Swagger schema generation and scoring binary data (for example, images), see [Advanced entry script authoring](how-to-deploy-advanced-entry-script.md).
+
+## Deploy as a local web service by using Docker
+
+The easiest way to replicate the environment used by Azure Machine Learning is to deploy a web service by using Docker. With Docker running on your local machine, you will:
+
+1. Connect to the Azure Machine Learning workspace in which your model is registered.
+1. Create a `Model` object that represents the model.
+1. Create an `Environment` object that contains the dependencies and defines the software environment in which your code will run.
+1. Create an `InferenceConfig` object that associates the entry script with the `Environment`.
+1. Create a `DeploymentConfiguration` object of the subclass `LocalWebserviceDeploymentConfiguration`.
+1. Use `Model.deploy()` to create a `Webservice` object. This method downloads the Docker image and associates it with the `Model`, `InferenceConfig`, and `DeploymentConfiguration`.
+1. Activate the `Webservice` by using `Webservice.wait_for_deployment()`.
+
+The following code shows these steps:
+
+```python
+from azureml.core.webservice import LocalWebservice
+from azureml.core.model import InferenceConfig
+from azureml.core.environment import Environment
+from azureml.core import Workspace
+from azureml.core.model import Model
+
+ws = Workspace.from_config()
+model = Model(ws, 'sklearn_mnist')
++
+myenv = Environment.get(workspace=ws, name="tutorial-env", version="1")
+inference_config = InferenceConfig(entry_script="score.py", environment=myenv)
+
+deployment_config = LocalWebservice.deploy_configuration(port=6789)
+
+local_service = Model.deploy(workspace=ws,
+ name='sklearn-mnist-local',
+ models=[model],
+ inference_config=inference_config,
+ deployment_config = deployment_config)
+
+local_service.wait_for_deployment(show_output=True)
+print(f"Scoring URI is : {local_service.scoring_uri}")
+```
+
+The call to `Model.deploy()` can take a few minutes. After you've initially deployed the web service, it's more efficient to use the `update()` method rather than starting from scratch. See [Update a deployed web service](how-to-deploy-update-web-service.md).
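+
+A minimal sketch of iterating on the existing local deployment with `update()`, assuming the objects from the previous snippet:
+
+```python
+# Update the existing local web service in place instead of redeploying from scratch.
+local_service.update(models=[model], inference_config=inference_config)
+local_service.wait_for_deployment(show_output=True)
+```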
++
+### Test your local deployment
+
+When you run the previous deployment script, it will output the URI to which you can POST data for scoring (for example, `http://localhost:6789/score`). The following sample shows a script that scores sample data by using the `"sklearn-mnist-local"` locally deployed model. The model, if properly trained, infers that `normalized_pixel_values` should be interpreted as a "2".
+
+```python
+import requests
+
+normalized_pixel_values = "[\
+0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, \
+0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, \
+0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, \
+0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.5, 0.5, 0.7, 1.0, 1.0, 0.6, 0.4, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, \
+0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.7, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 0.9, 0.1, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, \
+0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.7, 1.0, 1.0, 1.0, 0.8, 0.6, 0.7, 1.0, 1.0, 0.5, 0.0, 0.0, 0.0, 0.0, 0.0, \
+0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.2, 1.0, 1.0, 0.8, 0.1, 0.0, 0.0, 0.0, 0.8, 1.0, 0.5, 0.0, 0.0, 0.0, 0.0, \
+0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.3, 1.0, 0.8, 0.1, 0.0, 0.0, 0.0, 0.5, 1.0, 1.0, 0.3, 0.0, 0.0, 0.0, \
+0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.1, 0.1, 0.0, 0.0, 0.0, 0.0, 0.8, 1.0, 1.0, 0.3, 0.0, 0.0, \
+0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.5, 1.0, 1.0, 0.8, 0.0, 0.0, \
+0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.3, 1.0, 1.0, 0.9, 0.2, 0.0, \
+0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.5, 1.0, 1.0, 0.6, 0.0, \
+0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.7, 1.0, 1.0, 0.6, 0.0, \
+0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.1, 0.9, 1.0, 0.9, 0.1, \
+0.1, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.8, 1.0, 1.0, 0.6, \
+0.6, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.3, 1.0, 1.0, 0.7, \
+0.7, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.1, 0.8, 1.0, 1.0, \
+1.0, 0.6, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.5, 1.0, 1.0, \
+1.0, 0.7, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 1.0, \
+1.0, 1.0, 0.1, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0, \
+1.0, 1.0, 1.0, 0.2, 0.1, 0.1, 0.1, 0.1, 0.0, 0.0, 0.0, 0.1, 0.1, 0.1, 0.6, 0.6, 0.6, 0.6, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, \
+0.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 0.7, 0.6, 0.7, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 0.5, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, \
+0.0, 0.0, 0.7, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 0.7, 0.5, 0.5, 0.2, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, \
+0.0, 0.0, 0.0, 0.0, 0.5, 0.5, 0.5, 0.5, 0.7, 1.0, 1.0, 1.0, 0.6, 0.5, 0.5, 0.2, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, \
+0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, \
+0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, \
+0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, \
+0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]"
+
+input_data = "{\"data\": [" + normalized_pixel_values + "]}"
+
+headers = {'Content-Type': 'application/json'}
+
+scoring_uri = "http://localhost:6789/score"
+resp = requests.post(scoring_uri, input_data, headers=headers)
+
+print("Should be predicted as '2'")
+print("prediction:", resp.text)
+```
+
+## Download and run your model directly
+
+Using Docker to deploy your model as a web service is the most common option. But you might want to run your code directly by using local Python scripts. You'll need two important components:
+
+- The model itself
+- The dependencies upon which the model relies
+
+You can download the model:
+
+- From the portal, by selecting the **Models** tab, selecting the desired model, and on the **Details** page, selecting **Download**.
+- From the command line, by using `az ml model download`. (See [model download.](/cli/azure/ml/model#az-ml-model-download))
+- By using the Python SDK `Model.download()` method. (See [Model class.](/python/api/azureml-core/azureml.core.model.model#download-target-direxist-ok-false--exists-ok-none-))
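+
+A minimal sketch of the SDK route, assuming the `sklearn_mnist` model used earlier and a placeholder target directory:
+
+```python
+from azureml.core import Workspace
+from azureml.core.model import Model
+
+ws = Workspace.from_config()
+model = Model(ws, name="sklearn_mnist")
+
+# Download the model files to a local folder.
+model.download(target_dir="./downloaded_model", exist_ok=True)
+```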
+
+An Azure model may be in whatever form your framework uses but is generally one or more serialized Python objects, packaged as a Python pickle file (`.pkl` extension). The contents of the pickle file depend on the machine learning library or technique used to train the model. For example, if you're using the model from the tutorial, you might load the model with:
+
+```python
+import pickle
+
+with open('sklearn_mnist_model.pkl', 'rb') as f :
+ logistic_model = pickle.load(f, encoding='latin1')
+```
+
+Dependencies are always tricky to get right, especially with machine learning, where there can often be a dizzying web of specific version requirements. You can re-create an Azure Machine Learning environment on your local machine either as a complete conda environment or as a Docker image by using the `build_local()` method of the `Environment` class:
+
+```python
+ws = Workspace.from_config()
+myenv = Environment.get(workspace=ws, name="tutorial-env", version="1")
+myenv.build_local(workspace=ws, useDocker=False) #Creates conda environment.
+```
+
+If you set the `build_local()` `useDocker` argument to `True`, the function will create a Docker image rather than a conda environment. If you want more control, you can use the `save_to_directory()` method of `Environment`, which writes `conda_dependencies.yml` and `azureml_environment.json` definition files that you can fine-tune and use as the basis for extension.
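+
+A minimal sketch of exporting those definition files, assuming `myenv` from the previous snippet and a placeholder output path:
+
+```python
+# Write conda_dependencies.yml and azureml_environment.json to a local folder.
+myenv.save_to_directory(path="./tutorial-env-definition", overwrite=True)
+```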
+
+The `Environment` class has many other methods for synchronizing environments across your compute hardware, your Azure workspace, and Docker images. For more information, see [Environment class](/python/api/azureml-core/azureml.core.environment(class)).
+
+After you download the model and resolve its dependencies, there are no Azure-defined restrictions on how you perform scoring, fine-tune the model, use transfer learning, and so forth.
+
+## Upload a retrained model to Azure Machine Learning
+
+If you have a locally trained or retrained model, you can register it with Azure. After it's registered, you can continue tuning it by using Azure compute or deploy it by using Azure facilities like [Azure Kubernetes Service](how-to-deploy-azure-kubernetes-service.md) or [Triton Inference Server (Preview)](../how-to-deploy-with-triton.md).
+
+To be used with the Azure Machine Learning Python SDK, a model must be stored as a serialized Python object in pickle format (a `.pkl` file). It must also implement a `predict(data)` method that returns a JSON-serializable object. For example, you might store a locally trained scikit-learn diabetes model with:
+
+```python
+import joblib
+
+from sklearn.datasets import load_diabetes
+from sklearn.linear_model import Ridge
+
+dataset_x, dataset_y = load_diabetes(return_X_y=True)
+
+sk_model = Ridge().fit(dataset_x, dataset_y)
+
+joblib.dump(sk_model, "sklearn_regression_model.pkl")
+```
+
+To make the model available in Azure, you can then use the `register()` method of the `Model` class:
+
+```python
+from azureml.core.model import Model
+
+model = Model.register(model_path="sklearn_regression_model.pkl",
+ model_name="sklearn_regression_model",
+ tags={'area': "diabetes", 'type': "regression"},
+ description="Ridge regression model to predict diabetes",
+ workspace=ws)
+```
+
+You can then find your newly registered model on the Azure Machine Learning **Model** tab:
++
+For more information on uploading and updating models and environments, see [Register model and deploy locally with advanced usages](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/deployment/deploy-to-local/register-model-deploy-local-advanced.ipynb).
+
+## Next steps
+
+- For information on using VS Code with Azure Machine Learning, see [Connect to compute instance in Visual Studio Code (preview)](../how-to-set-up-vs-code-remote.md)
+- For more information on managing environments, see [Create & use software environments in Azure Machine Learning](how-to-use-environments.md).
+- To learn about accessing data from your datastore, see [Connect to storage services on Azure](how-to-access-data.md).
machine-learning How To Deploy Mlflow Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-deploy-mlflow-models.md
The [MLflow with Azure Machine Learning notebooks](https://github.com/Azure/Mach
## Next steps * [Manage your models](concept-model-management-and-deployment.md).
-* Monitor your production models for [data drift](../how-to-enable-data-collection.md).
+* Monitor your production models for [data drift](how-to-enable-data-collection.md).
* [Track Azure Databricks runs with MLflow](../how-to-use-mlflow-azure-databricks.md).
machine-learning How To Deploy Model Designer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-deploy-model-designer.md
+
+ Title: Use the studio to deploy models trained in the designer
+
+description: Use Azure Machine Learning studio to deploy machine learning models without writing a single line of code.
+ Last updated : 08/15/2022
+# Use the studio to deploy models trained in the designer
+
+In this article, you learn how to deploy a designer model as an online (real-time) endpoint in Azure Machine Learning studio.
+
+Once registered or downloaded, you can use designer-trained models just like any other model. Exported models can be deployed in use cases such as internet of things (IoT) and local deployments.
+
+Deployment in the studio consists of the following steps:
+
+1. Register the trained model.
+1. Download the entry script and conda dependencies file for the model.
+1. (Optional) Configure the entry script.
+1. Deploy the model to a compute target.
+
+You can also deploy models directly in the designer to skip model registration and file download steps. This can be useful for rapid deployment. For more information, see [Deploy a model with the designer](../tutorial-designer-automobile-price-deploy.md).
+
+Models trained in the designer can also be deployed through the SDK or command-line interface (CLI). For more information, see [Deploy your existing model with Azure Machine Learning](how-to-deploy-and-where.md).
+
+## Prerequisites
+
+* [An Azure Machine Learning workspace](../quickstart-create-resources.md)
+
+* A completed training pipeline containing one of following components:
+ - [Train Model Component](../algorithm-module-reference/train-model.md)
+ - [Train Anomaly Detection Model component](../algorithm-module-reference/train-anomaly-detection-model.md)
+ - [Train Clustering Model component](../algorithm-module-reference/train-clustering-model.md)
+ - [Train Pytorch Model component](../algorithm-module-reference/train-pytorch-model.md)
+ - [Train SVD Recommender component](../algorithm-module-reference/train-svd-recommender.md)
+ - [Train Vowpal Wabbit Model component](../algorithm-module-reference/train-vowpal-wabbit-model.md)
+ - [Train Wide & Deep Model component](../algorithm-module-reference/train-wide-and-deep-recommender.md)
+
+## Register the model
+
+After the training pipeline completes, register the trained model to your Azure Machine Learning workspace to access the model in other projects.
+
+1. Select the [Train Model component](../algorithm-module-reference/train-model.md).
+1. Select the **Outputs + logs** tab in the right pane.
+1. Select the **Register Model** icon ![Screenshot of the gear icon](./media/how-to-deploy-model-designer/register-model-icon.png).
+
+ ![Screenshot of right pane of Train Model component](./media/how-to-deploy-model-designer/train-model-right-pane.png)
+
+1. Enter a name for your model, then select **Save**.
+
+After registering your model, you can find it in the **Models** asset page in the studio.
+
+![Screenshot of registered model in the Models asset page](./media/how-to-deploy-model-designer/models-asset-page.png)
+
+## Download the entry script file and conda dependencies file
+
+You need the following files to deploy a model in Azure Machine Learning studio:
+
+- **Entry script file** - loads the trained model, processes input data from requests, does real-time inferences, and returns the result. The designer automatically generates a `score.py` entry script file when the **Train Model** component completes.
+
+- **Conda dependencies file** - specifies which pip and conda packages your webservice depends on. The designer automatically creates a `conda_env.yaml` file when the **Train Model** component completes.
+
+You can download these two files in the right pane of the **Train Model** component:
+
+1. Select the **Train Model** component.
+1. In the **Outputs + logs** tab, select the folder `trained_model_outputs`.
+1. Download the `conda_env.yaml` file and `score.py` file.
+
+ ![Screenshot of download files for deployment in right pane](./media/how-to-deploy-model-designer/download-artifacts-in-right-pane.png)
+
+Alternatively, you can download the files from the **Models** asset page after registering your model:
+
+1. Navigate to the **Models** asset page.
+1. Select the model you want to deploy.
+1. Select the **Artifacts** tab.
+1. Select the `trained_model_outputs` folder.
+1. Download the `conda_env.yaml` file and `score.py` file.
+
+ ![Screenshot of download files for deployment in model detail page](./media/how-to-deploy-model-designer/download-artifacts-in-models-page.png)
+
+> [!NOTE]
+> The `score.py` file provides nearly the same functionality as the **Score Model** components. However, some components like [Score SVD Recommender](../algorithm-module-reference/score-svd-recommender.md), [Score Wide and Deep Recommender](../algorithm-module-reference/score-wide-and-deep-recommender.md), and [Score Vowpal Wabbit Model](../algorithm-module-reference/score-vowpal-wabbit-model.md) have parameters for different scoring modes. You can also change those parameters in the entry script.
+>
+>For more information on setting parameters in the `score.py` file, see the section, [Configure the entry script](#configure-the-entry-script).
+
+## Deploy the model
+
+After downloading the necessary files, you're ready to deploy the model.
+
+1. In the **Models** asset page, select the registered model.
+1. Select **Deploy** and select **Deploy to web service**.
+ ![Screenshot of deploy button in model asset page](./media/how-to-deploy-model-designer/open-deploy-wizard.png)
+1. In the configuration menu, enter the following information:
+
+ - Input a name for the endpoint.
+ - Select to deploy the model to [Azure Kubernetes Service](how-to-deploy-azure-kubernetes-service.md) or [Azure Container Instance](how-to-deploy-azure-container-instance.md).
+ - Upload the `score.py` for the **Entry script file**.
+ - Upload the `conda_env.yml` for the **Conda dependencies file**.
+
+ >[!TIP]
+    > In the **Advanced** setting, you can set CPU/memory capacity and other parameters for deployment. These settings are important for certain models, such as PyTorch models, which consume a considerable amount of memory (about 4 GB).
+
+1. Select **Deploy** to deploy your model as an online endpoint.
+
+ ![Screenshot of deploy model in model asset page](./media/how-to-deploy-model-designer/deploy-model.png)
+
+## Consume the online endpoint
+
+After deployment succeeds, you can find the endpoint on the **Endpoints** asset page, where you'll find a REST endpoint that clients can use to submit requests.
+
+> [!NOTE]
+> The designer also generates a sample data JSON file for testing. You can download `_samples.json` from the **trained_model_outputs** folder.
+
+Use the following code sample to consume an online endpoint.
+
+```python
+
+import json
+from pathlib import Path
+from azureml.core.workspace import Workspace, Webservice
+
+service_name = 'YOUR_SERVICE_NAME'
+ws = Workspace.get(
+ name='WORKSPACE_NAME',
+ subscription_id='SUBSCRIPTION_ID',
+ resource_group='RESOURCEGROUP_NAME'
+)
+service = Webservice(ws, service_name)
+sample_file_path = '_samples.json'
+
+with open(sample_file_path, 'r') as f:
+ sample_data = json.load(f)
+score_result = service.run(json.dumps(sample_data))
+print(f'Inference result = {score_result}')
+```
+
+### Consume computer vision related online endpoints
+
+When consuming computer vision-related online endpoints, you need to convert images to bytes, because the web service only accepts strings as input. The following is sample code:
+
+```python
+import base64
+import json
+from copy import deepcopy
+from pathlib import Path
+from azureml.studio.core.io.image_directory import (IMG_EXTS, image_from_file, image_to_bytes)
+from azureml.studio.core.io.transformation_directory import ImageTransformationDirectory
+
+# image path
+image_path = Path('YOUR_IMAGE_FILE_PATH')
+
+# provide the same parameter setting as in the training pipeline. Just an example here.
+image_transform = [
+ # format: (op, args). {} means using default parameter values of torchvision.transforms.
+ # See https://pytorch.org/docs/stable/torchvision/transforms.html
+ ('Resize', 256),
+ ('CenterCrop', 224),
+ # ('Pad', 0),
+ # ('ColorJitter', {}),
+ # ('Grayscale', {}),
+ # ('RandomResizedCrop', 256),
+ # ('RandomCrop', 224),
+ # ('RandomHorizontalFlip', {}),
+ # ('RandomVerticalFlip', {}),
+ # ('RandomRotation', 0),
+ # ('RandomAffine', 0),
+ # ('RandomGrayscale', {}),
+ # ('RandomPerspective', {}),
+]
+transform = ImageTransformationDirectory.create(transforms=image_transform).torch_transform
+
+# download _samples.json file under Outputs+logs tab in the right pane of Train Pytorch Model component
+sample_file_path = '_samples.json'
+with open(sample_file_path, 'r') as f:
+ sample_data = json.load(f)
+
+# use first sample item as the default value
+default_data = sample_data[0]
+data_list = []
+for p in image_path.iterdir():
+ if p.suffix.lower() in IMG_EXTS:
+ data = deepcopy(default_data)
+ # convert image to bytes
+ data['image'] = base64.b64encode(image_to_bytes(transform(image_from_file(p)))).decode()
+ data_list.append(data)
+
+# use data.json as input of consuming the endpoint
+data_file_path = 'data.json'
+with open(data_file_path, 'w') as f:
+ json.dump(data_list, f)
+```
+
+## Configure the entry script
+
+Some components in the designer like [Score SVD Recommender](../algorithm-module-reference/score-svd-recommender.md), [Score Wide and Deep Recommender](../algorithm-module-reference/score-wide-and-deep-recommender.md), and [Score Vowpal Wabbit Model](../algorithm-module-reference/score-vowpal-wabbit-model.md) have parameters for different scoring modes.
+
+In this section, you learn how to update these parameters in the entry script file too.
+
+The following example updates the default behavior for a trained **Wide & Deep recommender** model. By default, the `score.py` file tells the web service to predict ratings between users and items.
+
+You can modify the entry script file to make item recommendations, and return recommended items by changing the `recommender_prediction_kind` parameter.
++
+```python
+import os
+import json
+from pathlib import Path
+from collections import defaultdict
+from azureml.studio.core.io.model_directory import ModelDirectory
+from azureml.designer.modules.recommendation.dnn.wide_and_deep.score. \
+ score_wide_and_deep_recommender import ScoreWideAndDeepRecommenderModule
+from azureml.designer.serving.dagengine.utils import decode_nan
+from azureml.designer.serving.dagengine.converter import create_dfd_from_dict
+
+model_path = os.path.join(os.getenv('AZUREML_MODEL_DIR'), 'trained_model_outputs')
+schema_file_path = Path(model_path) / '_schema.json'
+with open(schema_file_path) as fp:
+ schema_data = json.load(fp)
++
+def init():
+ global model
+ model = ModelDirectory.load(load_from_dir=model_path)
++
+def run(data):
+ data = json.loads(data)
+ input_entry = defaultdict(list)
+ for row in data:
+ for key, val in row.items():
+ input_entry[key].append(decode_nan(val))
+
+ data_frame_directory = create_dfd_from_dict(input_entry, schema_data)
+
+ # The parameter names can be inferred from Score Wide and Deep Recommender component parameters:
+ # convert the letters to lower cases and replace whitespaces to underscores.
+ score_params = dict(
+ trained_wide_and_deep_recommendation_model=model,
+ dataset_to_score=data_frame_directory,
+ training_data=None,
+ user_features=None,
+ item_features=None,
+ ################### Note #################
+ # Set 'Recommender prediction kind' parameter to enable item recommendation model
+ recommender_prediction_kind='Item Recommendation',
+ recommended_item_selection='From All Items',
+ maximum_number_of_items_to_recommend_to_a_user=5,
+ whether_to_return_the_predicted_ratings_of_the_items_along_with_the_labels='True')
+ result_dfd, = ScoreWideAndDeepRecommenderModule().run(**score_params)
+ result_df = result_dfd.data
+ return json.dumps(result_df.to_dict("list"))
+```
+
+For **Wide & Deep recommender** and **Vowpal Wabbit** models, you can configure the scoring mode parameter using the following methods:
+
+- The parameter names are the lowercase, underscore-separated forms of the parameter names for [Score Vowpal Wabbit Model](../algorithm-module-reference/score-vowpal-wabbit-model.md) and [Score Wide and Deep Recommender](../algorithm-module-reference/score-wide-and-deep-recommender.md).
+- Mode type parameter values are strings of the corresponding option names. For example, for **Recommender prediction kind** in the preceding code, the value can be `'Rating Prediction'` or `'Item Recommendation'`. Other values aren't allowed.
+
+For a trained **SVD recommender** model, the parameter names and values may be less obvious. You can use the following table to decide how to set the parameters.
+
+| Parameter name in [Score SVD Recommender](../algorithm-module-reference/score-svd-recommender.md) | Parameter name in the entry script file |
+| | |
+| Recommender prediction kind | prediction_kind |
+| Recommended item selection | recommended_item_selection |
+| Minimum size of the recommendation pool for a single user | min_recommendation_pool_size |
+| Maximum number of items to recommend to a user | max_recommended_item_count |
+| Whether to return the predicted ratings of the items along with the labels | return_ratings |
+
+The following code shows you how to set parameters for an SVD recommender, which uses all six parameters to recommend rated items with predicted ratings attached.
+
+```python
+score_params = dict(
+ learner=model,
+ test_data=DataTable.from_dfd(data_frame_directory),
+ training_data=None,
+ # RecommenderPredictionKind has 2 members, 'RatingPrediction' and 'ItemRecommendation'. You
+ # can specify prediction_kind parameter with one of them.
+ prediction_kind=RecommenderPredictionKind.ItemRecommendation,
+ # RecommendedItemSelection has 3 members, 'FromAllItems', 'FromRatedItems', 'FromUndatedItems'.
+ # You can specify recommended_item_selection parameter with one of them.
+ recommended_item_selection=RecommendedItemSelection.FromRatedItems,
+ min_recommendation_pool_size=1,
+ max_recommended_item_count=3,
+ return_ratings=True,
+ )
+```
++
+## Next steps
+
+* [Train a model in the designer](../tutorial-designer-automobile-price-train-score.md)
+* [Deploy models with Azure Machine Learning SDK](how-to-deploy-and-where.md)
+* [Troubleshoot a failed deployment](how-to-troubleshoot-deployment.md)
+* [Deploy to Azure Kubernetes Service](how-to-deploy-azure-kubernetes-service.md)
+* [Create client applications to consume web services](how-to-consume-web-service.md)
+* [Update web service](how-to-deploy-update-web-service.md)
machine-learning How To Deploy Package Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-deploy-package-models.md
+
+ Title: Package models
+
+description: 'Package a model. Models can be packaged as either a docker image, which you can then download, or you can create a Dockerfile and use it to build the image.'
+ Last updated : 10/21/2021
+# How to package a registered model with Docker
+
+This article shows how to package a registered Azure Machine Learning model with Docker.
+
+## Prerequisites
+
+This article assumes you have already trained and registered a model in your machine learning workspace. To learn how to train and register a scikit-learn model, [follow this tutorial](../how-to-train-scikit-learn.md).
++
+## Package models
+
+In some cases, you might want to create a Docker image without deploying the model. Or you might want to download the image and run it on a local Docker installation. You might even want to download the files used to build the image, inspect them, modify them, and build the image manually.
+
+Model packaging enables you to do these things. It packages all the assets needed to host a model as a web service and allows you to download either a fully built Docker image or the files needed to build one. There are two ways to use model packaging:
+
+**Download a packaged model:** Download a Docker image that contains the model and other files needed to host it as a web service.
+
+**Generate a Dockerfile:** Download the Dockerfile, model, entry script, and other assets needed to build a Docker image. You can then inspect the files or make changes before you build the image locally.
+
+Both packages can be used to get a local Docker image.
+
+> [!TIP]
+> Creating a package is similar to deploying a model. You use a registered model and an inference configuration.
+
+> [!IMPORTANT]
+> To download a fully built image or build an image locally, you need to have [Docker](https://www.docker.com) installed in your development environment.
+
+### Download a packaged model
+
+The following example builds an image, which is registered in the Azure container registry for your workspace:
+
+```python
+package = Model.package(ws, [model], inference_config)
+package.wait_for_creation(show_output=True)
+```
+
+After you create a package, you can use `package.pull()` to pull the image to your local Docker environment. The output of this command will display the name of the image. For example:
+
+`Status: Downloaded newer image for myworkspacef78fd10.azurecr.io/package:20190822181338`.
+
+After you download the model, use the `docker images` command to list the local images:
+
+```text
+REPOSITORY TAG IMAGE ID CREATED SIZE
+myworkspacef78fd10.azurecr.io/package 20190822181338 7ff48015d5bd 4 minutes ago 1.43 GB
+```
+
+To start a local container based on this image, use the following command to start a named container from the shell or command line. Replace the `<imageid>` value with the image ID returned by the `docker images` command.
+
+```bash
+docker run -p 6789:5001 --name mycontainer <imageid>
+```
+
+This command starts a container from the image that you downloaded. It maps local port 6789 to the port in the container on which the web service is listening (5001). It also assigns the name `mycontainer` to the container, which makes the container easier to stop. After the container is started, you can submit requests to `http://localhost:6789/score`.
+
+### Generate a Dockerfile and dependencies
+
+The following example shows how to download the Dockerfile, model, and other assets needed to build an image locally. The `generate_dockerfile=True` parameter indicates that you want the files, not a fully built image.
+
+```python
+package = Model.package(ws, [model], inference_config, generate_dockerfile=True)
+package.wait_for_creation(show_output=True)
+# Download the package.
+package.save("./imagefiles")
+# Get the Azure container registry that the model/Dockerfile uses.
+acr=package.get_container_registry()
+print("Address:", acr.address)
+print("Username:", acr.username)
+print("Password:", acr.password)
+```
+
+This code downloads the files needed to build the image to the `imagefiles` directory. The Dockerfile included in the saved files references a base image stored in an Azure container registry. When you build the image on your local Docker installation, you need to use the address, user name, and password to authenticate to the registry. Use the following steps to build the image by using a local Docker installation:
+
+1. From a shell or command-line session, use the following command to authenticate Docker with the Azure container registry. Replace `<address>`, `<username>`, and `<password>` with the values retrieved by `package.get_container_registry()`.
+
+ ```bash
+ docker login <address> -u <username> -p <password>
+ ```
+
+2. To build the image, use the following command. Replace `<imagefiles>` with the path of the directory where `package.save()` saved the files.
+
+ ```bash
+ docker build --tag myimage <imagefiles>
+ ```
+
+ This command sets the image name to `myimage`.
+
+To verify that the image is built, use the `docker images` command. You should see the `myimage` image in the list:
+
+```text
+REPOSITORY TAG IMAGE ID CREATED SIZE
+<none> <none> 2d5ee0bf3b3b 49 seconds ago 1.43 GB
+myimage latest 739f22498d64 3 minutes ago 1.43 GB
+```
+
+To start a new container based on this image, use the following command:
+
+```bash
+docker run -p 6789:5001 --name mycontainer myimage:latest
+```
+
+This command starts the latest version of the image named `myimage`. It maps local port 6789 to the port in the container on which the web service is listening (5001). It also assigns the name `mycontainer` to the container, which makes the container easier to stop. After the container is started, you can submit requests to `http://localhost:6789/score`.
+
+### Example client to test the local container
+
+The following code is an example of a Python client that can be used with the container:
+
+```python
+import requests
+import json
+
+# URL for the web service.
+scoring_uri = 'http://localhost:6789/score'
+
+# Two sets of data to score, so we get two results back.
+data = {"data":
+ [
+ [ 1,2,3,4,5,6,7,8,9,10 ],
+ [ 10,9,8,7,6,5,4,3,2,1 ]
+ ]
+ }
+# Convert to JSON string.
+input_data = json.dumps(data)
+
+# Set the content type.
+headers = {'Content-Type': 'application/json'}
+
+# Make the request and display the response.
+resp = requests.post(scoring_uri, input_data, headers=headers)
+print(resp.text)
+```
+
+For example clients in other programming languages, see [Consume models deployed as web services](how-to-consume-web-service.md).
+
+### Stop the Docker container
+
+To stop the container, use the following command from a different shell or command line:
+
+```bash
+docker kill mycontainer
+```
+
+## Next steps
+
+* [Troubleshoot a failed deployment](how-to-troubleshoot-deployment.md)
+* [Deploy to Azure Kubernetes Service](how-to-deploy-azure-kubernetes-service.md)
+* [Create client applications to consume web services](how-to-consume-web-service.md)
+* [Update web service](how-to-deploy-update-web-service.md)
+* [How to deploy a model using a custom Docker image](../how-to-deploy-custom-container.md)
+* [Use TLS to secure a web service through Azure Machine Learning](how-to-secure-web-service.md)
+* [Monitor your Azure Machine Learning models with Application Insights](how-to-enable-app-insights.md)
+* [Collect data for models in production](how-to-enable-data-collection.md)
+* [Create event alerts and triggers for model deployments](../how-to-use-event-grid.md)
machine-learning How To Deploy Profile Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-deploy-profile-model.md
az ml model profile -g <resource-group-name> -w <workspace-name> --inference-con
## Next steps
-* [Troubleshoot a failed deployment](../how-to-troubleshoot-deployment.md)
+* [Troubleshoot a failed deployment](how-to-troubleshoot-deployment.md)
* [Deploy to Azure Kubernetes Service](how-to-deploy-azure-kubernetes-service.md)
-* [Create client applications to consume web services](../how-to-consume-web-service.md)
-* [Update web service](../how-to-deploy-update-web-service.md)
+* [Create client applications to consume web services](how-to-consume-web-service.md)
+* [Update web service](how-to-deploy-update-web-service.md)
* [How to deploy a model using a custom Docker image](../how-to-deploy-custom-container.md)
-* [Use TLS to secure a web service through Azure Machine Learning](../how-to-secure-web-service.md)
+* [Use TLS to secure a web service through Azure Machine Learning](how-to-secure-web-service.md)
* [Monitor your Azure Machine Learning models with Application Insights](../how-to-enable-app-insights.md)
-* [Collect data for models in production](../how-to-enable-data-collection.md)
+* [Collect data for models in production](how-to-enable-data-collection.md)
* [Create event alerts and triggers for model deployments](../how-to-use-event-grid.md)
machine-learning How To Deploy Update Web Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-deploy-update-web-service.md
For more information, see the [az ml service update](/cli/azure/ml(v1)/service#a
## Next steps
-* [Troubleshoot a failed deployment](../how-to-troubleshoot-deployment.md)
-* [Create client applications to consume web services](../how-to-consume-web-service.md)
+* [Troubleshoot a failed deployment](how-to-troubleshoot-deployment.md)
+* [Create client applications to consume web services](how-to-consume-web-service.md)
* [How to deploy a model using a custom Docker image](../how-to-deploy-custom-container.md) * [Use TLS to secure a web service through Azure Machine Learning](how-to-secure-web-service.md)
-* [Monitor your Azure Machine Learning models with Application Insights](../how-to-enable-app-insights.md)
-* [Collect data for models in production](../how-to-enable-data-collection.md)
+* [Monitor your Azure Machine Learning models with Application Insights](how-to-enable-app-insights.md)
+* [Collect data for models in production](how-to-enable-data-collection.md)
* [Create event alerts and triggers for model deployments](../how-to-use-event-grid.md)
machine-learning How To Enable App Insights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-enable-app-insights.md
Use the following steps to update an existing web service:
### Log custom traces in your service > [!IMPORTANT]
-> Azure Application Insights only logs payloads of up to 64kb. If this limit is reached, you may see errors such as out of memory, or no information may be logged. If the data you want to log is larger 64kb, you should instead store it to blob storage using the information in [Collect Data for models in production](../how-to-enable-data-collection.md).
+> Azure Application Insights only logs payloads of up to 64kb. If this limit is reached, you may see errors such as out of memory, or no information may be logged. If the data you want to log is larger than 64kb, you should instead store it in blob storage by using the information in [Collect Data for models in production](how-to-enable-data-collection.md).
> > For more complex situations, like model tracking within an AKS deployment, we recommend using a third-party library like [OpenCensus](https://opencensus.io).
machine-learning How To Enable Data Collection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-enable-data-collection.md
+
+ Title: Collect data on your production models
+
+description: Learn how to collect data from an Azure Machine Learning model deployed on an Azure Kubernetes Service (AKS) cluster.
+ Last updated : 08/16/2022
+# Collect data from models in production
++
+This article shows how to collect data from an Azure Machine Learning model deployed on an Azure Kubernetes Service (AKS) cluster. The collected data is then stored in Azure Blob storage.
+
+Once collection is enabled, the data you collect helps you:
+
+* [Monitor data drifts](../how-to-monitor-datasets.md) on the production data you collect.
+
+* Analyze collected data using [Power BI](#powerbi) or [Azure Databricks](#databricks)
+
+* Make better decisions about when to retrain or optimize your model.
+
+* Retrain your model with the collected data.
+
+## What is collected and where it goes
+
+The following data can be collected:
+
+* Model input data from web services deployed in an AKS cluster. Voice audio, images, and video are *not* collected.
+
+* Model predictions using production input data.
+
+>[!NOTE]
+> Preaggregation and precalculations on this data are not currently part of the collection service.
+
+The output is saved in Blob storage. Because the data is added to Blob storage, you can choose your favorite tool to run the analysis.
+
+The path to the output data in the blob follows this syntax:
+
+```
+/modeldata/<subscriptionid>/<resourcegroup>/<workspace>/<webservice>/<model>/<version>/<designation>/<year>/<month>/<day>/data.csv
+# example: /modeldata/1a2b3c4d-5e6f-7g8h-9i10-j11k12l13m14/myresourcegrp/myWorkspace/aks-w-collv9/best_model/10/inputs/2018/12/31/data.csv
+```
+
+>[!NOTE]
+> In versions of the Azure Machine Learning SDK for Python earlier than version 0.1.0a16, the `designation` argument is named `identifier`. If you developed your code with an earlier version, you need to update it accordingly.
+
+## Prerequisites
+
+- If you don't have an Azure subscription, create a
+ [free account](https://azure.microsoft.com/free/) before you begin.
+
+- An Azure Machine Learning workspace, a local directory containing your scripts, and the Azure Machine Learning SDK for Python must be installed. To learn how to install them, see [How to configure a development environment](../how-to-configure-environment.md).
+
+- You need a trained machine-learning model to be deployed to AKS. If you don't have a model, see the [Train image classification model](../tutorial-train-deploy-notebook.md) tutorial.
+
+- You need an AKS cluster. For information on how to create one and deploy to it, see [How to deploy and where](how-to-deploy-and-where.md).
+
+- [Set up your environment](../how-to-configure-environment.md) and install the [Azure Machine Learning Monitoring SDK](/python/api/overview/azure/ml/install).
+
+- Use a docker image based on Ubuntu 18.04, which is shipped with `libssl 1.0.0`, the essential dependency of [modeldatacollector](/python/api/azureml-monitoring/azureml.monitoring.modeldatacollector.modeldatacollector). You can refer to [prebuilt images](/azure/machine-learning/concept-prebuilt-docker-images-inference).
+
+## Enable data collection
+
+You can enable [data collection](/python/api/azureml-monitoring/azureml.monitoring.modeldatacollector.modeldatacollector) regardless of the model you deploy through Azure Machine Learning or other tools.
+
+To enable data collection, you need to:
+
+1. Open the scoring file.
+
+1. Add the following code at the top of the file:
+
+ ```python
+ from azureml.monitoring import ModelDataCollector
+ ```
+
+1. Declare your data collection variables in your `init` function:
+
+ ```python
+ global inputs_dc, prediction_dc
+ inputs_dc = ModelDataCollector("best_model", designation="inputs", feature_names=["feat1", "feat2", "feat3", "feat4", "feat5", "feat6"])
+ prediction_dc = ModelDataCollector("best_model", designation="predictions", feature_names=["prediction1", "prediction2"])
+ ```
+
+    *CorrelationId* is an optional parameter. You don't need to use it if your model doesn't require it. Using *CorrelationId* helps you more easily map the collected data to other data, such as *LoanNumber* or *CustomerId*.
+
+    The *designation* parameter (for example, `inputs` or `predictions`) is later used for building the folder structure in your blob. You can use it to differentiate raw data from processed data.
+
+1. Add the following lines of code to the `run(input_df)` function:
+
+ ```python
+ data = np.array(data)
+ result = model.predict(data)
+ inputs_dc.collect(data) #this call is saving our input data into Azure Blob
+ prediction_dc.collect(result) #this call is saving our prediction data into Azure Blob
+ ```
+
+1. Data collection is *not* automatically set to **true** when you deploy a service in AKS. Update your configuration file, as in the following example:
+
+ ```python
+ aks_config = AksWebservice.deploy_configuration(collect_model_data=True)
+ ```
+
+ You can also enable Application Insights for service monitoring by changing this configuration:
+
+ ```python
+ aks_config = AksWebservice.deploy_configuration(collect_model_data=True, enable_app_insights=True)
+ ```
+
+1. To create a new image and deploy the machine learning model, see [How to deploy and where](how-to-deploy-and-where.md).
+
+1. Add the `azureml-monitoring` pip package to the conda dependencies of the web service environment:
+ ```Python
+ env = Environment('webserviceenv')
+ env.python.conda_dependencies = CondaDependencies.create(conda_packages=['numpy'],pip_packages=['azureml-defaults','azureml-monitoring','inference-schema[numpy-support]'])
+ ```
++
+## Disable data collection
+
+You can stop collecting data at any time. Use Python code to disable data collection.
+
+ ```python
+ ## replace <service_name> with the name of the web service
+ <service_name>.update(collect_model_data=False)
+ ```
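+
+A slightly fuller sketch, assuming the model was deployed as an AKS web service and using a placeholder service name:
+
+```python
+from azureml.core import Workspace
+from azureml.core.webservice import AksWebservice
+
+ws = Workspace.from_config()
+service = AksWebservice(ws, "aks-w-collv9")  # placeholder service name
+
+# Turn off model data collection for this deployment.
+service.update(collect_model_data=False)
+```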
+
+## Validate and analyze your data
+
+You can choose a tool of your preference to analyze the data collected in your Blob storage.
+
+### Quickly access your blob data
+
+1. Sign in to [Azure portal](https://portal.azure.com).
+
+1. Open your workspace.
+
+1. Select **Storage**.
+
+ [![Select the Storage option](./media/how-to-enable-data-collection/StorageLocation.png)](././media/how-to-enable-data-collection/StorageLocation.png#lightbox)
+
+1. Follow the path to the blob's output data with this syntax:
+
+ ```
+ /modeldata/<subscriptionid>/<resourcegroup>/<workspace>/<webservice>/<model>/<version>/<designation>/<year>/<month>/<day>/data.csv
+ # example: /modeldata/1a2b3c4d-5e6f-7g8h-9i10-j11k12l13m14/myresourcegrp/myWorkspace/aks-w-collv9/best_model/10/inputs/2018/12/31/data.csv
+ ```
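+
+    For example, you could download one of these files with the Azure CLI before analyzing it (a hedged sketch; the account name, key, and blob path are placeholders):
+
+    ```azurecli
+    az storage blob download --account-name <storage-account-name> --account-key <storage-account-key> --container-name modeldata --name "<subscriptionid>/<resourcegroup>/<workspace>/<webservice>/<model>/<version>/inputs/<year>/<month>/<day>/data.csv" --file data.csv
+    ```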
+
+### <a id="powerbi"></a> Analyze model data using Power BI
+
+1. Download and open [Power BI Desktop](https://www.powerbi.com).
+
+1. Select **Get Data** and select [**Azure Blob Storage**](/power-bi/desktop-data-sources).
+
+ [![Power BI blob setup](./media/how-to-enable-data-collection/PBIBlob.png)](././media/how-to-enable-data-collection/PBIBlob.png#lightbox)
+
+1. Add your storage account name and enter your storage key. You can find this information by selecting **Settings** > **Access keys** in your storage account.
+
+1. Select the **model data** container and select **Edit**.
+
+ [![Power BI Navigator](./media/how-to-enable-data-collection/pbiNavigator.png)](././media/how-to-enable-data-collection/pbiNavigator.png#lightbox)
+
+1. In the query editor, click under the **Name** column and add your storage account.
+
+1. Enter your model path into the filter. If you want to look only into files from a specific year or month, just expand the filter path. For example, to look only into March data, use this filter path:
+
+ /modeldata/\<subscriptionid>/\<resourcegroupname>/\<workspacename>/\<webservicename>/\<modelname>/\<modelversion>/\<designation>/\<year>/3
+
+1. Filter the data that is relevant to you based on **Name** values. If you stored predictions and inputs, you need to create a query for each.
+
+1. Select the downward double arrows next to the **Content** column heading to combine the files.
+
+ [![Power BI Content](./media/how-to-enable-data-collection/pbiContent.png)](././media/how-to-enable-data-collection/pbiContent.png#lightbox)
+
+1. Select **OK**. The data preloads.
+
+ [![Power BI Combine Files](./media/how-to-enable-data-collection/pbiCombine.png)](././media/how-to-enable-data-collection/pbiCombine.png#lightbox)
+
+1. Select **Close and Apply**.
+
+1. If you added inputs and predictions, your tables are automatically ordered by **RequestId** values.
+
+1. Start building your custom reports on your model data.
+
+### <a id="databricks"></a> Analyze model data using Azure Databricks
+
+1. Create an [Azure Databricks workspace](/azure/databricks/scenarios/quickstart-create-databricks-workspace-portal).
+
+1. Go to your Databricks workspace.
+
+1. In your Databricks workspace, select **Upload Data**.
+
+ [![Selecting Databricks Upload Data option](./media/how-to-enable-data-collection/dbupload.png)](././media/how-to-enable-data-collection/dbupload.png#lightbox)
+
+1. Select **Create New Table** and select **Other Data Sources** > **Azure Blob Storage** > **Create Table in Notebook**.
+
+ [![Databricks table creation](./media/how-to-enable-data-collection/dbtable.PNG)](././media/how-to-enable-data-collection/dbtable.PNG#lightbox)
+
+1. Update the location of your data. Here is an example:
+
+ ```
+ file_location = "wasbs://mycontainer@storageaccountname.blob.core.windows.net/modeldata/1a2b3c4d-5e6f-7g8h-9i10-j11k12l13m14/myresourcegrp/myWorkspace/aks-w-collv9/best_model/10/inputs/2018/*/*/data.csv"
+ file_type = "csv"
+ ```
+
+ [![Databricks setup](./media/how-to-enable-data-collection/dbsetup.png)](././media/how-to-enable-data-collection/dbsetup.png#lightbox)
+
+1. Follow the steps on the template to view and analyze your data.
+
+## Next steps
+
+[Detect data drift](../how-to-monitor-datasets.md) on the data you have collected.
machine-learning How To Manage Workspace Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-manage-workspace-cli.md
+
+ Title: Create workspaces with Azure CLI extension v1
+
+description: Learn how to use the Azure CLI extension v1 for machine learning to create a new Azure Machine Learning workspace.
++++++ Last updated : 08/12/2022++++
+# Manage Azure Machine Learning workspaces using Azure CLI extension v1
++
+> [!div class="op_single_selector" title1="Select the version of Azure Machine Learning SDK or CLI extension you are using:"]
+> * [v1](how-to-manage-workspace-cli.md)
+> * [v2 (current version)](../how-to-manage-workspace-cli.md)
++
+In this article, you learn how to create and manage Azure Machine Learning workspaces using the Azure CLI. The Azure CLI provides commands for managing Azure resources and is designed to get you working quickly with Azure, with an emphasis on automation. The machine learning extension for the CLI provides commands for working with Azure Machine Learning resources.
+
+## Prerequisites
+
+* An **Azure subscription**. If you don't have one, try the [free or paid version of Azure Machine Learning](https://azure.microsoft.com/free/).
+
+* To use the CLI commands in this document from your **local environment**, you need the [Azure CLI](/cli/azure/install-azure-cli).
+
+ If you use the [Azure Cloud Shell](https://azure.microsoft.com//features/cloud-shell/), the CLI is accessed through the browser and lives in the cloud.
+
+## Limitations
+++
+### Secure CLI communications
+
+Some of the Azure CLI commands communicate with Azure Resource Manager over the internet. This communication is secured using HTTPS/TLS 1.2.
+
+With the Azure Machine Learning CLI extension v1 (`azure-cli-ml`), only some of the commands communicate with the Azure Resource Manager. Specifically, commands that create, update, delete, list, or show Azure resources. Operations such as submitting a training job communicate directly with the Azure Machine Learning workspace. **If your workspace is [secured with a private endpoint](../how-to-configure-private-link.md), that is enough to secure commands provided by the `azure-cli-ml` extension**.
++
+## Connect the CLI to your Azure subscription
+
+> [!IMPORTANT]
+> If you are using the Azure Cloud Shell, you can skip this section. The cloud shell automatically authenticates you using the account you use to sign in to your Azure subscription.
+
+There are several ways that you can authenticate to your Azure subscription from the CLI. The simplest is to authenticate interactively using a browser. To do so, open a command line or terminal and use the following command:
+
+```azurecli-interactive
+az login
+```
+
+If the CLI can open your default browser, it will do so and load a sign-in page. Otherwise, you need to open a browser and follow the instructions on the command line. The instructions involve browsing to [https://aka.ms/devicelogin](https://aka.ms/devicelogin) and entering an authorization code.
++
+For other methods of authenticating, see [Sign in with Azure CLI](/cli/azure/authenticate-azure-cli).
+
+## Create a resource group
+
+The Azure Machine Learning workspace must be created inside a resource group. You can use an existing resource group or create a new one. To __create a new resource group__, use the following command. Replace `<resource-group-name>` with the name to use for this resource group. Replace `<location>` with the Azure region to use for this resource group:
+
+> [!NOTE]
+> You should select a region where Azure Machine Learning is available. For information, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=machine-learning-service).
+
+```azurecli-interactive
+az group create --name <resource-group-name> --location <location>
+```
+
+The response from this command is similar to the following JSON. You can use the output values to locate the created resources or parse them as input to subsequent CLI steps for automation.
+
+```json
+{
+ "id": "/subscriptions/<subscription-GUID>/resourceGroups/<resourcegroupname>",
+ "location": "<location>",
+ "managedBy": null,
+ "name": "<resource-group-name>",
+ "properties": {
+ "provisioningState": "Succeeded"
+ },
+ "tags": null,
+ "type": null
+}
+```
+
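+The same command can also capture values for later automation steps. For example, a hedged sketch that stores the resource group ID in a variable:
+
+```azurecli-interactive
+# Store the resource group ID for use in subsequent commands
+rgId=$(az group create --name <resource-group-name> --location <location> --query id --output tsv)
+echo $rgId
+```
+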
+For more information on working with resource groups, see [az group](/cli/azure/group).
+
+## Create a workspace
+
+When you deploy an Azure Machine Learning workspace, various other services are [required as dependent associated resources](../concept-workspace.md#resources). When you use the CLI to create the workspace, the CLI can either create new associated resources on your behalf or attach existing resources that you provide.
+
+> [!IMPORTANT]
+> When attaching your own storage account, make sure that it meets the following criteria:
+>
+> * The storage account is _not_ a premium account (Premium_LRS and Premium_GRS)
+> * Both Azure Blob and Azure File capabilities are enabled
+> * Hierarchical Namespace (ADLS Gen 2) is disabled
+>
+> These requirements apply only to the _default_ storage account used by the workspace.
+>
+> When attaching an Azure Container Registry, you must have the [admin account](/azure/container-registry/container-registry-authentication#admin-account) enabled before the registry can be used with an Azure Machine Learning workspace.
+
+# [Create with new resources](#tab/createnewresources)
+
+To create a new workspace where the __services are automatically created__, use the following command:
+
+```azurecli-interactive
+az ml workspace create -w <workspace-name> -g <resource-group-name>
+```
+
+# [Bring existing resources](#tab/bringexistingresources1)
+
+To create a workspace that uses existing resources, you must provide the resource ID for each resource. You can get this ID either from the **Properties** tab of each resource in the Azure portal, or by running the following commands using the Azure CLI.
+
+ * **Azure Storage Account**:
+ `az storage account show --name <storage-account-name> --query "id"`
+ * **Azure Application Insights**:
+ `az monitor app-insights component show --app <application-insight-name> -g <resource-group-name> --query "id"`
+ * **Azure Key Vault**:
+    `az keyvault show --name <key-vault-name> --query "id"`
+ * **Azure Container Registry**:
+ `az acr show --name <acr-name> -g <resource-group-name> --query "id"`
+
+ The returned resource ID has the following format: `"/subscriptions/<service-GUID>/resourceGroups/<resource-group-name>/providers/<provider>/<subresource>/<resource-name>"`.
+
+Once you have the IDs for the resource(s) that you want to use with the workspace, use the base `az ml workspace create -w <workspace-name> -g <resource-group-name>` command and add the parameter(s) and ID(s) for the existing resources. For example, the following command creates a workspace that uses an existing container registry:
+
+```azurecli-interactive
+az ml workspace create -w <workspace-name>
+ -g <resource-group-name>
+ --container-registry "/subscriptions/<service-GUID>/resourceGroups/<resource-group-name>/providers/Microsoft.ContainerRegistry/registries/<acr-name>"
+```
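+
+You can combine several existing resources in one command. The following hedged sketch assumes the parameter names `--storage-account`, `--keyvault`, and `--application-insights` from the CLI extension v1 reference; verify them with `az ml workspace create --help` before use:
+
+```azurecli-interactive
+az ml workspace create -w <workspace-name>
+                       -g <resource-group-name>
+                       --storage-account "/subscriptions/<service-GUID>/resourceGroups/<resource-group-name>/providers/Microsoft.Storage/storageAccounts/<storage-account-name>"
+                       --keyvault "/subscriptions/<service-GUID>/resourceGroups/<resource-group-name>/providers/Microsoft.KeyVault/vaults/<key-vault-name>"
+                       --application-insights "/subscriptions/<service-GUID>/resourceGroups/<resource-group-name>/providers/Microsoft.Insights/components/<application-insight-name>"
+```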
+++
+> [!IMPORTANT]
+> When you attach existing resources, you don't have to specify all of them; you can specify one or more. For example, you can specify an existing storage account and the workspace will create the other resources.
+
+The output of the workspace creation command is similar to the following JSON. You can use the output values to locate the created resources or parse them as input to subsequent CLI steps.
+
+```json
+{
+ "applicationInsights": "/subscriptions/<service-GUID>/resourcegroups/<resource-group-name>/providers/microsoft.insights/components/<application-insight-name>",
+ "containerRegistry": "/subscriptions/<service-GUID>/resourcegroups/<resource-group-name>/providers/microsoft.containerregistry/registries/<acr-name>",
+ "creationTime": "2019-08-30T20:24:19.6984254+00:00",
+ "description": "",
+ "friendlyName": "<workspace-name>",
+ "id": "/subscriptions/<service-GUID>/resourceGroups/<resource-group-name>/providers/Microsoft.MachineLearningServices/workspaces/<workspace-name>",
+ "identityPrincipalId": "<GUID>",
+ "identityTenantId": "<GUID>",
+ "identityType": "SystemAssigned",
+ "keyVault": "/subscriptions/<service-GUID>/resourcegroups/<resource-group-name>/providers/microsoft.keyvault/vaults/<key-vault-name>",
+ "location": "<location>",
+ "name": "<workspace-name>",
+ "resourceGroup": "<resource-group-name>",
+ "storageAccount": "/subscriptions/<service-GUID>/resourcegroups/<resource-group-name>/providers/microsoft.storage/storageaccounts/<storage-account-name>",
+ "type": "Microsoft.MachineLearningServices/workspaces",
+ "workspaceid": "<GUID>"
+}
+
+```
+
+## Advanced configurations
+### Configure workspace for private network connectivity
+
+Depending on your use case and organizational requirements, you can choose to configure Azure Machine Learning using private network connectivity. You can use the Azure CLI to deploy a workspace and a Private link endpoint for the workspace resource. For more information on using a private endpoint and virtual network (VNet) with your workspace, see [Virtual network isolation and privacy overview](../how-to-network-security-overview.md). For complex resource configurations, also refer to template based deployment options including [Azure Resource Manager](../how-to-create-workspace-template.md).
+
+If you want to restrict workspace access to a virtual network, you can use the following parameters as part of the `az ml workspace create` command or use the `az ml workspace private-endpoint` commands.
+
+```azurecli-interactive
+az ml workspace create -w <workspace-name>
+ -g <resource-group-name>
+ --pe-name "<pe name>"
+ --pe-auto-approval "<pe-autoapproval>"
+ --pe-resource-group "<pe name>"
+ --pe-vnet-name "<pe name>"
+ --pe-subnet-name "<pe name>"
+```
+
+* `--pe-name`: The name of the private endpoint that is created.
+* `--pe-auto-approval`: Whether private endpoint connections to the workspace should be automatically approved.
+* `--pe-resource-group`: The resource group to create the private endpoint in. Must be the same group that contains the virtual network.
+* `--pe-vnet-name`: The existing virtual network to create the private endpoint in.
+* `--pe-subnet-name`: The name of the subnet to create the private endpoint in. The default value is `default`.
+
+For more information on how to use these commands, see the [CLI reference pages](/cli/azure/ml(v1)/workspace).
+
+### Customer-managed key and high business impact workspace
+
+By default, metadata for the workspace is stored in an Azure Cosmos DB instance that Microsoft maintains. This data is encrypted using Microsoft-managed keys. Instead of using the Microsoft-managed key, you can also provide your own key. Doing so creates an extra set of resources in your Azure subscription to store your data.
+
+To learn more about the resources that are created when you bring your own key for encryption, see [Data encryption with Azure Machine Learning](../concept-data-encryption.md#azure-cosmos-db).
+
+Use the `--cmk-keyvault` parameter to specify the Azure Key Vault that contains the key, and `--resource-cmk-uri` to specify the URI of the key within the vault.
+
+To [limit the data that Microsoft collects](../concept-data-encryption.md#encryption-at-rest) on your workspace, you can additionally specify the `--hbi-workspace` parameter.
+
+```azurecli-interactive
+az ml workspace create -w <workspace-name>
+ -g <resource-group-name>
+ --cmk-keyvault "<cmk keyvault name>"
+ --resource-cmk-uri "<resource cmk uri>"
+ --hbi-workspace
+```
+
+> [!NOTE]
+> Authorize the __Machine Learning App__ (in Identity and Access Management) with contributor permissions on your subscription to manage the additional data encryption resources.
+
+> [!NOTE]
+> Azure Cosmos DB is __not__ used to store information such as model performance, information logged by experiments, or information logged from your model deployments. For more information on monitoring these items, see the [Monitoring and logging](concept-azure-machine-learning-architecture.md) section of the architecture and concepts article.
+
+> [!IMPORTANT]
+> Selecting high business impact can only be done when creating a workspace. You cannot change this setting after workspace creation.
+
+For more information on customer-managed keys and high business impact workspace, see [Enterprise security for Azure Machine Learning](../concept-data-encryption.md#encryption-at-rest).
+
+## Using the CLI to manage workspaces
+
+### Get workspace information
+
+To get information about a workspace, use the following command:
+
+```azurecli-interactive
+az ml workspace show -w <workspace-name> -g <resource-group-name>
+```
+
+### Update a workspace
+
+To update a workspace, use the following command:
+
+```azurecli-interactive
+az ml workspace update -n <workspace-name> -g <resource-group-name>
+```
+
+### Sync keys for dependent resources
+
+If you change access keys for one of the resources used by your workspace, it takes around an hour for the workspace to synchronize to the new key. To force the workspace to sync the new keys immediately, use the following command:
+
+```azurecli-interactive
+az ml workspace sync-keys -w <workspace-name> -g <resource-group-name>
+```
+
+For more information on changing keys, see [Regenerate storage access keys](../how-to-change-storage-access-key.md).
+
+### Delete a workspace
++
+To delete a workspace after it's no longer needed, use the following command:
+
+```azurecli-interactive
+az ml workspace delete -w <workspace-name> -g <resource-group-name>
+```
+
+> [!IMPORTANT]
+> Deleting a workspace does not delete the Application Insights instance, storage account, key vault, or container registry used by the workspace.
+
+You can also delete the resource group, which deletes the workspace and all other Azure resources in the resource group. To delete the resource group, use the following command:
+
+```azurecli-interactive
+az group delete -g <resource-group-name>
+```
+
+If you accidentally deleted your workspace, you are still able to retrieve your notebooks. For more information, see the [workspace deletion](../how-to-high-availability-machine-learning.md#workspace-deletion) section of the disaster recovery article.
+
+## Troubleshooting
+
+### Resource provider errors
++
+### Moving the workspace
+
+> [!WARNING]
+> Moving your Azure Machine Learning workspace to a different subscription, or moving the owning subscription to a new tenant, is not supported. Doing so may cause errors.
+
+### Deleting the Azure Container Registry
+
+The Azure Machine Learning workspace uses Azure Container Registry (ACR) for some operations. It will automatically create an ACR instance when it first needs one.
++
+## Next steps
+
+For more information on the Azure CLI extension for machine learning, see the [az ml](/cli/azure/ml(v1)) (v1) documentation.
+
+To check for problems with your workspace, see [How to use workspace diagnostics](../how-to-workspace-diagnostic-api.md).
+
+To learn how to move a workspace to a new Azure subscription, see [How to move a workspace](../how-to-move-workspace.md).
+
+For information on how to keep your Azure ML up to date with the latest security updates, see [Vulnerability management](../concept-vulnerability-management.md).
machine-learning How To Secure Inferencing Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-secure-inferencing-vnet.md
To add AKS in a virtual network to your workspace, use the following steps:
1. When you deploy a model as a web service to AKS, a scoring endpoint is created to handle inferencing requests. Make sure that the network security group (NSG) that controls the virtual network has an inbound security rule enabled for the IP address of the scoring endpoint if you want to call it from outside the virtual network.
- To find the IP address of the scoring endpoint, look at the scoring URI for the deployed service. For information on viewing the scoring URI, see [Consume a model deployed as a web service](../how-to-consume-web-service.md#connection-information).
+ To find the IP address of the scoring endpoint, look at the scoring URI for the deployed service. For information on viewing the scoring URI, see [Consume a model deployed as a web service](how-to-consume-web-service.md#connection-information).
> [!IMPORTANT] > Keep the default outbound rules for the NSG. For more information, see the default security rules in [Security groups](../../virtual-network/network-security-groups-overview.md#default-security-rules).
machine-learning How To Secure Web Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-secure-web-service.md
aks_target.update(update_config)
## Next steps Learn how to:
-+ [Consume a machine learning model deployed as a web service](../how-to-consume-web-service.md)
++ [Consume a machine learning model deployed as a web service](how-to-consume-web-service.md) + [Virtual network isolation and privacy overview](../how-to-network-security-overview.md) + [How to use your workspace with a custom DNS server](../how-to-custom-dns.md)
machine-learning How To Setup Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-setup-authentication.md
can require two-factor authentication, or allow sign in only from managed device
## Next steps * [How to use secrets in training](../how-to-use-secrets-in-runs.md).
-* [How to configure authentication for models deployed as a web service](../how-to-authenticate-web-service.md).
-* [Consume an Azure Machine Learning model deployed as a web service](../how-to-consume-web-service.md).
+* [How to configure authentication for models deployed as a web service](how-to-authenticate-web-service.md).
+* [Consume an Azure Machine Learning model deployed as a web service](how-to-consume-web-service.md).
machine-learning How To Track Monitor Analyze Runs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-track-monitor-analyze-runs.md
This article shows how to do the following tasks:
> [!TIP] > If you're looking for information on monitoring the Azure Machine Learning service and associated Azure services, see [How to monitor Azure Machine Learning](../monitor-azure-machine-learning.md).
-> If you're looking for information on monitoring models deployed as web services, see [Collect model data](../how-to-enable-data-collection.md) and [Monitor with Application Insights](../how-to-enable-app-insights.md).
+> If you're looking for information on monitoring models deployed as web services, see [Collect model data](how-to-enable-data-collection.md) and [Monitor with Application Insights](../how-to-enable-app-insights.md).
## Prerequisites
machine-learning How To Troubleshoot Deployment Local https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-troubleshoot-deployment-local.md
+
+ Title: Troubleshooting local model deployment
+
+description: Try a local model deployment as a first step in troubleshooting model deployment errors.
+++++ Last updated : 08/15/2022++
+#Customer intent: As a data scientist, I want to try a local deployment so that I can troubleshoot my model deployment problems.
++
+# Troubleshooting with a local model deployment
+
+Try a local model deployment as a first step in troubleshooting deployment to Azure Container Instances (ACI) or Azure Kubernetes Service (AKS). Using a local web service makes it easier to spot and fix common Azure Machine Learning Docker web service deployment errors.
+
+## Prerequisites
+
+* An **Azure subscription**. Try the [free or paid version of Azure Machine Learning](https://azure.microsoft.com/free/).
+* Option A (**Recommended**) - Debug locally on Azure Machine Learning Compute Instance
+ * An Azure Machine Learning Workspace with [compute instance](how-to-deploy-local-container-notebook-vm.md) running
+* Option B - Debug locally on your compute
+ * The [Azure Machine Learning SDK](/python/api/overview/azure/ml/install).
+ * The [Azure CLI](/cli/azure/install-azure-cli).
+ * The [CLI extension for Azure Machine Learning](reference-azure-machine-learning-cli.md).
+ * Have a working Docker installation on your local system.
+ * To verify your Docker installation, use the command `docker run hello-world` from a terminal or command prompt. For information on installing Docker, or troubleshooting Docker errors, see the [Docker Documentation](https://docs.docker.com/).
+* Option C - Enable local debugging with Azure Machine Learning inference HTTP server.
+ * The Azure Machine Learning inference HTTP server [(preview)](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) is a Python package that allows you to easily validate your entry script (`score.py`) in a local development environment. If there's a problem with the scoring script, the server will return an error. It will also return the location where the error occurred.
+    * The server can also be used when creating validation gates in a continuous integration and deployment pipeline. For example, start the server with the candidate script and run the test suite against the local endpoint.
+
+## Azure Machine learning inference HTTP server
+
+The local inference server allows you to quickly debug your entry script (`score.py`). If the underlying scoring script has a bug, the server fails to initialize or serve the model. Instead, it returns an exception along with the location where the issue occurred. A minimal entry script sketch is shown after the following steps. [Learn more about the Azure Machine Learning inference HTTP server](../how-to-inference-server-http.md)
+
+1. Install the `azureml-inference-server-http` package from the [pypi](https://pypi.org/) feed:
+
+ ```bash
+ python -m pip install azureml-inference-server-http
+ ```
+
+2. Start the server and set `score.py` as the entry script:
+
+ ```bash
+ azmlinfsrv --entry_script score.py
+ ```
+
+3. Send a scoring request to the server using `curl`:
+
+ ```bash
+ curl -p 127.0.0.1:5001/score
+ ```
+> [!NOTE]
+> See the [**frequently asked questions**](../how-to-inference-server-http.md#frequently-asked-questions) about the Azure Machine Learning inference HTTP server.
+
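+A minimal entry script sketch for validating the server wiring might look like the following. The model-loading logic is omitted and the payload handling is illustrative:
+
+```python
+# score.py - minimal sketch used only to validate the local inference server
+import json
+
+def init():
+    # In a real entry script, load the model here (for example, via Model.get_model_path)
+    pass
+
+def run(raw_data):
+    # Parse the request payload and echo it back to confirm the server responds
+    data = json.loads(raw_data)
+    return {"received": data}
+```
+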
+## Debug locally
+
+You can find a sample [local deployment notebook](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/deployment/deploy-to-local/register-model-deploy-local.ipynb) in the [MachineLearningNotebooks](https://github.com/Azure/MachineLearningNotebooks) repo to explore a runnable example.
+
+> [!WARNING]
+> Local web service deployments are not supported for production scenarios.
+
+To deploy locally, modify your code to use `LocalWebservice.deploy_configuration()` to create a deployment configuration. Then use `Model.deploy()` to deploy the service. The following example deploys a model (contained in the `model` variable) as a local web service:
++
+```python
+from azureml.core.environment import Environment
+from azureml.core.model import InferenceConfig, Model
+from azureml.core.webservice import LocalWebservice
++
+# Create inference configuration based on the environment definition and the entry script
+myenv = Environment.from_conda_specification(name="env", file_path="myenv.yml")
+inference_config = InferenceConfig(entry_script="score.py", environment=myenv)
+# Create a local deployment, using port 8890 for the web service endpoint
+deployment_config = LocalWebservice.deploy_configuration(port=8890)
+# Deploy the service
+service = Model.deploy(
+ ws, "mymodel", [model], inference_config, deployment_config)
+# Wait for the deployment to complete
+service.wait_for_deployment(True)
+# Display the port that the web service is available on
+print(service.port)
+```
+
+If you're defining your own conda specification YAML, list `azureml-defaults` version 1.0.45 or later as a pip dependency. This package is needed to host the model as a web service.
+
+At this point, you can work with the service as normal. The following code demonstrates sending data to the service:
+
+```python
+import json
+
+test_sample = json.dumps({'data': [
+ [1, 2, 3, 4, 5, 6, 7, 8, 9, 10],
+ [10, 9, 8, 7, 6, 5, 4, 3, 2, 1]
+]})
+
+test_sample = bytes(test_sample, encoding='utf8')
+
+prediction = service.run(input_data=test_sample)
+print(prediction)
+```
+
+For more information on customizing your Python environment, see [Create and manage environments for training and deployment](how-to-use-environments.md).
+
+### Update the service
+
+During local testing, you may need to update the `score.py` file to add logging or attempt to resolve any problems that you've discovered. To reload changes to the `score.py` file, use `reload()`. For example, the following code reloads the script for the service, and then sends data to it. The data is scored using the updated `score.py` file:
+
+> [!IMPORTANT]
+> The `reload` method is only available for local deployments. For information on updating a deployment to another compute target, see [how to update your webservice](how-to-deploy-update-web-service.md).
+
+```python
+service.reload()
+print(service.run(input_data=test_sample))
+```
+
+> [!NOTE]
+> The script is reloaded from the location specified by the `InferenceConfig` object used by the service.
+
+To change the model, Conda dependencies, or deployment configuration, use [update()](/python/api/azureml-core/azureml.core.webservice%28class%29#update--args-). The following example updates the model used by the service:
+
+```python
+service.update([different_model], inference_config, deployment_config)
+```
+
+### Delete the service
+
+To delete the service, use [delete()](/python/api/azureml-core/azureml.core.webservice%28class%29#delete--).
+
+### <a id="dockerlog"></a> Inspect the Docker log
+
+You can print out detailed Docker engine log messages from the service object. You can view the log for ACI, AKS, and Local deployments. The following example demonstrates how to print the logs.
+
+```python
+# if you already have the service object handy
+print(service.get_logs())
+
+# if you only know the name of the service (note there might be multiple services with the same name but different version number)
+print(ws.webservices['mysvc'].get_logs())
+```
+
+If you see the line `Booting worker with pid: <pid>` occurring multiple times in the logs, it means there isn't enough memory to start the worker. You can address the error by increasing the value of `memory_gb` in `deployment_config`.
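+
+If the service is deployed to Azure Container Instances, a hedged sketch of requesting more memory might look like the following (the values are illustrative):
+
+```python
+from azureml.core.webservice import AciWebservice
+
+# Request more memory for the container so the worker can start
+deployment_config = AciWebservice.deploy_configuration(cpu_cores=1, memory_gb=4)
+```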
+
+## Next steps
+
+Learn more about deployment:
+
+* [How to troubleshoot remote deployments](how-to-troubleshoot-deployment.md)
+* [Azure Machine Learning inference HTTP Server](../how-to-inference-server-http.md)
+* [How to run and debug experiments locally](../how-to-debug-visual-studio-code.md)
machine-learning How To Troubleshoot Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-troubleshoot-deployment.md
+
+ Title: Troubleshooting remote model deployment
+
+description: Learn how to work around, solve, and troubleshoot some common Docker deployment errors with Azure Kubernetes Service and Azure Container Instances.
+++ Last updated : 08/15/2022++++
+#Customer intent: As a data scientist, I want to figure out why my model deployment fails so that I can fix it.
++
+# Troubleshooting remote model deployment
+
+Learn how to troubleshoot and solve, or work around, common errors you may encounter when deploying a model to Azure Container Instances (ACI) and Azure Kubernetes Service (AKS) using Azure Machine Learning.
+
+> [!NOTE]
+> If you are deploying a model to Azure Kubernetes Service (AKS), we advise you enable [Azure Monitor](/azure/azure-monitor/containers/container-insights-enable-existing-clusters) for that cluster. This will help you understand overall cluster health and resource usage. You might also find the following resources useful:
+>
+> * [Check for Resource Health events impacting your AKS cluster](/azure/aks/aks-resource-health)
+> * [Azure Kubernetes Service Diagnostics](/azure/aks/concepts-diagnostics)
+>
+> If you try to deploy a model to an unhealthy or overloaded cluster, you can expect to experience issues. If you need help troubleshooting AKS cluster problems, contact AKS Support.
+
+## Prerequisites
+
+* An **Azure subscription**. Try the [free or paid version of Azure Machine Learning](https://azure.microsoft.com/free/).
+* The [Azure Machine Learning SDK](/python/api/overview/azure/ml/install).
+* The [Azure CLI](/cli/azure/install-azure-cli).
+* The [CLI extension for Azure Machine Learning](reference-azure-machine-learning-cli.md).
+
+## Steps for Docker deployment of machine learning models
+
+When you deploy a model to non-local compute in Azure Machine Learning, the following things happen:
+
+1. The Dockerfile you specified in the `Environment` object of your `InferenceConfig` is sent to the cloud, along with the contents of your source directory.
+1. If a previously built image isn't available in your container registry, a new Docker image is built in the cloud and stored in your workspace's default container registry.
+1. The Docker image from your container registry is downloaded to your compute target.
+1. Your workspace's default Blob store is mounted to your compute target, giving you access to registered models.
+1. Your web server is initialized by running your entry script's `init()` function.
+1. When your deployed model receives a request, your `run()` function handles that request.
+
+The main difference when using a local deployment is that the container image is built on your local machine, which is why you need to have Docker installed for a local deployment.
+
+Understanding these high-level steps should help you understand where errors are happening.
+
+## Get deployment logs
+
+The first step in debugging errors is to get your deployment logs. First, follow the [instructions here to connect to your workspace](how-to-deploy-and-where.md#connect-to-your-workspace).
+
+# [Azure CLI](#tab/azcli)
++
+To get the logs from a deployed webservice, do:
+
+```azurecli
+az ml service get-logs --verbose --workspace-name <my workspace name> --name <service name>
+```
+
+# [Python](#tab/python)
++
+Assuming you have an object of type `azureml.core.Workspace` called `ws`, you can do the following:
+
+```python
+print(ws.webservices)
+
+# Choose the webservice you are interested in
+
+from azureml.core import Webservice
+
+service = Webservice(ws, '<insert name of webservice>')
+print(service.get_logs())
+```
+++
+## Debug locally
+
+If you have problems when deploying a model to ACI or AKS, deploy it as a local web service. Using a local web service makes it easier to troubleshoot problems. To troubleshoot a deployment locally, see the [local troubleshooting article](how-to-troubleshoot-deployment-local.md).
+
+## Azure Machine learning inference HTTP server
+
+The local inference server allows you to quickly debug your entry script (`score.py`). If the underlying scoring script has a bug, the server fails to initialize or serve the model. Instead, it returns an exception along with the location where the issue occurred. [Learn more about the Azure Machine Learning inference HTTP server](../how-to-inference-server-http.md)
+
+1. Install the `azureml-inference-server-http` package from the [pypi](https://pypi.org/) feed:
+
+ ```bash
+ python -m pip install azureml-inference-server-http
+ ```
+
+2. Start the server and set `score.py` as the entry script:
+
+ ```bash
+ azmlinfsrv --entry_script score.py
+ ```
+
+3. Send a scoring request to the server using `curl`:
+
+ ```bash
+ curl -p 127.0.0.1:5001/score
+ ```
+> [!NOTE]
+> See the [**frequently asked questions**](../how-to-inference-server-http.md#frequently-asked-questions) about the Azure Machine Learning inference HTTP server.
+
+## Container can't be scheduled
+
+When deploying a service to an Azure Kubernetes Service compute target, Azure Machine Learning will attempt to schedule the service with the requested amount of resources. If there are no nodes available in the cluster with the appropriate amount of resources after 5 minutes, the deployment will fail. The failure message is `Couldn't Schedule because the kubernetes cluster didn't have available resources after trying for 00:05:00`. You can address this error by either adding more nodes, changing the SKU of your nodes, or changing the resource requirements of your service.
+
+The error message typically indicates which resource you need more of. For instance, an error message indicating `0/3 nodes are available: 3 Insufficient nvidia.com/gpu` means that the service requires GPUs and none of the three nodes in the cluster has available GPUs. You can address this by adding more nodes if you're using a GPU SKU, switching to a GPU-enabled SKU if you aren't, or changing your environment to not require GPUs.
+
+## Service launch fails
+
+After the image is successfully built, the system attempts to start a container using your deployment configuration. As part of the container startup process, the system invokes the `init()` function in your scoring script. If there are uncaught exceptions in the `init()` function, you might see a **CrashLoopBackOff** error in the error message.
+
+Use the information in the [Inspect the Docker log](how-to-troubleshoot-deployment-local.md#dockerlog) section to troubleshoot the problem.
+
+## Container azureml-fe-aci launch fails
+
+When deploying a service to an Azure Container Instance compute target, Azure Machine Learning attempts to create a front-end container that has the name `azureml-fe-aci` for the inference request. If `azureml-fe-aci` crashes, you can see logs by running `az container logs --name MyContainerGroup --resource-group MyResourceGroup --subscription MySubscription --container-name azureml-fe-aci`. You can follow the error message in the logs to make the fix.
+
+The most common failure for `azureml-fe-aci` is that the provided SSL certificate or key is invalid.
+
+## Function fails: get_model_path()
+
+Often, the `init()` function in the scoring script calls the [Model.get_model_path()](/python/api/azureml-core/azureml.core.model.model#get-model-path-model-name--version-noneworkspace-none-) function to locate a model file or a folder of model files in the container. If the model file or folder can't be found, the function fails. The easiest way to debug this error is to run the following Python code in the container shell:
++
+```python
+from azureml.core.model import Model
+import logging
+logging.basicConfig(level=logging.DEBUG)
+print(Model.get_model_path(model_name='my-best-model'))
+```
+
+This example prints the local path (relative to `/var/azureml-app`) in the container where your scoring script is expecting to find the model file or folder. Then you can verify if the file or folder is indeed where it's expected to be.
+
+Setting the logging level to DEBUG may cause additional information to be logged, which may be useful in identifying the failure.
+
+## Function fails: run(input_data)
+
+If the service is successfully deployed, but it crashes when you post data to the scoring endpoint, you can add an error-catching statement in your `run(input_data)` function so that it returns a detailed error message instead. For example:
+
+```python
+def run(input_data):
+ try:
+ data = json.loads(input_data)['data']
+ data = np.array(data)
+ result = model.predict(data)
+ return json.dumps({"result": result.tolist()})
+ except Exception as e:
+ result = str(e)
+ # return error message back to the client
+ return json.dumps({"error": result})
+```
+
+**Note**: Returning error messages from the `run(input_data)` call should be done for debugging purposes only. For security reasons, you shouldn't return error messages this way in a production environment.
+
+## HTTP status code 502
+
+A 502 status code indicates that the service has thrown an exception or crashed in the `run()` method of the score.py file. Use the information in this article to debug the file.
+
+## HTTP status code 503
+
+Azure Kubernetes Service deployments support autoscaling, which allows replicas to be added to support extra load. The autoscaler is designed to handle **gradual** changes in load. If you receive large spikes in requests per second, clients may receive an HTTP status code 503. Even though the autoscaler reacts quickly, it takes AKS a significant amount of time to create more containers.
+
+Decisions to scale up or down are based on the utilization of the current container replicas. The number of replicas that are busy (processing a request) divided by the total number of current replicas is the current utilization. If this number exceeds `autoscale_target_utilization`, then more replicas are created. If it's lower, then replicas are reduced. Decisions to add replicas are eager and fast (around 1 second). Decisions to remove replicas are conservative (around 1 minute). By default, autoscaling target utilization is set to **70%**, which means that the service can handle spikes in requests per second (RPS) of **up to 30%**.
+
+There are two things that can help prevent 503 status codes:
+
+> [!TIP]
+> These two approaches can be used individually or in combination.
+
+* Change the utilization level at which autoscaling creates new replicas. You can adjust the utilization target by setting the `autoscale_target_utilization` to a lower value.
+
+ > [!IMPORTANT]
+ > This change does not cause replicas to be created *faster*. Instead, they are created at a lower utilization threshold. Instead of waiting until the service is 70% utilized, changing the value to 30% causes replicas to be created when 30% utilization occurs.
+
+ If the web service is already using the current max replicas and you're still seeing 503 status codes, increase the `autoscale_max_replicas` value to increase the maximum number of replicas.
+
+* Change the minimum number of replicas. Increasing the minimum replicas provides a larger pool to handle the incoming spikes.
+
+ To increase the minimum number of replicas, set `autoscale_min_replicas` to a higher value. You can calculate the required replicas by using the following code, replacing values with values specific to your project:
+
+ ```python
+ from math import ceil
+ # target requests per second
+ targetRps = 20
+ # time to process the request (in seconds)
+ reqTime = 10
+ # Maximum requests per container
+ maxReqPerContainer = 1
+ # target_utilization. 70% in this example
+ targetUtilization = .7
+
+ concurrentRequests = targetRps * reqTime / targetUtilization
+
+ # Number of container replicas
+ replicas = ceil(concurrentRequests / maxReqPerContainer)
+ ```
+
+ > [!NOTE]
+ > If you receive request spikes larger than the new minimum replicas can handle, you may receive 503s again. For example, as traffic to your service increases, you may need to increase the minimum replicas.
+
+For more information on setting `autoscale_target_utilization`, `autoscale_max_replicas`, and `autoscale_min_replicas`, see the [AksWebservice](/python/api/azureml-core/azureml.core.webservice.akswebservice) module reference.
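+
+For example, the following hedged sketch lowers the utilization target and raises the replica bounds (the values are illustrative):
+
+```python
+from azureml.core.webservice import AksWebservice
+
+# Scale out earlier and allow more replicas to absorb request spikes
+aks_config = AksWebservice.deploy_configuration(autoscale_enabled=True,
+                                                autoscale_target_utilization=0.3,
+                                                autoscale_min_replicas=2,
+                                                autoscale_max_replicas=10)
+```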
+
+## HTTP status code 504
+
+A 504 status code indicates that the request has timed out. The default timeout is 1 minute.
+
+You can increase the timeout or try to speed up the service by modifying the score.py to remove unnecessary calls. If these actions don't correct the problem, use the information in this article to debug the score.py file. The code may be in a non-responsive state or an infinite loop.
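+
+For example, a hedged sketch of raising the scoring timeout on an AKS deployment (the value is illustrative; see the `scoring_timeout_ms` parameter in the AksWebservice reference):
+
+```python
+from azureml.core.webservice import AksWebservice
+
+# Allow up to two minutes per scoring request instead of the one-minute default
+aks_config = AksWebservice.deploy_configuration(scoring_timeout_ms=120000)
+```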
+
+## Other error messages
+
+Take these actions for the following errors:
+
+|Error | Resolution |
+|||
+| 409 conflict error | When an operation is already in progress, any new operation on that same web service responds with a 409 conflict error. For example, if a create or update web service operation is in progress and you trigger a new delete operation, it returns an error. |
+|Image building failure when deploying web service | Add "pynacl==1.2.1" as a pip dependency to Conda file for image configuration |
+|`['DaskOnBatch:context_managers.DaskOnBatch', 'setup.py']' died with <Signals.SIGKILL: 9>` | Change the SKU for VMs used in your deployment to one that has more memory. |
+|FPGA failure | You won't be able to deploy models on FPGAs until you've requested and been approved for FPGA quota. To request access, fill out the quota request form: https://aka.ms/aml-real-time-ai |
++
+## Advanced debugging
+
+You may need to interactively debug the Python code contained in your model deployment, for example, if the entry script is failing and the reason can't be determined by extra logging. By using Visual Studio Code and debugpy, you can attach to the code running inside the Docker container.
+
+For more information, visit the [interactive debugging in VS Code guide](../how-to-debug-visual-studio-code.md#debug-and-troubleshoot-deployments).
+
+## [Model deployment user forum](/answers/topics/azure-machine-learning-inference.html)
+
+## Next steps
+
+Learn more about deployment:
+
+* [How to deploy and where](how-to-deploy-and-where.md)
+* [How to run and debug experiments locally](../how-to-debug-visual-studio-code.md)
machine-learning How To Use Mlflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-use-mlflow.md
The [MLflow with Azure ML notebooks](https://github.com/Azure/MachineLearningNot
## Next steps * [Deploy models with MLflow](how-to-deploy-mlflow-models.md).
-* Monitor your production models for [data drift](../how-to-enable-data-collection.md).
+* Monitor your production models for [data drift](how-to-enable-data-collection.md).
* [Track Azure Databricks runs with MLflow](../how-to-use-mlflow-azure-databricks.md). * [Manage your models](concept-model-management-and-deployment.md).
mariadb Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/policy-reference.md
Previously updated : 08/08/2022 Last updated : 08/16/2022 # Azure Policy built-in definitions for Azure Database for MariaDB
marketplace Azure Vm Get Sas Uri https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/azure-vm-get-sas-uri.md
Title: Generate a SAS URI for a VM image
-description: Generate a shared access signature (SAS) URI for a virtual hard disks (VHD) in Azure Marketplace.
+description: Generate a shared access signature (SAS) URI for a virtual hard disk (VHD) in Azure Marketplace.
Previously updated : 08/15/2022 Last updated : 08/16/2022 # Generate a SAS URI for a VM image > [!NOTE]
-> You don't need a SAS URI to publish your VM. You can simply share an image in Partner Center. Refer to [Create a virtual machine using an approved base](azure-vm-use-approved-base.md) or [Create a virtual machine using your own image](azure-vm-use-own-image.md) instructions.
+> A shared access signature (SAS) URI can be used to publish your virtual machine (VM). Alternatively, you can share an image in Partner Center via Azure Compute Gallery. Refer to [Create a virtual machine using an approved base](azure-vm-use-approved-base.md) or [Create a virtual machine using your own image](azure-vm-use-own-image.md) for further instructions.
-Generating SAS URIs for your VHDs has these requirements:
+Before getting started, you will need the following:
-- Only List and Read permissions are required. Don't provide Write or Delete access.-- The duration for access (expiry date) should be a minimum of three weeks from when the SAS URI is created.-- To protect against UTC time changes, set the start date to one day before the current date. For example, if the current date is June 16, 2020, select 6/15/2020.
+- A virtual machine
+- A [storage account](/azure/storage/common/storage-account-create?tabs=azure-portal) with a container for storing the virtual hard drive (VHD)
+- Your [storage account key](/azure/storage/common/storage-account-keys-manage?tabs=azure-portal#view-account-access-keys)
-## Extract vhd from a VM
+## Extract VHD from a VM
> [!NOTE]
-> You can skip this step if you already have a vhd uploaded in a Storage Account.
+> You can skip this step if you already have a VHD uploaded to a storage account.
-To extract the vhd from your VM, you need to take a snapshot of your VM disk and extract vhd from the snapshot.
+To extract the VHD from your VM, you need to first take a snapshot of your VM disk and then extract the VHD from the snapshot into your storage account.
-Start by taking a snapshot of the VM disk:
+### Take a snapshot of your VM disk
-1. Sign in to the Azure portal.
-2. Starting at the upper-left, select Create a resource, then search for and select Snapshot.
-3. In the Snapshot blade, select Create.
-4. Enter a Name for the snapshot.
-5. Select an existing resource group or enter the name for a new one.
-6. For Source disk, select the managed disk to snapshot.
-7. Select the Account type to use to store the snapshot. Use Standard HDD unless you need it stored on a high performing SSD.
-8. Select Create.
+1. Sign in to the [Azure portal](https://www.portal.azure.com/).
+1. Select **Create a resource**, then search for and select **Snapshot**.
+1. In the Snapshot blade, select **Create**.
+1. Select the **Subscription**. Select an existing resource group within the selected subscription or **Create new** and enter the name for a new resource group to be created. This is the resource group the snapshot will be associated to.
+1. Enter a **Name** for the snapshot.
+1. For **Source type**, select **Disk**.
+1. Select the **Source subscription**, which is the subscription that contains the VM disk. This may be different from the destination subscription of the new snapshot.
+1. For **Source disk**, select the managed disk to snapshot.
+1. For the **Storage type**, select **Standard HDD** unless you need it stored on a high performing SSD.
+1. Select **Review + Create**. Upon successful validation, select **Create**.
-### Extract the VHD
+### Extract the VHD into your storage account
-Use the following script to export the snapshot into a VHD in your storage account.
+Use the following script to export the snapshot into a VHD in your storage account. For each of the parameters, insert your information accordingly.
```azurecli #Provide the subscription Id where the snapshot is created
-$subscriptionId=yourSubscriptionId
+subscriptionId=yourSubscriptionId
#Provide the name of your resource group where the snapshot is created
-$resourceGroupName=myResourceGroupName
+resourceGroupName=myResourceGroupName
#Provide the snapshot name
-$snapshotName=mySnapshot
+snapshotName=mySnapshot
#Provide Shared Access Signature (SAS) expiry duration in seconds (such as 3600) #Know more about SAS here: https://docs.microsoft.com/azure/storage/storage-dotnet-shared-access-signature-part-1
-$sasExpiryDuration=3600
+sasExpiryDuration=3600
-#Provide storage account name where you want to copy the underlying VHD file. Currently, only general purpose v1 storage is supported.
-$storageAccountName=mystorageaccountname
+#Provide storage account name where you want to copy the underlying VHD file.
+storageAccountName=mystorageaccountname
#Name of the storage container where the downloaded VHD will be stored.
-$storageContainerName=mystoragecontainername
+storageContainerName=mystoragecontainername
-#Provide the key of the storage account where you want to copy the VHD
-$storageAccountKey=mystorageaccountkey
+#Provide the access key for the storage account that you want to copy the VHD to.
+storageAccountKey=mystorageaccountkey
#Give a name to the destination VHD file to which the VHD will be copied.
-$destinationVHDFileName=myvhdfilename.vhd
+destinationVHDFileName=myvhdfilename.vhd
az account set --subscription $subscriptionId
-$sas=$(az snapshot grant-access --resource-group $resourceGroupName --name $snapshotName --duration-in-seconds $sasExpiryDuration --query [accessSas] -o tsv)
+sas=$(az snapshot grant-access --resource-group $resourceGroupName --name $snapshotName --duration-in-seconds $sasExpiryDuration --query [accessSas] -o tsv)
az storage blob copy start --destination-blob $destinationVHDFileName --destination-container $storageContainerName --account-name $storageAccountName --account-key $storageAccountKey --source-uri $sas ```
-### Script explanation
-This script uses following commands to generate the SAS URI for a snapshot and copies the underlying VHD to a storage account using the SAS URI. Each command in the table links to command specific documentation.
+The script above uses the following commands to generate the SAS URI for a snapshot and copy the underlying VHD to a storage account using the SAS URI.
|Command |Notes | |||
-| az disk grant-access | Generates read-only SAS that is used to copy the underlying VHD file to a storage account or download it to on-premises |
-| az storage blob copy start | Copies a blob asynchronously from one storage account to another. Use az storage blob show to check the status of the new blob. |
+| az snapshot grant-access | Generates a read-only SAS that is used to copy the underlying VHD file to a storage account or download it to on-premises. |
+| az storage blob copy start | Copies a blob asynchronously from one storage account to another. Use [az storage blob show](/cli/azure/storage/blob#az-storage-blob-show) to check the status of the new blob. |
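+
+For example, a hedged sketch that checks the copy status by reusing the variables from the script above:
+
+```azurecli
+az storage blob show --account-name $storageAccountName --account-key $storageAccountKey --container-name $storageContainerName --name $destinationVHDFileName --query properties.copy.status
+```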
-## Generate the SAS address
+## Generate the SAS URI
-There are two common tools used to create a SAS address (URL):
+There are two common tools used to create a SAS address (URI):
-1. **Azure Storage Explorer** ΓÇô Available on the Azure portal.
-2. **Azure CLI** ΓÇô Recommended for non-Windows operating systems and automated or continuous integration environments.
+- **Azure Storage browser** - Available on the Azure portal.
+- **Azure CLI** - Recommended for non-Windows operating systems and automated or continuous integration environments.
-### Using Tool 1: Azure Storage Explorer
+### Using Tool 1: Azure Storage browser
-1. Go to your **Storage Account**.
-2. Open **Storage Explorer**.
+1. Go to your **Storage account**.
+2. Open **Storage browser** and select **blob containers**.
+3. In your **Container**, right-click the VHD file and select **Generate SAS**.
+4. In the **Shared Access Signature** menu that appears, complete the following fields:
- :::image type="content" source="media/create-vm/storge-account-explorer.png" alt-text="Storage account window.":::
-
-3. In the **Container**, right-click the VHD file and select **Get Share Access Signature**.
-4. In the **Shared Access Signature** dialog box, complete the following fields:
-
- 1. Start time ΓÇô Permission start date for VHD access. Provide a date that is one day before the current date.
- 2. Expiry time ΓÇô Permission expiration date for VHD access. Provide a date at least three weeks beyond the current date.
- 3. Permissions ΓÇô Select the Read and List permissions.
- 4. Container-level ΓÇô Check the Generate container-level shared access signature URI check box.
-
- ![Shared access signature dialog box.](media/vm/create-sas-uri-storage-explorer.png)
-
-5. To create the associated SAS URI for this VHD, select **Create**.
-6. Copy the URI and save it to a text file in a secure location. This generated SAS URI is for container-level access. To make it specific, edit the text file to add the VHD name.
-7. Insert your VHD name after the vhds string in the SAS URI (include a forward slash). The final SAS URI should look like this:
-
- `<blob-service-endpoint-url> + /vhds/ + <vhd-name>? + <sas-connection-string>`
-
-8. Repeat these steps for each VHD in the plans you will publish.
+    1. Permissions - Select read permissions. Don't provide write or delete permissions.
+    1. Start date/time - This is the permission start date for VHD access. To protect against UTC time changes, provide a date that is one day before the current date. For example, if the current date is July 15, 2022, set the date as 07/14/2022.
+    1. Expiry date/time - This is the permission expiration date for VHD access. Provide a date at least three weeks beyond the current date.
+
+5. To create the associated SAS URI for this VHD, select **Generate SAS token and URL**.
+6. Copy the Blob SAS URL and save it to a text file in a secure location.
+7. Repeat these steps for each VHD you want to publish.
### Using Tool 2: Azure CLI
-1. Download and install [Microsoft Azure CL](/cli/azure/install-azure-cli)I. Versions are available for Windows, macOS, and various distros of Linux.
-2. Create a PowerShell file (.ps1 file extension), copy in the following code, then save it locally.
+1. In Azure CLI, run the following command.
```azurecli-interactive
- az storage container generate-sas --connection-string 'DefaultEndpointsProtocol=https;AccountName=<account-name>;AccountKey=<account-key>;EndpointSuffix=core.windows.net' --name <container-name> --permissions rl --start '<start-date>' --expiry '<expiry-date>'
+ az storage container generate-sas --connection-string 'DefaultEndpointsProtocol=https;AccountName=<account-name>;AccountKey=<account-key>;EndpointSuffix=core.windows.net' --name <container-name> --permissions r --start '<start-date>' --expiry '<expiry-date>'
```
-3. Edit the file to use the following parameter values. Provide dates in UTC datetime format, such as 2020-04-01T00:00:00Z.
-
- account-name – Your Azure storage account name.
- account-key – Your Azure storage account key.
- start-date – Permission start date for VHD access. Provide a date one day before the current date.
- expiry-date – Permission expiration date for VHD access. Provide a date at least three weeks after the current date.
-
- Here's an example of proper parameter values (at the time of this writing):
-
- ```azurecli-interactive
- az storage container generate-sas --connection-string 'DefaultEndpointsProtocol=https;AccountName=st00009;AccountKey=6L7OWFrlabs7Jn23OaR3rvY5RykpLCNHJhxsbn9ONc+bkCq9z/VNUPNYZRKoEV1FXSrvhqq3aMIDI7N3bSSvPg==;EndpointSuffix=core.windows.net' --name <container-name> --permissions rl --start '2020-04-01T00:00:00Z' --expiry '2021-04-01T00:00:00Z'
- ```
-
-4. Save the changes.
-5. Using one of the following methods, run this script with administrative privileges to create a SAS connection string for container-level access:
-
- - Run the script from the console. In Windows, right-click the script and select **Run as administrator**.
- - Run the script from a PowerShell script editor such as [Windows PowerShell ISE](/powershell/scripting/components/ise/introducing-the-windows-powershell-ise). This screen shows the creation of a SAS connection string within this editor:
-
- [![creation of a SAS connection string within the PowerShell editor](media/vm/create-sas-uri-power-shell-ise.png)](media/vm/create-sas-uri-power-shell-ise.png#lightbox)
-
-6. Copy the SAS connection string and save it to a text file in a secure location. Edit this string to add the VHD location information to create the final SAS URI.
-7. In the Azure portal, go to the blob storage that includes the VHD associated with the new URI.
-8. Copy the URL of the blob service endpoint:
+ Before running the command above, remember to insert the following parameter values.
- ![Copying the URL of the blob service endpoint.](media/vm/create-sas-uri-blob-endpoint.png)
+ | Parameter value | Description |
+ | --- | --- |
+ | account-name | Your Azure storage account name. |
+ | account-key | Your Azure storage account key. |
+ | container-name | Your blob container that hosts the VHD file. |
+ | start-date | This is the permission start date for VHD access. Provide a date one day before the current date. For example, if the current date is July 15, 2022, set the date as 07/14/2022. Provide dates in UTC date/time format (YYYY-MM-DDT00:00:00Z), such as 2022-04-01T00:00:00Z. |
+ | expiry-date | This is the permission expiration date for VHD access. Provide a date at least three weeks after the current date. Provide dates in UTC date/time format (YYYY-MM-DDT00:00:00Z), such as 2022-04-01T00:00:00Z. |
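+
+ For example, if the current date is July 15, 2022, a filled-in command might look like the following. The account name, account key, and container name shown are placeholders, not values from the original article:
+
 ```azurecli-interactive
 az storage container generate-sas \
   --connection-string 'DefaultEndpointsProtocol=https;AccountName=<account-name>;AccountKey=<account-key>;EndpointSuffix=core.windows.net' \
   --name <container-name> \
   --permissions r \
   --start '2022-07-14T00:00:00Z' \
   --expiry '2022-08-15T00:00:00Z'
 ```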
-9. Edit the text file with the SAS connection string from step 6. Create the complete SAS URI using this format:
+1. Copy the SAS connection string and save it to a text file in a secure location. Edit this string to add the VHD location information to create the final SAS URI.
+1. In the Azure portal, go to the blob container that includes the VHD associated with the new URI.
+1. Copy the URL of the blob service endpoint.
+1. Edit the text file with the SAS connection string from step 2. Create the complete SAS URI using this format. Be sure to insert a "?" between the endpoint URL and the connection string.
- `<blob-service-endpoint-url> + /vhds/ + <vhd-name>? + <sas-connection-string>`
+ `<blob-service-endpoint-url>?<sas-connection-string>`
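
   As an illustration only, a complete SAS URI built this way has the following general shape; every value below is a placeholder:

   ```bash
   # Placeholder example of a final SAS URI; the service version, dates, and signature come from your own token.
   SAS_URI="https://<storage-account>.blob.core.windows.net/<container>/<vhd-name>.vhd?sv=<service-version>&st=2022-07-14T00%3A00%3A00Z&se=2022-08-15T00%3A00%3A00Z&sr=c&sp=r&sig=<signature>"
   ```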
### Virtual machine SAS failure messages
-Following are common issues encountered when working with shared access signatures (which are used to identify and share the uploaded VHDs for your solution), along with suggested resolutions.
+This table shows the common errors encountered when providing a shared access signature (SAS) URI in Partner Center, along with suggested resolutions.
| Issue | Failure Message | Fix |
| --- | --- | --- |
-| *Failure in copying images* | | |
-| "?" is not found in SAS URI | `Failure: Copying Images. Not able to download blob using provided SAS Uri.` | Update the SAS URI using recommended tools. |
-| "st" and "se" parameters not in SAS URI | `Failure: Copying Images. Not able to download blob using provided SAS Uri.` | Update the SAS URI with proper **Start Date** and **End Date** values. |
-| "sp=rl" not in SAS URI | `Failure: Copying Images. Not able to download blob using provided SAS Uri.` | Update the SAS URI with permissions set as `Read` and `List`. |
-| SAS URI has white spaces in VHD name | `Failure: Copying Images. Not able to download blob using provided SAS Uri.` | Update the SAS URI to remove white spaces. |
+| "?" is not found in SAS URI | `Must be a valid Azure shared access signature URI.` | Ensure that the SAS URI provided uses the proper syntax and includes the "?" character.<br>Syntax: `<blob-service-endpoint-url>?<sas-connection-string>` |
+| "st" parameter not in SAS URI | `Specified SAS URL cannot be reached.` | Update the SAS URI with proper **Start Date** ("st") value. |
+| "se" parameter not in SAS URI | `The end date parameter (se) is required.` | Update the SAS URI with proper **End Date** ("se") value. |
+| "sp=r" not in SAS URI | `Missing Permissions (sp) must include 'read' (r).` | Update the SAS URI with permissions set as `Read` ("sp=r"). |
| SAS URI Authorization error | `Failure: Copying Images. Not able to download blob due to authorization error.` | Review and correct the SAS URI format. Regenerate if necessary. |
-| SAS URI "st" and "se" parameters do not have full date-time specification | `Failure: Copying Images. Not able to download blob due to incorrect SAS Uri.` | SAS URI **Start Date** and **End Date** parameters (`st` and `se` substrings) must have full date-time format, such as `11-02-2017T00:00:00Z`. Shortened versions are invalid (some commands in Azure CLI may generate shortened values by default). |
+| SAS URI "st" and "se" parameters do not have full date-time specification | `The start time parameter (st) is not a valid date string.`<br>OR<br>`The end date parameter (se) is not a valid date string.` | SAS URI **Start Date** and **End Date** parameters ("st" and "se" substrings) must have the full date-time format (YYYY-MM-DDT00:00:00Z), such as 2017-11-02T00:00:00Z. Shortened versions are invalid (some commands in Azure CLI may generate shortened values by default). |
For details, see [Grant limited access to Azure Storage resources using shared access signatures (SAS)](../storage/common/storage-sas-overview.md).
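
If you want to sanity-check a SAS URI from the command line before submitting it, the following sketch uses curl with a placeholder URI; an HTTP 200 or 206 response code indicates the token grants read access to the blob:

```bash
# Download only the first kilobyte of the blob and print the HTTP status code (placeholder URI).
curl --silent --show-error --range 0-1023 --output /dev/null --write-out "%{http_code}\n" \
  "https://<storage-account>.blob.core.windows.net/<container>/<vhd-name>.vhd?<sas-token>"
```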
For details, see [Grant limited access to Azure Storage resources using shared a
Check the SAS URI before publishing it on Partner Center to avoid any SAS URI-related issues after you submit the request. This process is optional but recommended.

- The URI includes your VHD image filename, including the filename extension `.vhd`.
-- `Sp=rl` appears near the middle of your URI. This string shows Read and List access is specified.
+- `Sp=r` appears near the middle of your URI. This string shows Read permission is granted.
- When `sr=c` appears, this means that container-level access is specified.
- Copy and paste the URI into a browser to test-download the blob (you can cancel the operation before the download completes).

## Next steps

-- If you run into issues, see [VM SAS failure messages](azure-vm-sas-failure-messages.md)
-- [Sign in to Partner Center](https://go.microsoft.com/fwlink/?linkid=2165935)
- [Create a virtual machine offer on Azure Marketplace](azure-vm-offer-setup.md)
+- [Sign in to Partner Center and publish your image by providing the SAS URI](https://go.microsoft.com/fwlink/?linkid=2165935)
marketplace Dynamics 365 Business Central Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/dynamics-365-business-central-availability.md
description: Configure Dynamics 365 Business Central offer availability on Micro
--++ Last updated 11/24/2021
marketplace Dynamics 365 Business Central Offer Listing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/dynamics-365-business-central-offer-listing.md
description: Configure Dynamics 365 Business Central offer listing details on Mi
--++ Last updated 03/15/2022
marketplace Dynamics 365 Business Central Offer Setup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/dynamics-365-business-central-offer-setup.md
description: Create a Dynamics 365 Business Central offer on Microsoft AppSource
--++ Last updated 07/20/2022
marketplace Dynamics 365 Business Central Properties https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/dynamics-365-business-central-properties.md
description: Configure Dynamics 365 Business Central offer properties on Microso
--++ Last updated 11/24/2021
marketplace Dynamics 365 Business Central Supplemental Content https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/dynamics-365-business-central-supplemental-content.md
description: Set up Dynamics 365 Business Central offer supplemental content on
--++ Last updated 12/04/2021
marketplace Dynamics 365 Business Central Technical Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/dynamics-365-business-central-technical-configuration.md
description: Set up Dynamics 365 Business Central offer technical configuration
--++ Last updated 12/03/2021
marketplace Dynamics 365 Customer Engage Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/dynamics-365-customer-engage-availability.md
description: Configure Dynamics 365 apps on Dataverse and Power Apps offer avail
--++ Last updated 05/25/2022
marketplace Dynamics 365 Customer Engage Offer Listing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/dynamics-365-customer-engage-offer-listing.md
description: Configure Dynamics 365 apps on Dataverse and Power App offer listin
--++ Last updated 12/03/2021
marketplace Dynamics 365 Customer Engage Offer Setup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/dynamics-365-customer-engage-offer-setup.md
description: Create a Dynamics 365 apps on Dataverse and Power Apps offer on Mic
--++ Last updated 07/18/2022
marketplace Dynamics 365 Customer Engage Plans https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/dynamics-365-customer-engage-plans.md
description: Configure Dynamics 365 apps on Dataverse and Power Apps offer plans
--++ Last updated 05/25/2022
marketplace Dynamics 365 Customer Engage Properties https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/dynamics-365-customer-engage-properties.md
description: Configure Dynamics 365 apps on Dataverse and Power Apps offer prope
--++ Last updated 12/03/2021
marketplace Dynamics 365 Customer Engage Supplemental Content https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/dynamics-365-customer-engage-supplemental-content.md
description: Set up Dynamics 365 apps on Dataverse and Power Apps offer supplem
--++ Last updated 12/03/2021
marketplace Dynamics 365 Customer Engage Technical Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/dynamics-365-customer-engage-technical-configuration.md
--++ Last updated 12/03/2021
marketplace Dynamics 365 Operations Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/dynamics-365-operations-availability.md
description: Configure Dynamics 365 Operations Apps offer availability on Micros
--++ Last updated 12/04/2021
marketplace Dynamics 365 Operations Offer Listing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/dynamics-365-operations-offer-listing.md
description: Configure Dynamics 365 for Operations Apps offer listing details on
--++ Last updated 12/03/2021
marketplace Dynamics 365 Operations Offer Setup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/dynamics-365-operations-offer-setup.md
description: Create a Dynamics 365 Operations Apps offer on Microsoft AppSource
--++ Last updated 07/20/2022
marketplace Dynamics 365 Operations Properties https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/dynamics-365-operations-properties.md
description: Configure Dynamics 365 Operations Apps offer properties on Microsof
--++ Last updated 12/03/2021
marketplace Dynamics 365 Operations Supplemental Content https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/dynamics-365-operations-supplemental-content.md
description: Set up Dynamics 365 Operations Apps offer supplemental content on M
--++ Last updated 12/03/2021
marketplace Dynamics 365 Operations Technical Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/dynamics-365-operations-technical-configuration.md
description: Set up Dynamics 365 Operations Apps offer technical configuration o
--++ Last updated 12/03/2021
marketplace Dynamics 365 Operations Validation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/dynamics-365-operations-validation.md
description: Functionally validate a Dynamics 365 Operations Apps offer in Micro
--++ Last updated 12/03/2021
marketplace Dynamics 365 Review Publish https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/dynamics-365-review-publish.md
description: Review and publish a Dynamics 365 offer to Microsoft AppSource (Azu
--++ Last updated 08/01/2022
marketplace Marketplace Dynamics 365 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/marketplace-dynamics-365.md
description: Plan Dynamics 365 offers for Microsoft AppSource
--++ Last updated 06/29/2022
marketplace Orders Dashboard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/orders-dashboard.md
Previously updated : 06/06/2022 Last updated : 08/17/2022 # Orders dashboard in commercial marketplace analytics
This table displays a numbered list of the 500 top orders sorted by date of acqu
| Trial End Date | Trial End Date | The date the trial period for this order will end or has ended. | TrialEndDate | | Customer ID | Customer ID | The unique identifier assigned to a customer. A customer may have zero or more Azure Marketplace subscriptions. | CustomerID | | Billing Account ID | Billing Account ID | The identifier of the account on which billing is generated. Map **Billing Account ID** to **customerID** to connect your Payout Transaction Report with the Customer, Order, and Usage Reports. | BillingAccountId |
-| Reference Id | ReferenceId | A key to link orders having usage details in usage report. Map this field value with the value for UsageReference key in usage report. This is applicable for SaaS with custom meters and VM software reservation offer types | ReferenceId |
+| Reference ID | ReferenceId | A key to link orders that have usage details in the usage report. Map this field value to the value of the Reference ID key in the usage report. This is applicable for SaaS with custom meters and VM software reservation offer types. | ReferenceId |
| PlanId | PlanId | The display name of the plan entered when the offer was created in Partner Center. Note that PlanId was originally a numeric number. | PlanId | | Auto Renew | Auto Renew | Indicates whether a subscription is due for an automatic renewal. Possible values are:<br><ul><li>TRUE: Indicates that on the TermEnd the subscription will renew automatically.</li><li>FALSE: Indicates that on the TermEnd the subscription will expire.</li><li>NULL: The product does not support renewals. Indicates that on the TermEnd the subscription will expire. This is displayed "-" on the UI</li></ul> | AutoRenew | | Not available | Event Timestamp | Indicates the timestamp of an order management event, such as an order purchase, cancelation, renewal, and so on | EventTimestamp |
+| Not available | OrderVersion | A key that indicates updated versions of an order purchase. The highest value indicates the latest version of the order. | OrderVersion |
### Orders page filters
migrate Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/policy-reference.md
Title: Built-in policy definitions for Azure Migrate description: Lists Azure Policy built-in policy definitions for Azure Migrate. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/08/2022 Last updated : 08/16/2022
mysql Tutorial Archive Laravel https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/tutorial-archive-laravel.md
Title: 'Tutorial: Build a PHP (Laravel) app with Azure Database for MySQL Flexible Server'
-description: This tutorial explains how to build a PHP app with flexible server.
--
+ Title: 'Tutorial: Deploy a PHP (Laravel) app with Azure Database for MySQL - Flexible Server on Azure App Service'
+description: This tutorial explains how to build and deploy a PHP Laravel app with MySQL flexible server, secured within a VNet. This is an archived version; the new version is at mysql/tutorial-php-database-app.
++ ms.devlang: php Previously updated : 9/21/2020 Last updated : 8/11/2020
-# Tutorial: Build a PHP (Laravel) and MySQL Flexible Server app in Azure App Service
+# Tutorial: Deploy a PHP (Laravel) and MySQL Flexible Server app on Azure App Service
[!INCLUDE[applies-to-mysql-flexible-server](../includes/applies-to-mysql-flexible-server.md)]
mysql Tutorial Php Database App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/tutorial-php-database-app.md
Title: 'Tutorial: Build a PHP app with Azure Database for MySQL - Flexible Server'
-description: This tutorial explains how to build a PHP app with flexible server and deploy it on Azure App Service.
+ Title: 'Tutorial: Build a PHP (Laravel) app with Azure Database for MySQL - Flexible Server on Azure App Service'
+description: This tutorial explains how to build and deploy a PHP Laravel app with MySQL flexible server, secured within a VNet.
ms.devlang: php Previously updated : 6/21/2022 Last updated : 8/11/2020
-# Tutorial: Deploy a PHP and MySQL - Flexible Server app on Azure App Service
+# Tutorial: Build a PHP (Laravel) and MySQL Flexible Server app on Azure App Service
[!INCLUDE[applies-to-mysql-flexible-server](../includes/applies-to-mysql-flexible-server.md)]
-[Azure App Service](../../app-service/overview.md) provides a highly scalable, self-patching web hosting service using the Linux operating system.
+[Azure App Service](../../app-service/overview.md) provides a highly scalable, self-patching web hosting service using the Linux operating system. This tutorial shows how to create a secure PHP app in Azure App Service that's connected to a MySQL database (using Azure Database for MySQL Flexible Server). When you're finished, you'll have a [Laravel](https://laravel.com/) app running on Azure App Service on Linux.
-This tutorial shows how to build and deploy a sample PHP application to Azure App Service, and integrate it with Azure Database for MySQL - Flexible Server on the back end.
+
+In this tutorial, you learn how to:
-In this tutorial, you'll learn how to:
> [!div class="checklist"]
->
-> * Create a MySQL flexible server
-> * Connect a PHP app to the MySQL flexible server
-> * Deploy the app to Azure App Service
+> * Create a secure-by-default PHP and MySQL app in Azure
+> * Configure connection secrets to MySQL using app settings
+> * Deploy application code using GitHub Actions
> * Update and redeploy the app-
+> * Run database migrations securely
+> * Stream diagnostic logs from Azure
+> * Manage the app in the Azure portal
## Prerequisites

-- [Install Git](https://git-scm.com/).
-- The [Azure Command-Line Interface (CLI)](/cli/azure/install-azure-cli).
- An Azure subscription [!INCLUDE [flexible-server-free-trial-note](../includes/flexible-server-free-trial-note.md)]
-## Create an Azure Database for MySQL flexible server
-
-First, we'll provision a MySQL flexible server with public access connectivity, configure firewall rules to allow the application to access the server, and create a production database.
-
-To learn how to use private access connectivity instead and isolate app and database resources in a virtual network, see [Tutorial: Connect an App Services Web app to an Azure Database for MySQL flexible server in a virtual network](tutorial-webapp-server-vnet.md).
-
-### Create a resource group
-
-An Azure resource group is a logical group in which Azure resources are deployed and managed. Let's create a resource group *rg-php-demo* using the [az group create](/cli/azure/group#az-group-create) command in the *centralus* location.
-
-1. Open command prompt.
-1. Sign in to your Azure account.
- ```azurecli-interactive
- az login
- ```
-1. Choose your Azure subscription.
- ```azurecli-interactive
- az account set -s <your-subscription-ID>
- ```
-1. Create the resource group.
- ```azurecli-interactive
- az group create --name rg-php-demo --location centralus
- ```
-
-### Create a MySQL flexible server
+## Sample application
-1. To create a MySQL flexible server with public access connectivity, run the following [`az flexible-server create`](/cli/azure/mysql/server#az-mysql-flexible-server-create) command. Replace your values for server name, admin username and password.
+To follow along with this tutorial, clone or download the sample application from the repository:
- ```azurecli-interactive
- az mysql flexible-server create \
- --name <your-mysql-server-name> \
- --resource-group rg-php-demo \
- --location centralus \
- --admin-user <your-mysql-admin-username> \
- --admin-password <your-mysql-admin-password>
- ```
+```terminal
+git clone https://github.com/Azure-Samples/laravel-tasks.git
+```
-   You've now created a flexible server in the CentralUS region. The server is based on the Burstable B1MS compute SKU, with 32 GB storage, a 7-day backup retention period, and configured with public access connectivity.
+If you want to run the application locally, do the following:
-1. Next, to create a firewall rule for your MySQL flexible server to allow client connections, run the following command. When both starting IP and end IP are set to 0.0.0.0, only other Azure resources (like App Services apps, VMs, AKS cluster, etc.) can connect to the flexible server.
+- In **.env**, configure the database settings (like `DB_DATABASE`, `DB_USERNAME`, and `DB_PASSWORD`) using settings in your local MySQL database. You need a local MySQL server to run this sample.
+- From the root of the repository, start Laravel with the following commands:
- ```azurecli-interactive
- az mysql flexible-server firewall-rule create \
- --name <your-mysql-server-name> \
- --resource-group rg-php-demo \
- --rule-name AllowAzureIPs \
- --start-ip-address 0.0.0.0 \
- --end-ip-address 0.0.0.0
+ ```terminal
+ composer install
+ php artisan migrate
+ php artisan key:generate
+ php artisan serve
```
-1. To create a new MySQL production database *sampledb* to use with the PHP application, run the following command:
+## 1 - Create App Service and MySQL resources
- ```azurecli-interactive
- az mysql flexible-server db create \
- --resource-group rg-php-demo \
- --server-name <your-mysql-server-name> \
- --database-name sampledb
- ```
+In this step, you create the Azure resources. The steps used in this tutorial create an App Service and Azure Database for MySQL - Flexible Server configuration that's secure by default. For the creation process, you'll specify:
+* The **Name** for the web app. It's the name used as part of the DNS name for your webapp in the form of `https://<app-name>.azurewebsites.net`.
+* The **Runtime** for the app. It's where you select the version of PHP to use for your app.
+* The **Resource Group** for the app. A resource group lets you group (in a logical container) all the Azure resources needed for the application.
-## Build your application
+Sign in to the [Azure portal](https://portal.azure.com/) and follow these steps to create your Azure App Service resources.
-For the purposes of this tutorial, we'll use a sample PHP application that displays and manages a product catalog. The application provides basic functionalities like viewing the products in the catalog, adding new products, updating existing item prices and removing products.
+| Instructions | Screenshot |
+|:-|--:|
+| [!INCLUDE [Create app service step 1](./includes/tutorial-php-database-app/azure-portal-create-app-mysql-1.md)] | :::image type="content" source="./media/tutorial-php-database-app/azure-portal-create-app-mysql-1-240px.png" alt-text="A screenshot showing how to use the search box in the top tool bar to find the Web App + Database creation wizard." lightbox="./media/tutorial-php-database-app/azure-portal-create-app-mysql-1.png"::: |
+| [!INCLUDE [Create app service step 2](./includes/tutorial-php-database-app/azure-portal-create-app-mysql-2.md)] | :::image type="content" source="./media/tutorial-php-database-app/azure-portal-create-app-mysql-2-240px.png" alt-text="A screenshot showing how to configure a new app and database in the Web App + Database wizard." lightbox="./media/tutorial-php-database-app/azure-portal-create-app-mysql-2.png"::: |
+| [!INCLUDE [Create app service step 3](./includes/tutorial-php-database-app/azure-portal-create-app-mysql-3.md)] | :::image type="content" source="./media/tutorial-php-database-app/azure-portal-create-app-mysql-3-240px.png" alt-text="A screenshot showing the form to fill out to create a web app in Azure." lightbox="./media/tutorial-php-database-app/azure-portal-create-app-mysql-3.png"::: |
-To learn more about the application code, go ahead and explore the app in the [GitHub repository](https://github.com/Azure-Samples/php-mysql-app-service). To learn how to connect a PHP app to MySQL flexible server, refer [Quickstart: Connect using PHP](connect-php.md).
+## 2 - Set up database connectivity
-In this tutorial, we'll directly clone the coded sample app and learn how to deploy it on Azure App Service.
+The creation wizard generated a connection string to the database for you, but not in a format that's usable for your code yet. In this step, you create [app settings](../../app-service/configure-common.md#configure-app-settings) with the format that your app needs.
-1. To clone the sample application repository and change to the repository root, run the following commands:
+| Instructions | Screenshot |
+|:-|--:|
+| [!INCLUDE [Get connection string step 1](./includes/tutorial-php-database-app/azure-portal-get-connection-string-1.md)] | :::image type="content" source="./media/tutorial-php-database-app/azure-portal-get-connection-string-1-240px.png" alt-text="A screenshot showing how to open the configuration page in App Service." lightbox="./media/tutorial-php-database-app/azure-portal-get-connection-string-1.png"::: |
+| [!INCLUDE [Get connection string step 2](./includes/tutorial-php-database-app/azure-portal-get-connection-string-2.md)] | :::image type="content" source="./media/tutorial-php-database-app/azure-portal-get-connection-string-2-240px.png" alt-text="A screenshot showing how to see the autogenerated connection string." lightbox="./media/tutorial-php-database-app/azure-portal-get-connection-string-2.png"::: |
+| [!INCLUDE [Get connection string step 3](./includes/tutorial-php-database-app/azure-portal-get-connection-string-3.md)] | :::image type="content" source="./media/tutorial-php-database-app/azure-portal-get-connection-string-3-240px.png" alt-text="A screenshot showing how to create an app setting." lightbox="./media/tutorial-php-database-app/azure-portal-get-connection-string-3.png"::: |
+| [!INCLUDE [Get connection string step 4](./includes/tutorial-php-database-app/azure-portal-get-connection-string-4.md)] | :::image type="content" source="./media/tutorial-php-database-app/azure-portal-get-connection-string-4-240px.png" alt-text="A screenshot showing all the required app settings in the configuration page." lightbox="./media/tutorial-php-database-app/azure-portal-get-connection-string-4.png"::: |
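
If you prefer scripting to the portal, the same app settings can also be created with the Azure CLI. This is only a sketch: it assumes the Laravel sample reads `DB_HOST`, `DB_DATABASE`, `DB_USERNAME`, and `DB_PASSWORD`, and every value shown is a placeholder, not output from the creation wizard.

```azurecli-interactive
az webapp config appsettings set \
  --name <your-app-name> \
  --resource-group <your-resource-group> \
  --settings DB_HOST="<your-server-name>.mysql.database.azure.com" \
             DB_DATABASE="<your-database-name>" \
             DB_USERNAME="<your-admin-username>" \
             DB_PASSWORD="<your-admin-password>"
```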
- ```bash
- git clone https://github.com/Azure-Samples/php-mysql-app-service.git
- cd php-mysql-app-service
- ```
+## 3 - Deploy sample code
-1. Run the following command to ensure that the default branch is `main`.
+In this step, you'll configure GitHub deployment using GitHub Actions. It's just one of many ways to deploy to App Service, but also a great way to have continuous integration in your deployment process. By default, every `git push` to your GitHub repository will kick off the build and deploy action. You'll make some changes to your codebase with Visual Studio Code directly in the browser, then let GitHub Actions deploy automatically for you.
- ```bash
- git branch -m main
- ```
+| Instructions | Screenshot |
+|:-|--:|
+| [!INCLUDE [Deploy sample code step 1](./includes/tutorial-php-database-app/azure-portal-deploy-sample-code-1.md)] | :::image type="content" source="./media/tutorial-php-database-app/azure-portal-deploy-sample-code-1-240px.png" alt-text="A screenshot showing how to create a fork of the sample GitHub repository." lightbox="./media/tutorial-php-database-app/azure-portal-deploy-sample-code-1.png"::: |
+| [!INCLUDE [Deploy sample code step 2](./includes/tutorial-php-database-app/azure-portal-deploy-sample-code-2.md)] | :::image type="content" source="./media/tutorial-php-database-app/azure-portal-deploy-sample-code-2-240px.png" alt-text="A screenshot showing how to open the Visual Studio Code browser experience in GitHub." lightbox="./media/tutorial-php-database-app/azure-portal-deploy-sample-code-2.png"::: |
+| [!INCLUDE [Deploy sample code step 3](./includes/tutorial-php-database-app/azure-portal-deploy-sample-code-3.md)] | :::image type="content" source="./media/tutorial-php-database-app/azure-portal-deploy-sample-code-3-240px.png" alt-text="A screenshot showing Visual Studio Code in the browser and an opened file." lightbox="./media/tutorial-php-database-app/azure-portal-deploy-sample-code-3.png"::: |
+| [!INCLUDE [Deploy sample code step 4](./includes/tutorial-php-database-app/azure-portal-deploy-sample-code-4.md)] | :::image type="content" source="./media/tutorial-php-database-app/azure-portal-deploy-sample-code-4-240px.png" alt-text="A screenshot showing how to open the deployment center in App Service." lightbox="./media/tutorial-php-database-app/azure-portal-deploy-sample-code-4.png"::: |
+| [!INCLUDE [Deploy sample code step 5](./includes/tutorial-php-database-app/azure-portal-deploy-sample-code-5.md)] | :::image type="content" source="./media/tutorial-php-database-app/azure-portal-deploy-sample-code-5-240px.png" alt-text="A screenshot showing how to configure CI/CD using GitHub Actions." lightbox="./media/tutorial-php-database-app/azure-portal-deploy-sample-code-5.png"::: |
+| [!INCLUDE [Deploy sample code step 6](./includes/tutorial-php-database-app/azure-portal-deploy-sample-code-6.md)] | :::image type="content" source="./media/tutorial-php-database-app/azure-portal-deploy-sample-code-6-240px.png" alt-text="A screenshot showing how to open deployment logs in the deployment center." lightbox="./media/tutorial-php-database-app/azure-portal-deploy-sample-code-6.png"::: |
+| [!INCLUDE [Deploy sample code step 7](./includes/tutorial-php-database-app/azure-portal-deploy-sample-code-7.md)] | :::image type="content" source="./media/tutorial-php-database-app/azure-portal-deploy-sample-code-7-240px.png" alt-text="A screenshot showing how to commit your changes in the Visual Studio Code browser experience." lightbox="./media/tutorial-php-database-app/azure-portal-deploy-sample-code-7.png"::: |
-## Create and configure an Azure App Service Web App
+## 4 - Generate database schema
-In Azure App Service (Web Apps, API Apps, or Mobile Apps), an app always runs in an App Service plan. An App Service plan defines a set of compute resources for a web app to run. In this step, we'll create an Azure App Service plan and an App Service web app within it, which will host the sample application.
+The creation wizard puts the MySQL database server behind a private endpoint, so it's accessible only from the virtual network. Because the App Service app is already integrated with the virtual network, the easiest way to run database migrations against your database is directly from within the App Service container.
-1. To create an App Service plan in the Free pricing tier, run the following command:
+| Instructions | Screenshot |
+|:-|--:|
+| [!INCLUDE [Generate database schema step 1](./includes/tutorial-php-database-app/azure-portal-generate-db-schema-1.md)] | :::image type="content" source="./media/tutorial-php-database-app/azure-portal-generate-db-schema-1-240px.png" alt-text="A screenshot showing how to open the SSH shell for your app from the Azure portal." lightbox="./media/tutorial-php-database-app/azure-portal-generate-db-schema-1.png"::: |
+| [!INCLUDE [Generate database schema step 2](./includes/tutorial-php-database-app/azure-portal-generate-db-schema-2.md)] | :::image type="content" source="./media/tutorial-php-database-app/azure-portal-generate-db-schema-2-240px.png" alt-text="A screenshot showing the commands to run in the SSH shell and their output." lightbox="./media/tutorial-php-database-app/azure-portal-generate-db-schema-2.png"::: |
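
For reference, the migration typically amounts to the following commands in that SSH session. This is a sketch that assumes the Laravel sample and the default App Service on Linux app root of `/home/site/wwwroot`:

```bash
# Run from the SSH session inside the App Service container (paths assume the default Linux app root).
cd /home/site/wwwroot
php artisan migrate --force
```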
- ```azurecli-interactive
- az appservice plan create --name plan-php-demo \
- --resource-group rg-php-demo \
- --location centralus \
- --sku FREE --is-linux
- ```
-
-1. If you want to deploy an application to Azure web app using deployment methods like FTP or Local Git, you need to configure a deployment user with username and password credentials. After you configure your deployment user, you can take advantage of it for all your Azure App Service deployments.
+## 5 - Change site root
- ```azurecli-interactive
- az webapp deployment user set \
- --user-name <your-deployment-username> \
- --password <your-deployment-password>
- ```
+The [Laravel application lifecycle](https://laravel.com/docs/8.x/lifecycle#lifecycle-overview) begins in the **/public** directory, but the default PHP 8.0 container for App Service uses Nginx, which serves from the application's root directory. To change the site root, you need to change the Nginx configuration file in the PHP 8.0 container (*/etc/nginx/sites-available/default*). For your convenience, the sample repository contains a custom configuration file called *default*. As noted previously, you don't want to replace this file using the SSH shell, because your changes will be lost after an app restart.
-1. To create an App Service web app with PHP 8.0 runtime and to configure the Local Git deployment option to deploy your app from a Git repository on your local computer, run the following command. Replace `<your-app-name>` with a globally unique app name (valid characters are a-z, 0-9, and -).
+| Instructions | Screenshot |
+|:-|--:|
+| [!INCLUDE [Change site root step 1](./includes/tutorial-php-database-app/azure-portal-change-site-root-1.md)] | :::image type="content" source="./media/tutorial-php-database-app/azure-portal-change-site-root-1-240px.png" alt-text="A screenshot showing how to open the general settings tab in the configuration page of App Service." lightbox="./media/tutorial-php-database-app/azure-portal-change-site-root-1.png"::: |
+| [!INCLUDE [Change site root step 2](./includes/tutorial-php-database-app/azure-portal-change-site-root-2.md)] | :::image type="content" source="./media/tutorial-php-database-app/azure-portal-change-site-root-2-240px.png" alt-text="A screenshot showing how to configure a startup command in App Service." lightbox="./media/tutorial-php-database-app/azure-portal-change-site-root-2.png"::: |
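
A startup command for this scenario might look like the following sketch. It assumes the sample's custom *default* file is deployed to the app root and copies it over the container's Nginx site configuration on every start, so the change survives restarts:

```bash
# Replace the container's Nginx site configuration with the custom one from the repo, then reload Nginx (paths are assumptions).
cp /home/site/wwwroot/default /etc/nginx/sites-available/default && service nginx reload
```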
- ```azurecli-interactive
- az webapp create \
- --resource-group rg-php-demo \
- --plan plan-php-demo \
- --name <your-app-name> \
- --runtime "PHP|8.0" \
- --deployment-local-git
- ```
+## 6 - Browse to the app
- > [!IMPORTANT]
- > In the Azure CLI output, the URL of the Git remote is displayed in the deploymentLocalGitUrl property, with the format `https://<username>@<app-name>.scm.azurewebsites.net/<app-name>.git`. Save this URL, as you'll need it later.
+| Instructions | Screenshot |
+|:-|--:|
+| [!INCLUDE [Browse to app step 1](./includes/tutorial-php-database-app/azure-portal-browse-app-1.md)] | :::image type="content" source="./media/tutorial-php-database-app/azure-portal-browse-app-1-240px.png" alt-text="A screenshot showing how to launch an App Service from the Azure portal." lightbox="./media/tutorial-php-database-app/azure-portal-browse-app-1.png"::: |
+| [!INCLUDE [Browse to app step 2](./includes/tutorial-php-database-app/azure-portal-browse-app-2.md)] | :::image type="content" source="./media/tutorial-php-database-app/azure-portal-browse-app-2-240px.png" alt-text="A screenshot of the Laravel app running in App Service." lightbox="./media/tutorial-php-database-app/azure-portal-browse-app-2.png"::: |
-1. Next we'll configure the MySQL flexible server database connection settings on the Web App.
+## 7 - Stream diagnostic logs
- The `config.php` file in the sample PHP application retrieves the database connection information (server name, database name, server username and password) from environment variables using the `getenv()` function. In App Service, to set environment variables as **Application Settings** (*appsettings*), run the following command:
+| Instructions | Screenshot |
+|:-|--:|
+| [!INCLUDE [Stream diagnostic logs step 1](./includes/tutorial-php-database-app/azure-portal-stream-diagnostic-logs-1.md)] | :::image type="content" source="./media/tutorial-php-database-app/azure-portal-stream-diagnostic-logs-1-240px.png" alt-text="A screenshot showing how to enable native logs in App Service in the Azure portal." lightbox="./media/tutorial-php-database-app/azure-portal-stream-diagnostic-logs-1.png"::: |
+| [!INCLUDE [Stream diagnostic logs step 2](./includes/tutorial-php-database-app/azure-portal-stream-diagnostic-logs-2.md)] | :::image type="content" source="./media/tutorial-php-database-app/azure-portal-stream-diagnostic-logs-2-240px.png" alt-text="A screenshot showing how to view the log stream in the Azure portal." lightbox="./media/tutorial-php-database-app/azure-portal-stream-diagnostic-logs-2.png"::: |
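
The same log stream is also available from the Azure CLI, which can be convenient for scripting; the resource names below are placeholders:

```azurecli-interactive
az webapp log tail --resource-group <your-resource-group> --name <your-app-name>
```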
- ```azurecli-interactive
- az webapp config appsettings set \
- --name <your-app-name> \
- --resource-group rg-php-demo \
- --settings DB_HOST="<your-server-name>.mysql.database.azure.com" \
- DB_DATABASE="sampledb" \
- DB_USERNAME="<your-mysql-admin-username>" \
- DB_PASSWORD="<your-mysql-admin-password>" \
- MYSQL_SSL="true"
- ```
-
- Alternatively, you can use Service Connector to establish a connection between the App Service app and the MySQL flexible server. For more details, see [Integrate Azure Database for MySQL with Service Connector](../../service-connector/how-to-integrate-mysql.md).
+## Clean up resources
-## Deploy your application using Local Git
+When you're finished, you can delete all of the resources from your Azure subscription by deleting the resource group.
-Now, we'll deploy the sample PHP application to Azure App Service using the Local Git deployment option.
+| Instructions | Screenshot |
+|:-|--:|
+| [!INCLUDE [Remove resource group Azure portal 1](./includes/tutorial-php-database-app/azure-portal-clean-up-resources-1.md)] | :::image type="content" source="./media/tutorial-php-database-app/azure-portal-clean-up-resources-1-240px.png" alt-text="A screenshot showing how to search for and navigate to a resource group in the Azure portal." lightbox="./media/tutorial-php-database-app/azure-portal-clean-up-resources-1.png"::: |
+| [!INCLUDE [Remove resource group Azure portal 2](./includes/tutorial-php-database-app/azure-portal-clean-up-resources-2.md)] | :::image type="content" source="./media/tutorial-php-database-app/azure-portal-clean-up-resources-2-240px.png" alt-text="A screenshot showing the location of the Delete Resource Group button in the Azure portal." lightbox="./media/tutorial-php-database-app/azure-portal-clean-up-resources-2.png"::: |
+| [!INCLUDE [Remove resource group Azure portal 3](./includes/tutorial-php-database-app/azure-portal-clean-up-resources-3.md)] | :::image type="content" source="./media/tutorial-php-database-app/azure-portal-clean-up-resources-3-240px.png" alt-text="A screenshot of the confirmation dialog for deleting a resource group in the Azure portal." lightbox="./media/tutorial-php-database-app/azure-portal-clean-up-resources-3.png"::: |
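
Equivalently, you can delete the resource group from the Azure CLI; the group name is whatever you chose when creating the resources:

```azurecli-interactive
az group delete --name <your-resource-group>
```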
-1. Since you're deploying the main branch, you need to set the default deployment branch for your App Service app to main. To set the DEPLOYMENT_BRANCH under **Application Settings**, run the following command:
+## Frequently asked questions
- ```azurecli-interactive
- az webapp config appsettings set \
- --name <your-app-name> \
- --resource-group rg-php-demo \
- --settings DEPLOYMENT_BRANCH='main'
- ```
+- [How much does this setup cost?](#how-much-does-this-setup-cost)
+- [How do I connect to the MySQL database that's secured behind the virtual network with other tools?](#how-do-i-connect-to-the-mysql-database-thats-secured-behind-the-virtual-network-with-other-tools)
+- [How does local app development work with GitHub Actions?](#how-does-local-app-development-work-with-github-actions)
+- [Why is the GitHub Actions deployment so slow?](#why-is-the-github-actions-deployment-so-slow)
-1. Verify that you are in the application repository's root directory.
+#### How much does this setup cost?
-1. To add an Azure remote to your local Git repository, run the following command.
+Pricing for the created resources is as follows:
- **Note:** Replace `<deploymentLocalGitUrl>` with the URL of the Git remote that you saved in the **Create an App Service web app** step.
+- The App Service plan is created in **Premium V2** tier and can be scaled up or down. See [App Service pricing](https://azure.microsoft.com/pricing/details/app-service/linux/).
+- The MySQL flexible server is created in **B1ms** tier and can be scaled up or down. With an Azure free account, **B1ms** tier is free for 12 months, up to the monthly limits. See [Azure Database for MySQL pricing](https://azure.microsoft.com/pricing/details/mysql/flexible-server/).
+- The virtual network doesn't incur a charge unless you configure extra functionality, such as peering. See [Azure Virtual Network pricing](https://azure.microsoft.com/pricing/details/virtual-network/).
+- The private DNS zone incurs a small charge. See [Azure DNS pricing](https://azure.microsoft.com/pricing/details/dns/).
- ```azurecli-interactive
- git remote add azure <deploymentLocalGitUrl>
- ```
+#### How do I connect to the MySQL database that's secured behind the virtual network with other tools?
-1. To deploy your app by performing a `git push` to the Azure remote, run the following command. When Git Credential Manager prompts you for credentials, enter the deployment credentials that you created in **Configure a deployment user** step.
+- For basic access from a command-line tool, you can run `mysql` from the app's SSH terminal.
+- To connect from a desktop tool like MySQL Workbench, your machine must be within the virtual network. For example, it could be an Azure VM that's connected to one of the subnets, or a machine in an on-premises network that has a [site-to-site VPN](../../vpn-gateway/vpn-gateway-about-vpngateways.md) connection with the Azure virtual network.
+- You can also [integrate Azure Cloud Shell](../../cloud-shell/private-vnet.md) with the virtual network.
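
For example, a basic `mysql` invocation from the app's SSH terminal might look like the following; the host, user, and database values are placeholders that you replace with the values from your connection string:

```bash
# Connect from inside the virtual network; you're prompted for the password.
mysql --host=<your-server-name>.mysql.database.azure.com --user=<admin-username> --password --database=<database-name>
```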
- ```azurecli-interactive
- git push azure main
- ```
+#### How does local app development work with GitHub Actions?
-The deployment may take a few minutes to succeed.
+Taking the autogenerated workflow file from App Service as an example, each `git push` kicks off a new build and deployment run. From a local clone of the GitHub repository, you make the desired updates and push them to GitHub. For example:
-## Test your application
-
-Finally, test the application by browsing to `https://<app-name>.azurewebsites.net`, and then add, view, update or delete items from the product catalog.
-
+```terminal
+git add .
+git commit -m "<some-message>"
+git push origin main
+```
-Congratulations! You have successfully deployed a sample PHP application to Azure App Service and integrated it with Azure Database for MySQL - Flexible Server on the back end.
+#### Why is the GitHub Actions deployment so slow?
-## Update and redeploy the app
+The autogenerated workflow file from App Service defines a build-then-deploy, two-job run. Because each job runs in its own clean environment, the workflow file ensures that the `deploy` job has access to the files from the `build` job:
-To update the Azure app, make the necessary code changes, commit all the changes in Git, and then push the code changes to Azure.
+- At the end of the `build` job, [upload files as artifacts](https://docs.github.com/actions/using-workflows/storing-workflow-data-as-artifacts).
+- At the beginning of the `deploy` job, download the artifacts.
-```bash
-git add .
-git commit -m "Update Azure app"
-git push azure main
-```
+Most of the time taken by the two-job process is spent uploading and downloading artifacts. If you want, you can simplify the workflow file by combining the two jobs into one, which eliminates the need for the upload and download steps.
-Once the `git push` is complete, navigate to or refresh the Azure app to test the new functionality.
+## Summary
-## Clean up resources
+In this tutorial, you learned how to:
-In this tutorial, you created all the Azure resources in a resource group. If you don't expect to need these resources in the future, delete the resource group by running the following command in the Cloud Shell:
+> [!div class="checklist"]
+> * Create a secure-by-default PHP and MySQL app in Azure
+> * Configure connection secrets to MySQL using app settings
+> * Deploy application code using GitHub Actions
+> * Update and redeploy the app
+> * Run database migrations securely
+> * Stream diagnostic logs from Azure
+> * Manage the app in the Azure portal
-```azurecli-interactive
-az group delete --name rg-php-demo
-```
+<a name="next"></a>
## Next steps
+> [!div class="nextstepaction"]
+> [Tutorial: Map custom DNS name to your app](../../app-service/app-service-web-tutorial-custom-domain.md)
> [!div class="nextstepaction"] > [How to manage your resources in Azure portal](../../azure-resource-manager/management/manage-resources-portal.md)- > [!div class="nextstepaction"] > [How to manage your server](how-to-manage-server-cli.md)-
mysql Tutorial Simple Php Mysql App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/tutorial-simple-php-mysql-app.md
+
+ Title: 'Tutorial: Build a PHP app with Azure Database for MySQL - Flexible Server'
+description: This tutorial explains how to build a PHP app with flexible server and deploy it on Azure App Service.
+++++
+ms.devlang: php
Last updated : 8/11/2022+++
+# Tutorial: Deploy a PHP and MySQL - Flexible Server app on Azure App Service
++
+[Azure App Service](../../app-service/overview.md) provides a highly scalable, self-patching web hosting service using the Linux operating system.
+
+This tutorial shows how to build and deploy a sample PHP application to Azure App Service, and integrate it with Azure Database for MySQL - Flexible Server on the back end. Here you'll use public access connectivity (allowed IP addresses) in the flexible server to connect to the App Service app.
+
+In this tutorial, you'll learn how to:
+> [!div class="checklist"]
+>
+> * Create a MySQL flexible server
+> * Connect a PHP app to the MySQL flexible server
+> * Deploy the app to Azure App Service
+> * Update and redeploy the app
++
+## Prerequisites
+
+- [Install Git](https://git-scm.com/).
+- The [Azure Command-Line Interface (CLI)](/cli/azure/install-azure-cli).
+- An Azure subscription [!INCLUDE [flexible-server-free-trial-note](../includes/flexible-server-free-trial-note.md)]
+
+## Create an Azure Database for MySQL flexible server
+
+First, we'll provision a MySQL flexible server with public access connectivity, configure firewall rules to allow the application to access the server, and create a production database.
+
+To learn how to use private access connectivity instead and isolate app and database resources in a virtual network, see [Tutorial: Connect an App Services Web app to an Azure Database for MySQL flexible server in a virtual network](tutorial-webapp-server-vnet.md).
+
+### Create a resource group
+
+An Azure resource group is a logical group in which Azure resources are deployed and managed. Let's create a resource group *rg-php-demo* using the [az group create](/cli/azure/group#az-group-create) command in the *centralus* location.
+
+1. Open command prompt.
+1. Sign in to your Azure account.
+ ```azurecli-interactive
+ az login
+ ```
+1. Choose your Azure subscription.
+ ```azurecli-interactive
+ az account set -s <your-subscription-ID>
+ ```
+1. Create the resource group.
+ ```azurecli-interactive
+ az group create --name rg-php-demo --location centralus
+ ```
+
+### Create a MySQL flexible server
+
+1. To create a MySQL flexible server with public access connectivity, run the following [`az flexible-server create`](/cli/azure/mysql/server#az-mysql-flexible-server-create) command. Replace your values for server name, admin username and password.
+
+ ```azurecli-interactive
+ az mysql flexible-server create \
+ --name <your-mysql-server-name> \
+ --resource-group rg-php-demo \
+ --location centralus \
+ --admin-user <your-mysql-admin-username> \
+ --admin-password <your-mysql-admin-password>
+ ```
+
+    You've now created a flexible server in the CentralUS region. The server is based on the Burstable B1MS compute SKU, with 32 GB storage, a 7-day backup retention period, and configured with public access connectivity.
+
+1. Next, to create a firewall rule for your MySQL flexible server to allow client connections, run the following command. When both starting IP and end IP are set to 0.0.0.0, only other Azure resources (like App Services apps, VMs, AKS cluster, etc.) can connect to the flexible server.
+
+ ```azurecli-interactive
+ az mysql flexible-server firewall-rule create \
+ --name <your-mysql-server-name> \
+ --resource-group rg-php-demo \
+ --rule-name AllowAzureIPs \
+ --start-ip-address 0.0.0.0 \
+ --end-ip-address 0.0.0.0
+ ```
+
+1. To create a new MySQL production database *sampledb* to use with the PHP application, run the following command:
+
+ ```azurecli-interactive
+ az mysql flexible-server db create \
+ --resource-group rg-php-demo \
+ --server-name <your-mysql-server-name> \
+ --database-name sampledb
+ ```
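
Optionally, before moving on, you can confirm that the server and database exist. This quick check isn't part of the original steps:

```azurecli-interactive
# Show the flexible server's properties, including its state.
az mysql flexible-server show \
  --resource-group rg-php-demo \
  --name <your-mysql-server-name>

# List the databases on the server; sampledb should appear in the output.
az mysql flexible-server db list \
  --resource-group rg-php-demo \
  --server-name <your-mysql-server-name>
```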
++
+## Build your application
+
+For the purposes of this tutorial, we'll use a sample PHP application that displays and manages a product catalog. The application provides basic functionalities like viewing the products in the catalog, adding new products, updating existing item prices and removing products.
+
+To learn more about the application code, go ahead and explore the app in the [GitHub repository](https://github.com/Azure-Samples/php-mysql-app-service). To learn how to connect a PHP app to MySQL flexible server, refer [Quickstart: Connect using PHP](connect-php.md).
+
+In this tutorial, we'll directly clone the coded sample app and learn how to deploy it on Azure App Service.
+
+1. To clone the sample application repository and change to the repository root, run the following commands:
+
+ ```bash
+ git clone https://github.com/Azure-Samples/php-mysql-app-service.git
+ cd php-mysql-app-service
+ ```
+
+1. Run the following command to ensure that the default branch is `main`.
+
+ ```bash
+ git branch -m main
+ ```
+
+## Create and configure an Azure App Service Web App
+
+In Azure App Service (Web Apps, API Apps, or Mobile Apps), an app always runs in an App Service plan. An App Service plan defines a set of compute resources for a web app to run. In this step, we'll create an Azure App Service plan and an App Service web app within it, which will host the sample application.
+
+1. To create an App Service plan in the Free pricing tier, run the following command:
+
+ ```azurecli-interactive
+ az appservice plan create --name plan-php-demo \
+ --resource-group rg-php-demo \
+ --location centralus \
+ --sku FREE --is-linux
+ ```
+
+1. If you want to deploy an application to Azure web app using deployment methods like FTP or Local Git, you need to configure a deployment user with username and password credentials. After you configure your deployment user, you can take advantage of it for all your Azure App Service deployments.
+
+ ```azurecli-interactive
+ az webapp deployment user set \
+ --user-name <your-deployment-username> \
+ --password <your-deployment-password>
+ ```
+
+1. To create an App Service web app with PHP 8.0 runtime and to configure the Local Git deployment option to deploy your app from a Git repository on your local computer, run the following command. Replace `<your-app-name>` with a globally unique app name (valid characters are a-z, 0-9, and -).
+
+ ```azurecli-interactive
+ az webapp create \
+ --resource-group rg-php-demo \
+ --plan plan-php-demo \
+ --name <your-app-name> \
+ --runtime "PHP|8.0" \
+ --deployment-local-git
+ ```
+
+ > [!IMPORTANT]
+ > In the Azure CLI output, the URL of the Git remote is displayed in the deploymentLocalGitUrl property, with the format `https://<username>@<app-name>.scm.azurewebsites.net/<app-name>.git`. Save this URL, as you'll need it later.
+
+1. Next we'll configure the MySQL flexible server database connection settings on the Web App.
+
+ The `config.php` file in the sample PHP application retrieves the database connection information (server name, database name, server username and password) from environment variables using the `getenv()` function. In App Service, to set environment variables as **Application Settings** (*appsettings*), run the following command:
+
+ ```azurecli-interactive
+ az webapp config appsettings set \
+ --name <your-app-name> \
+ --resource-group rg-php-demo \
+ --settings DB_HOST="<your-server-name>.mysql.database.azure.com" \
+ DB_DATABASE="sampledb" \
+ DB_USERNAME="<your-mysql-admin-username>" \
+ DB_PASSWORD="<your-mysql-admin-password>" \
+ MYSQL_SSL="true"
+ ```
+
+ Alternatively, you can use Service Connector to establish a connection between the App Service app and the MySQL flexible server. For more details, see [Integrate Azure Database for MySQL with Service Connector](../../service-connector/how-to-integrate-mysql.md).
+
+## Deploy your application using Local Git
+
+Now, we'll deploy the sample PHP application to Azure App Service using the Local Git deployment option.
+
+1. Since you're deploying the main branch, you need to set the default deployment branch for your App Service app to main. To set the DEPLOYMENT_BRANCH under **Application Settings**, run the following command:
+
+ ```azurecli-interactive
+ az webapp config appsettings set \
+ --name <your-app-name> \
+ --resource-group rg-php-demo \
+ --settings DEPLOYMENT_BRANCH='main'
+ ```
+
+1. Verify that you are in the application repository's root directory.
+
+1. To add an Azure remote to your local Git repository, run the following command. Replace `<deploymentLocalGitUrl>` with the URL of the Git remote that you saved in the **Create an App Service web app** step.
+
+ ```azurecli-interactive
+ git remote add azure <deploymentLocalGitUrl>
+ ```
+
+1. To deploy your app by performing a `git push` to the Azure remote, run the following command. When Git Credential Manager prompts you for credentials, enter the deployment credentials that you created in **Configure a deployment user** step.
+
+ ```azurecli-interactive
+ git push azure main
+ ```
+
+The deployment may take a few minutes to succeed.
+
+## Test your application
+
+Finally, test the application by browsing to `https://<app-name>.azurewebsites.net`, and then add, view, update or delete items from the product catalog.
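
You can also open the site directly from the Azure CLI, as a convenience (not part of the original steps):

```azurecli-interactive
az webapp browse --resource-group rg-php-demo --name <your-app-name>
```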
++
+Congratulations! You have successfully deployed a sample PHP application to Azure App Service and integrated it with Azure Database for MySQL - Flexible Server on the back end.
+
+## Update and redeploy the app
+
+To update the Azure app, make the necessary code changes, commit all the changes in Git, and then push the code changes to Azure.
+
+```bash
+git add .
+git commit -m "Update Azure app"
+git push azure main
+```
+
+Once the `git push` is complete, navigate to or refresh the Azure app to test the new functionality.
+
+## Clean up resources
+
+In this tutorial, you created all the Azure resources in a resource group. If you don't expect to need these resources in the future, delete the resource group by running the following command in the Cloud Shell:
+
+```azurecli-interactive
+az group delete --name rg-php-demo
+```
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [How to manage your resources in Azure portal](../../azure-resource-manager/management/manage-resources-portal.md)
+
+> [!div class="nextstepaction"]
+> [How to manage your server](how-to-manage-server-cli.md)
+
mysql Tutorial Webapp Server Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/tutorial-webapp-server-vnet.md
az webapp config set --name mywebapp --resource-group myresourcesourcegroup --ge
## App Service Web app and MySQL flexible server in different virtual networks
-If you have created the App Service app and the MySQL flexible server in different virtual networks (VNets), you will need to use one of the following methods to establish a seamless connection:
+If you've created the App Service app and the MySQL flexible server in different virtual networks (VNets), you'll need to complete the following two steps to establish a seamless connection:
- **Connect the two VNets using VNet peering** (local or global). See [Connect virtual networks with virtual network peering](../../virtual-network/tutorial-connect-virtual-networks-cli.md) guide. - **Link MySQL flexible server's Private DNS zone to the web app's VNet using virtual network links.** If you use the Azure portal or the Azure CLI to create MySQL flexible servers in a VNet, a new private DNS zone is auto-provisioned in your subscription using the server name provided. Navigate to the flexible server's private DNS zone and follow the [How to link the private DNS zone to a virtual network](../../dns/private-dns-getstarted-portal.md#link-the-virtual-network) guide to set up a virtual network link.
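
For reference, here's a minimal Azure CLI sketch of the first step (peering one VNet to the other). The resource group and VNet names are placeholders, and you'd create a matching peering in the opposite direction (or use the portal) so that both sides are linked:

```azurecli
# Sketch: peer the web app's VNet with the MySQL flexible server's VNet (one direction shown).
az network vnet peering create \
    --resource-group <webapp-vnet-resource-group> \
    --vnet-name <webapp-vnet-name> \
    --name webapp-to-mysql \
    --remote-vnet <mysql-vnet-resource-id> \
    --allow-vnet-access
```
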
mysql Tutorial Wordpress App Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/tutorial-wordpress-app-service.md
+
+ Title: 'Tutorial: Create a WordPress site on Azure App Service integrating with Azure Database for MySQL - Flexible Server'
+description: Create your first and fully managed WordPress site on Azure App Service and integrate with Azure Database for MySQL - Flexible Server in minutes.
+++++
+ms.devlang: wordpress
Last updated : 8/11/2022+++
+# Tutorial: Create a WordPress site on Azure App Service integrating with Azure Database for MySQL - Flexible Server
+
+[WordPress](https://www.wordpress.org) is an open-source content management system (CMS) that can be used to create websites, blogs, and other applications. Over 40% of the web uses WordPress, from blogs to major news websites.
+
+In this tutorial, you'll learn how to create and deploy your first [WordPress](https://www.wordpress.org) site to [Azure App Service on Linux](../../app-service/overview.md#app-service-on-linux), integrated with Azure Database for MySQL - Flexible Server on the back end. You'll use the [WordPress on Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/WordPress.WordPress?tab=Overview) offering to set up your site along with the database integration within minutes.
+
+## Prerequisites
+
+- An Azure subscription [!INCLUDE [flexible-server-free-trial-note](../includes/flexible-server-free-trial-note.md)]
++
+## Create WordPress site using Azure portal
+
+1. Browse to [https://ms.portal.azure.com/#create/WordPress.WordPress](https://ms.portal.azure.com/#create/WordPress.WordPress), or search for "WordPress" in the Azure Marketplace.
+
+ :::image type="content" source="./media/tutorial-wordpress-app-service/01-portal-create-wordpress-on-app-service.png?text=WordPress from Azure Marketplace" alt-text="Screenshot of Create a WordPress site.":::
+
+1. In the **Basics** tab, under **Project details**, make sure the correct subscription is selected, and then select **Create new** to create a new resource group. Type `myResourceGroup` for the name, and select a **Region** you want to serve your app from.
+
+ :::image type="content" source="./media/tutorial-wordpress-app-service/04-wordpress-basics-project-details.png?text=Azure portal WordPress Project Details" alt-text="Screenshot of WordPress project details.":::
+
+1. Under **Instance details**, type a globally unique name for your web app and choose **Linux** for **Operating System**. For the purposes of this tutorial, select **Basic** for **Hosting plan**.
+
+ :::image type="content" source="./media/tutorial-wordpress-app-service/05-wordpress-basics-instance-details.png?text=WordPress basics instance details" alt-text="Screenshot of WordPress instance details.":::
+
+   For the app and database SKUs used by each hosting plan, see the following table.
+
+   | **Hosting Plan** | **Web App** | **Database (MySQL Flexible Server)** |
+   |---|---|---|
+   |Basic (Hobby or Research purposes) | B1 (1 vCore, 1.75 GB RAM, 10 GB Storage) | Burstable, B1ms (1 vCore, 2 GB RAM, 32 GB Storage, 400 IOPS) |
+   |Standard (General Purpose production apps)| P1V2 (1 vCore, 3.5 GB RAM, 250 GB Storage)| General Purpose, D2ds_v4 (2 vCores, 8 GB RAM, 128 GB Storage, 700 IOPS)|
+   |Premium (Heavy Workload production apps) | P1V3 (2 vCores, 8 GB RAM, 250 GB Storage) | Business Critical, Standard_E4ds_v4 (2 vCores, 16 GB RAM, 256 GB Storage, 1100 IOPS) |
+
+ For pricing, visit [App Service pricing](https://azure.microsoft.com/pricing/details/app-service/linux/) and [Azure Database for MySQL pricing](https://azure.microsoft.com/pricing/details/mysql/flexible-server/).
+
+1. <a name="wordpress-settings"></a>Under **WordPress Settings**, type an **Admin Email**, **Admin Username**, and **Admin Password**. The **Admin Email** here is used for WordPress administrative sign-in only.
+
+ :::image type="content" source="./media/tutorial-wordpress-app-service/06-wordpress-basics-wordpress-settings.png?text=Azure portal WordPress settings" alt-text="Screenshot of WordPress settings.":::
+
+1. Select the **Advanced** tab. Under **Additional Settings** choose your preferred **Site Language** and **Content Distribution**. If you're unfamiliar with a [Content Delivery Network](../../cdn/cdn-overview.md) or [Blob Storage](../../storage/blobs/storage-blobs-overview.md), select **Disabled**. For more details on the Content Distribution options, see [WordPress on App Service](https://azure.github.io/AppService/2022/02/23/WordPress-on-App-Service-Public-Preview.html).
+
+ :::image type="content" source="./media/tutorial-wordpress-app-service/08-wordpress-advanced-settings.png" alt-text="Screenshot of WordPress Advanced Settings.":::
+
+1. Select the **Review + create** tab. After validation runs, select the **Create** button at the bottom of the page to create the WordPress site.
+
+ :::image type="content" source="./media/tutorial-wordpress-app-service/09-wordpress-create.png?text=WordPress create button" alt-text="Screenshot of WordPress create button.":::
+
+1. Browse to your site URL and verify the app is running properly. The site may take a few minutes to load. If you receive an error, allow a few more minutes then refresh the browser.
+
+ :::image type="content" source="./media/tutorial-wordpress-app-service/wordpress-sample-site.png?text=WordPress sample site" alt-text="Screenshot of WordPress site.":::
+
+1. To access the WordPress Admin page, browse to `/wp-admin` and use the credentials you created in the [WordPress settings step](#wordpress-settings).
+
+ :::image type="content" source="./media/tutorial-wordpress-app-service/wordpress-admin-login.png?text=WordPress admin login" alt-text="Screenshot of WordPress admin login.":::
+
+> [!NOTE]
+> - [After November 28, 2022, PHP will only be supported on App Service on Linux](https://github.com/Azure/app-service-linux-docs/blob/master/Runtime_Support/php_support.md#end-of-life-for-php-74).
+> - The WordPress installation comes with pre-installed plugins for performance improvements: [W3TC](https://wordpress.org/plugins/w3-total-cache/) for caching and [Smush](https://wordpress.org/plugins/wp-smushit/) for image compression.
+>
+> If you have feedback to improve this WordPress offering on App Service, submit your ideas at [Web Apps Community](https://feedback.azure.com/d365community/forum/b09330d1-c625-ec11-b6e6-000d3a4f0f1c).
++
+## MySQL flexible server username and password
+
+- The database username and password of the MySQL Flexible Server are generated automatically. To retrieve these values after the deployment, go to the **Application Settings** section of the **Configuration** page in Azure App Service, or list them with the Azure CLI as shown after this list. The WordPress configuration is modified to use these [Application Settings](../../app-service/reference-app-settings.md#wordpress) to connect to the MySQL database.
+
+- To change the MySQL database password, see [Reset admin password](how-to-manage-server-portal.md#reset-admin-password). Whenever the MySQL database credentials are changed, the [Application Settings](../../app-service/reference-app-settings.md#wordpress) also need to be updated. The [Application Settings for MySQL database](../../app-service/reference-app-settings.md#wordpress) begin with the **`DATABASE_`** prefix. For more information on updating MySQL passwords, see [WordPress on App Service](https://azure.github.io/AppService/2022/02/23/WordPress-on-App-Service-Public-Preview.html#known-limitations).
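+
+The following Azure CLI sketch shows one way to list those settings without opening the portal; the app name is a placeholder, and the resource group assumes the `myResourceGroup` name used earlier in this tutorial:
+
+```azurecli
+# Sketch: list the auto-generated DATABASE_* application settings (including their values).
+az webapp config appsettings list \
+    --resource-group myResourceGroup \
+    --name <your-wordpress-app-name> \
+    --query "[?starts_with(name, 'DATABASE_')]" \
+    --output table
+```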
+
+## Manage the MySQL database
+
+The MySQL Flexible Server is created behind a private [Virtual Network](../../virtual-network/virtual-networks-overview.md) and can't be accessed directly. To access and manage the database, use phpMyAdmin, which is deployed with the WordPress site.
+- Navigate to the URL `https://<sitename>.azurewebsites.net/phpmyadmin`.
+- Log in with the flexible server's username and password.
+
+## Change WordPress admin password
+
+The [Application Settings](../../app-service/reference-app-settings.md#wordpress) for WordPress admin credentials are only for deployment purposes. Modifying these values has no effect on the WordPress installation. To change the WordPress admin password, see [resetting your password](https://wordpress.org/support/article/resetting-your-password/#to-change-your-password). The [Application Settings for WordPress admin credentials](../../app-service/reference-app-settings.md#wordpress) begin with the **`WORDPRESS_ADMIN_`** prefix. For more information on updating the WordPress admin password, see [WordPress on App Service](https://azure.github.io/AppService/2022/02/23/WordPress-on-App-Service-Public-Preview.html#known-limitations).
+
+## Clean up resources
+
+When you no longer need them, you can delete the resource group, the App Service app, and all related resources.
+
+1. From your App Service *overview* page, select the *resource group* you created in the [Create WordPress site using Azure portal](#create-wordpress-site-using-azure-portal) step.
+
+ :::image type="content" source="./media/tutorial-wordpress-app-service/resource-group.png" alt-text="Resource group in App Service overview page.":::
+
+1. From the *resource group* page, select **Delete resource group**. Confirm the name of the resource group to finish deleting the resources.
+
+ :::image type="content" source="./media/tutorial-wordpress-app-service/delete-resource-group.png" alt-text="Delete resource group.":::
+
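+If you prefer the Azure CLI, the following sketch deletes the same resource group in one command; it assumes you kept the `myResourceGroup` name used earlier in this tutorial:
+
+```azurecli
+# Sketch: delete the resource group and all resources it contains.
+az group delete --name myResourceGroup
+```
+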
+## Next steps
+
+Congratulations, you've successfully completed this tutorial!
+
+> [!div class="nextstepaction"]
+> [Tutorial: Map a custom domain name](../../app-service/app-service-web-tutorial-custom-domain.md)
+
+> [!div class="nextstepaction"]
+> [Tutorial: PHP app with MySQL](tutorial-php-database-app.md)
+
+> [!div class="nextstepaction"]
+> [How to manage your server](how-to-manage-server-cli.md)
mysql Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/policy-reference.md
Previously updated : 08/08/2022 Last updated : 08/16/2022 # Azure Policy built-in definitions for Azure Database for MySQL
networking Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/networking/policy-reference.md
Title: Built-in policy definitions for Azure networking services description: Lists Azure Policy built-in policy definitions for Azure networking services. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/08/2022 Last updated : 08/16/2022
postgresql Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/policy-reference.md
Previously updated : 08/08/2022 Last updated : 08/16/2022 # Azure Policy built-in definitions for Azure Database for PostgreSQL
purview Catalog Private Link Faqs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/catalog-private-link-faqs.md
Check out the answers to the following common questions.
Use a Managed IR if: - Your Microsoft Purview account is deployed in one of the [supported regions for Managed VNets](catalog-managed-vnet.md#supported-regions).-- You are plannig to scan any of the [supported data sources](catalog-managed-vnet.md#supported-data-sources) by Managed IR.
+- You are planning to scan any of the [supported data sources](catalog-managed-vnet.md#supported-data-sources) by Managed IR.
Use a self-hosted integration runtime if: - You are planning to scan any Azure IaaS, SaaS, or on-premises data sources.
purview How To Data Owner Policies Arc Sql Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-data-owner-policies-arc-sql-server.md
Register each data source with Microsoft Purview to later define access policies
1. Enable Data Use Management. Data Use Management needs certain permissions and can affect the security of your data, as it delegates to certain Microsoft Purview roles to manage access to the data sources. **Go through the secure practices related to Data Use Management in this guide**: [How to enable Data Use Management](./how-to-enable-data-use-management.md)
-1. Enter the **Application ID** from the App Registration related to this Arc-enabled SQL server.
+1. Upon enabling Data Use Management, Microsoft Purview will automatically capture the **Application ID** of the App Registration related to this Arc-enabled SQL server. If the association between the Arc-enabled SQL server and the App Registration changes in the future, return to this screen and select the refresh button next to the **Application ID** to refresh it.
1. Select **Register** or **Apply** at the bottom
role-based-access-control Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/policy-reference.md
Title: Built-in policy definitions for Azure RBAC description: Lists Azure Policy built-in policy definitions for Azure RBAC. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/08/2022 Last updated : 08/16/2022
search Cognitive Search Aml Skill https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/cognitive-search-aml-skill.md
Parameters are case-sensitive. Which parameters you choose to use depends on wha
| Parameter name | Description | |--|-|
-| `uri` | (Required for [no authentication or key authentication](#WhatSkillParametersToUse)) The [scoring URI of the AML service](../machine-learning/how-to-consume-web-service.md) to which the _JSON_ payload will be sent. Only the **https** URI scheme is allowed. |
-| `key` | (Required for [key authentication](#WhatSkillParametersToUse)) The [key for the AML service](../machine-learning/how-to-consume-web-service.md#authentication-with-keys). |
+| `uri` | (Required for [no authentication or key authentication](#WhatSkillParametersToUse)) The [scoring URI of the AML service](../machine-learning/v1/how-to-consume-web-service.md) to which the _JSON_ payload will be sent. Only the **https** URI scheme is allowed. |
+| `key` | (Required for [key authentication](#WhatSkillParametersToUse)) The [key for the AML service](../machine-learning/v1/how-to-consume-web-service.md#authentication-with-keys). |
| `resourceId` | (Required for [token authentication](#WhatSkillParametersToUse)). The Azure Resource Manager resource ID of the AML service. It should be in the format subscriptions/{guid}/resourceGroups/{resource-group-name}/Microsoft.MachineLearningServices/workspaces/{workspace-name}/services/{service_name}. | | `region` | (Optional for [token authentication](#WhatSkillParametersToUse)). The [region](https://azure.microsoft.com/global-infrastructure/regions/) the AML service is deployed in. | | `timeout` | (Optional) When specified, indicates the timeout for the http client making the API call. It must be formatted as an XSD "dayTimeDuration" value (a restricted subset of an [ISO 8601 duration](https://www.w3.org/TR/xmlschema11-2/#dayTimeDuration) value). For example, `PT60S` for 60 seconds. If not set, a default value of 30 seconds is chosen. The timeout can be set to a maximum of 230 seconds and a minimum of 1 second. |
Parameters are case-sensitive. Which parameters you choose to use depends on wha
Which AML skill parameters are required depends on what authentication your AML service uses, if any. AML services provide three authentication options:
-* [Key-Based Authentication](../machine-learning/how-to-authenticate-web-service.md#key-based-authentication). A static key is provided to authenticate scoring requests from AML skills
+* [Key-Based Authentication](../machine-learning/v1/how-to-authenticate-web-service.md#key-based-authentication). A static key is provided to authenticate scoring requests from AML skills
* Use the _uri_ and _key_ parameters
-* [Token-Based Authentication](../machine-learning/how-to-authenticate-web-service.md#token-based-authentication). The AML service is [deployed using token based authentication](../machine-learning/how-to-authenticate-web-service.md#token-based-authentication). The Azure Cognitive Search service's [managed identity](../active-directory/managed-identities-azure-resources/overview.md) is granted the [Reader Role](../machine-learning/how-to-assign-roles.md) in the AML service's workspace. The AML skill then uses the Azure Cognitive Search service's managed identity to authenticate against the AML service, with no static keys required.
+* [Token-Based Authentication](../machine-learning/v1/how-to-authenticate-web-service.md#token-based-authentication). The AML service is [deployed using token based authentication](../machine-learning/v1/how-to-authenticate-web-service.md#token-based-authentication). The Azure Cognitive Search service's [managed identity](../active-directory/managed-identities-azure-resources/overview.md) is granted the [Reader Role](../machine-learning/how-to-assign-roles.md) in the AML service's workspace. The AML skill then uses the Azure Cognitive Search service's managed identity to authenticate against the AML service, with no static keys required.
* Use the _resourceId_ parameter. * If the Azure Cognitive Search service is in a different region from the AML workspace, use the _region_ parameter to set the region the AML service was deployed in * No Authentication. No authentication is required to use the AML service
For cases when the AML service is unavailable or returns an HTTP error, a friend
## See also + [How to define a skillset](cognitive-search-defining-skillset.md)
-+ [AML Service troubleshooting](../machine-learning/how-to-troubleshoot-deployment.md)
++ [AML Service troubleshooting](../machine-learning/v1/how-to-troubleshoot-deployment.md)
search Cognitive Search Tutorial Aml Designer Custom Skill https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/cognitive-search-tutorial-aml-designer-custom-skill.md
Follow the [Regression - Automobile Price Prediction (Advanced)](https://github.
## Register model and download assets
-Once you have a model trained, [register the trained model](../machine-learning/how-to-deploy-model-designer.md) and follow the steps to download all the files in the `trained_model_outputs` folder or download only the `score.py` and `conda_env.yml` files from the models artifacts page. You will edit the scoring script before the model is deployed as a real-time inferencing endpoint.
+Once you have a model trained, [register the trained model](../machine-learning/v1/how-to-deploy-model-designer.md) and follow the steps to download all the files in the `trained_model_outputs` folder or download only the `score.py` and `conda_env.yml` files from the models artifacts page. You will edit the scoring script before the model is deployed as a real-time inferencing endpoint.
## Edit the scoring script for use with Cognitive Search
search Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/policy-reference.md
Title: Built-in policy definitions for Azure Cognitive Search description: Lists Azure Policy built-in policy definitions for Azure Cognitive Search. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/08/2022 Last updated : 08/16/2022
search Resource Partners Knowledge Mining https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/resource-partners-knowledge-mining.md
Previously updated : 09/30/2021 Last updated : 08/15/2022 # Partner spotlight
search Search Capacity Planning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-capacity-planning.md
Previously updated : 06/18/2021 Last updated : 08/15/2022 # Estimate and manage capacity of a search service Before you [create a search service](search-create-service-portal.md) and lock in a specific [pricing tier](search-sku-tier.md), take a few minutes to understand how capacity works and how you might adjust replicas and partitions to accommodate workload fluctuation.
-In Azure Cognitive Search, capacity is based on *replicas* and *partitions*. Replicas are copies of the search engine.
-Partitions are units of storage. Each new search service starts with one each, but you can scale up each resource independently to accommodate fluctuating workloads. Adding either resource is [billable](search-sku-manage-costs.md#billable-events).
+In Azure Cognitive Search, capacity is based on *replicas* and *partitions* that can be scaled to your workload. Replicas are copies of the search engine.
+Partitions are units of storage. Each new search service starts with one each, but you can adjust each unit independently to accommodate fluctuating workloads. Adding either unit is [billable](search-sku-manage-costs.md#billable-events).
The physical characteristics of replicas and partitions, such as processing speed and disk IO, vary by [service tier](search-sku-tier.md). If you provisioned on Standard, replicas and partitions will be faster and larger than those of Basic.
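
As a concrete illustration of adjusting those units, the following Azure CLI sketch scales an existing service; the service and resource group names are placeholders, and the counts you can set depend on your tier:

```azurecli
# Sketch: scale a search service to 3 replicas and 2 partitions.
az search service update \
    --name <search-service-name> \
    --resource-group <resource-group-name> \
    --replica-count 3 \
    --partition-count 2
```
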
search Search Howto Dotnet Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-howto-dotnet-sdk.md
ms.devlang: csharp Previously updated : 04/26/2021 Last updated : 08/15/2022 # How to use Azure.Search.Documents in a C# .NET Application
-This article explains how to create and manage search objects using C# and the [**Azure.Search.Documents**](/dotnet/api/overview/azure/search) (version 11) client library.
+This article explains how to create and manage search objects using C# and the [**Azure.Search.Documents**](/dotnet/api/overview/azure/search) (version 11) client library in the Azure SDK for .NET.
## About version 11
-Azure SDK for .NET includes a [**Azure.Search.Documents**](/dotnet/api/overview/azure/search) client library from the Azure SDK team that is functionally equivalent to the previous client library, [Microsoft.Azure.Search](/dotnet/api/overview/azure/search/client10). Version 11 is more consistent in terms of Azure programmability. Some examples include [`AzureKeyCredential`](/dotnet/api/azure.azurekeycredential) key authentication, and [System.Text.Json.Serialization](/dotnet/api/system.text.json.serialization) for JSON serialization.
+Azure SDK for .NET includes an [**Azure.Search.Documents**](/dotnet/api/overview/azure/search) client library from the Azure SDK team that is functionally equivalent to the previous client library, [Microsoft.Azure.Search](/dotnet/api/overview/azure/search/client10). Version 11 is more consistent in terms of Azure programmability. Some examples include [`AzureKeyCredential`](/dotnet/api/azure.azurekeycredential) key authentication, and [System.Text.Json.Serialization](/dotnet/api/system.text.json.serialization) for JSON serialization.
As with previous versions, you can use this library to:
If you have been using the previous version of the .NET SDK and you'd like to up
+ Visual Studio 2019 or later.
-+ Your own Azure Cognitive Search service. In order to use the SDK, you will need the name of your service and one or more API keys. [Create a service in the portal](search-create-service-portal.md) if you don't have one.
++ Your own Azure Cognitive Search service. In order to use the SDK, you'll need the name of your service and one or more API keys. [Create a service in the portal](search-create-service-portal.md) if you don't have one. + Download the [Azure.Search.Documents package](https://www.nuget.org/packages/Azure.Search.Documents) using **Tools** > **NuGet Package Manager** > **Manage NuGet Packages for Solution** in Visual Studio. Search for the package name `Azure.Search.Documents`.
private static SearchIndexClient CreateSearchIndexClient(IConfigurationRoot conf
} ```
-The next statement creates the search client used to load documents or run queries. `SearchClient` requires an index. You will need an admin API key to load documents, but you can use a query API key to run queries.
+The next statement creates the search client used to load documents or run queries. `SearchClient` requires an index. You'll need an admin API key to load documents, but you can use a query API key to run queries.
```csharp string indexName = configuration["SearchIndexName"];
private static void UploadDocuments(SearchClient searchClient)
Thread.Sleep(2000); ```
-This method has four parts. The first creates an array of 3 `Hotel` objects each with 3 `Room` objects that will serve as our input data to upload to the index. This data is hard-coded for simplicity. In an actual application, data will likely come from an external data source such as an SQL database.
+This method has four parts. The first creates an array of three `Hotel` objects each with three `Room` objects that will serve as our input data to upload to the index. This data is hard-coded for simplicity. In an actual application, data will likely come from an external data source such as an SQL database.
The second part creates an [`IndexDocumentsBatch`](/dotnet/api/azure.search.documents.models.indexdocumentsbatch) containing the documents. You specify the operation you want to apply to the batch at the time you create it, in this case by calling [`IndexDocumentsAction.Upload`](/dotnet/api/azure.search.documents.models.indexdocumentsaction.upload). The batch is then uploaded to the Azure Cognitive Search index by the [`IndexDocuments`](/dotnet/api/azure.search.documents.searchclient.indexdocuments) method.
The second part creates an [`IndexDocumentsBatch`](/dotnet/api/azure.search.docu
> In this example, we are just uploading documents. If you wanted to merge changes into existing documents or delete documents, you could create batches by calling `IndexDocumentsAction.Merge`, `IndexDocumentsAction.MergeOrUpload`, or `IndexDocumentsAction.Delete` instead. You can also mix different operations in a single batch by calling `IndexBatch.New`, which takes a collection of `IndexDocumentsAction` objects, each of which tells Azure Cognitive Search to perform a particular operation on a document. You can create each `IndexDocumentsAction` with its own operation by calling the corresponding method such as `IndexDocumentsAction.Merge`, `IndexAction.Upload`, and so on. >
-The third part of this method is a catch block that handles an important error case for indexing. If your search service fails to index some of the documents in the batch, a `RequestFailedException` is thrown. An exception can happen if you are indexing documents while your service is under heavy load. **We strongly recommend explicitly handling this case in your code.** You can delay and then retry indexing the documents that failed, or you can log and continue like the sample does, or you can do something else depending on your application's data consistency requirements. An alternative is to use [SearchIndexingBufferedSender](/dotnet/api/azure.search.documents.searchindexingbufferedsender-1) for intelligent batching, automatic flushing, and retries for failed indexing actions. See [this example](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/search/Azure.Search.Documents/samples/Sample05_IndexingDocuments.md#searchindexingbufferedsender) for more context.
+The third part of this method is a catch block that handles an important error case for indexing. If your search service fails to index some of the documents in the batch, a `RequestFailedException` is thrown. An exception can happen if you're indexing documents while your service is under heavy load. **We strongly recommend explicitly handling this case in your code.** You can delay and then retry indexing the documents that failed, or you can log and continue like the sample does, or you can do something else depending on your application's data consistency requirements. An alternative is to use [SearchIndexingBufferedSender](/dotnet/api/azure.search.documents.searchindexingbufferedsender-1) for intelligent batching, automatic flushing, and retries for failed indexing actions. See [this example](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/search/Azure.Search.Documents/samples/Sample05_IndexingDocuments.md#searchindexingbufferedsender) for more context.
Finally, the `UploadDocuments` method delays for two seconds. Indexing happens asynchronously in your search service, so the sample application needs to wait a short time to ensure that the documents are available for searching. Delays like this are typically only necessary in demos, tests, and sample applications.
Second, define a method that sends a query request.
Each time the method executes a query, it creates a new [`SearchOptions`](/dotnet/api/azure.search.documents.searchoptions) object. This object is used to specify additional options for the query such as sorting, filtering, paging, and faceting. In this method, we're setting the `Filter`, `Select`, and `OrderBy` property for different queries. For more information about the search query expression syntax, [Simple query syntax](/rest/api/searchservice/Simple-query-syntax-in-Azure-Search).
-The next step is to actually execute the search query. Running the search is done using the `SearchClient.Search` method. For each query, pass the search text to use as a string (or `"*"` if there is no search text), plus the search options created earlier. We also specify `Hotel` as the type parameter for `SearchClient.Search`, which tells the SDK to deserialize documents in the search results into objects of type `Hotel`.
+The next step is query execution. Running the search is done using the `SearchClient.Search` method. For each query, pass the search text to use as a string (or `"*"` if there is no search text), plus the search options created earlier. We also specify `Hotel` as the type parameter for `SearchClient.Search`, which tells the SDK to deserialize documents in the search results into objects of type `Hotel`.
```csharp private static void RunQueries(SearchClient searchClient)
private static void RunQueries(SearchClient searchClient)
options = new SearchOptions(); options.SearchFields.Add("HotelName");
- //Adding details to select, because "Location" is not supported yet when deserialize search result to "Hotel"
+ //Adding details to select, because "Location" isn't supported yet when deserializing search result to "Hotel"
options.Select.Add("HotelId"); options.Select.Add("HotelName"); options.Select.Add("Description");
search Search Query Lucene Examples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-query-lucene-examples.md
Previously updated : 03/03/2021 Last updated : 08/15/2022 # Use the "full" Lucene search syntax (advanced queries in Azure Cognitive Search)
When constructing queries for Azure Cognitive Search, you can replace the defaul
The Lucene parser supports complex query formats, such as field-scoped queries, fuzzy search, infix and suffix wildcard search, proximity search, term boosting, and regular expression search. The additional power comes with additional processing requirements so you should expect a slightly longer execution time. In this article, you can step through examples demonstrating query operations based on full syntax.
-> [!Note]
+> [!NOTE]
> Many of the specialized query constructions enabled through the full Lucene query syntax are not [text-analyzed](search-lucene-query-architecture.md#stage-2-lexical-analysis), which can be surprising if you expect stemming or lemmatization. Lexical analysis is only performed on complete terms (a term query or phrase query). Query types with incomplete terms (prefix query, wildcard query, regex query, fuzzy query) are added directly to the query tree, bypassing the analysis stage. The only transformation performed on partial query terms is lowercasing. >
search Search Query Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-query-overview.md
Previously updated : 03/03/2021 Last updated : 08/15/2022 # Querying in Azure Cognitive Search Azure Cognitive Search offers a rich query language to support a broad range of scenarios, from free text search, to highly-specified query patterns. This article describes query requests, and what kinds of queries you can create.
-In Cognitive Search, a query is a full specification of a round-trip **`search`** operation, with parameters that both inform query execution and shape the response coming back. Parameters and parsers determine the type of query request. The following query example is a free text query with a boolean operator, using the [Search Documents (REST API)](/rest/api/searchservice/search-documents), targeting the [hotels-sample-index](search-get-started-portal.md) documents collection.
+In Cognitive Search, a query is a full specification of a round-trip **`search`** operation, with parameters that both inform query execution and shape the response coming back. The following query example calls the [Search Documents (REST API)](/rest/api/searchservice/search-documents). It's a parameterized, free text query with a boolean operator, targeting the [hotels-sample-index](search-get-started-portal.md) documents collection.
```http POST https://[service name].search.windows.net/indexes/hotels-sample-index/docs/search?api-version=2020-06-30
POST https://[service name].search.windows.net/indexes/hotels-sample-index/docs/
Parameters used during query execution include:
-+ **`queryType`** sets the parser, which is either the [default simple query parser](search-query-simple-examples.md) (optimal for full text search), or the [full Lucene query parser](search-query-lucene-examples.md) used for advanced query constructs like regular expressions, proximity search, fuzzy and wildcard search, to name a few.
++ **`queryType`** sets the parser, which is usually the [default simple query parser](search-query-simple-examples.md) (optimal for full text search). You could also set it to the [full Lucene query parser](search-query-lucene-examples.md) for advanced query constructs like regular expressions, proximity search, fuzzy and wildcard search. Or, you could set it to [semantic search](semantic-search-overview.md) if you want advanced semantic modeling on the query response.
-+ **`searchMode`** specifies whether matches are based on "all" criteria or "any" criteria in the expression. The default is any.
++ **`searchMode`** specifies whether matches are based on "all" criteria or "any" criteria in the expression. The default is "any". + **`search`** provides the match criteria, usually whole terms or phrases, with or without operators. Any field that is attributed as *searchable* in the index schema is a candidate for this parameter.
search Search Query Simple Examples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-query-simple-examples.md
Previously updated : 03/03/2021 Last updated : 08/15/2022 # Use the "simple" search syntax in Azure Cognitive Search
search Search Security Api Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-security-api-keys.md
Previously updated : 01/26/2022 Last updated : 08/15/2022 # Use API keys for Azure Cognitive Search authentication
-Cognitive Search uses key-based authentication as its primary authentication methodology. For inbound requests to a search service endpoint, such as requests that create or query an index, API keys are the only generally available authentication option you have. A few outbound request scenarios, particularly those involving indexers, can use Azure Active Directory identities and roles.
+Cognitive Search offers key-based authentication as its primary authentication methodology. For inbound requests to a search service endpoint, such as requests that create or query an index, API keys are the only generally available authentication option you have. A few outbound request scenarios, particularly those involving indexers, can use Azure Active Directory identities and roles.
> [!NOTE]
-> [Authorization for data plane operations](search-security-rbac.md) using Azure role-based access control (RBAC) is now in preview. You can use this preview capability to supplement or replace API keys on search index requests.
+> [Azure role-based access control (RBAC)](search-security-rbac.md) for inbound requests to a search endpoint is now in preview. You can use this preview capability to supplement or replace API keys on search index requests.
## Using API keys in search
sentinel Skill Up Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/skill-up-resources.md
If you want to _retain data_ for more than two years or _reduce the retention co
Want more in-depth information? View the ["Improving the breadth and coverage of threat hunting with ADX support, more entity types, and updated MITRE integration"](https://www.youtube.com/watch?v=5coYjlw2Qqs&ab_channel=MicrosoftSecurityCommunity) webinar.
-If you prefer another long-term retention solution, see [Export from Microsoft Sentinel / Log Analytics workspace to Azure Storage and Event Hubs](/cli/azure/monitor/log-analytics/workspace/data-export.md) or [Move logs to long-term storage by using Azure Logic Apps](../azure-monitor/logs/logs-export-logic-app.md). The advantage of using Logic Apps is that it can export historical data.
+If you prefer another long-term retention solution, see [Export from Microsoft Sentinel / Log Analytics workspace to Azure Storage and Event Hubs](/cli/azure/monitor/log-analytics/workspace/data-export) or [Move logs to long-term storage by using Azure Logic Apps](../azure-monitor/logs/logs-export-logic-app.md). The advantage of using Logic Apps is that it can export historical data.
Finally, you can set fine-grained retention periods by using [table-level retention settings](https://techcommunity.microsoft.com/t5/core-infrastructure-and-security/azure-log-analytics-data-retention-by-type-in-real-life/ba-p/1416287). For more information, see [Configure data retention and archive policies in Azure Monitor Logs (Preview)](../azure-monitor/logs/data-retention-archive.md).
service-bus-messaging Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/policy-reference.md
Title: Built-in policy definitions for Azure Service Bus Messaging description: Lists Azure Policy built-in policy definitions for Azure Service Bus Messaging. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/08/2022 Last updated : 08/16/2022
service-fabric Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/policy-reference.md
Previously updated : 08/08/2022 Last updated : 08/16/2022 # Azure Policy built-in definitions for Azure Service Fabric
site-recovery Azure To Azure Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-support-matrix.md
Authenticated Proxy | Not supported | If the VM is using an authenticated proxy
VPN site-to-site connection to on-premises<br/><br/>(with or without ExpressRoute)| Supported | Ensure that the UDRs and NSGs are configured in such a way that the Site Recovery traffic is not routed to on-premises. [Learn more](./azure-to-azure-about-networking.md) VNET to VNET connection | Supported | [Learn more](./azure-to-azure-about-networking.md) Virtual Network Service Endpoints | Supported | If you are restricting the virtual network access to storage accounts, ensure that the trusted Microsoft services are allowed access to the storage account.
-Accelerated networking | Supported | Accelerated networking must be enabled on source VM. [Learn more](azure-vm-disaster-recovery-with-accelerated-networking.md).
+Accelerated networking | Supported | Accelerated networking can be enabled on the recovery VM only if it's also enabled on the source VM. [Learn more](azure-vm-disaster-recovery-with-accelerated-networking.md).
Palo Alto Network Appliance | Not supported | With third-party appliances, there are often restrictions imposed by the provider inside the Virtual Machine. Azure Site Recovery needs agent, extensions, and outbound connectivity to be available. But the appliance does not let any outbound activity to be configured inside the Virtual Machine. IPv6 | Not supported | Mixed configurations that include both IPv4 and IPv6 are also not supported. Free up the subnet of the IPv6 range before any Site Recovery operation. Private link access to Site Recovery service | Supported | [Learn more](azure-to-azure-how-to-enable-replication-private-endpoints.md)
spring-apps How To Access App From Internet Virtual Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-access-app-from-internet-virtual-network.md
+
+ Title: Expose applications on Azure Spring Apps to the internet from a public network
+description: Describes how to expose applications on Azure Spring Apps to the internet from a public network.
++++ Last updated : 08/09/2022+
+ms.devlang: azurecli
++
+# Expose applications on Azure Spring Apps to the internet from a public network
+
+> [!NOTE]
+> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams.
+
+This article describes how to expose applications on Azure Spring Apps to the internet from a public network.
+
+You can expose applications to the internet with TLS Termination or end-to-end TLS using Application Gateway. These approaches are described in [Expose applications to the internet with TLS Termination at Application Gateway](./expose-apps-gateway-tls-termination.md) and [Expose applications with end-to-end TLS in a virtual network](./expose-apps-gateway-end-to-end-tls.md). These approaches work well, but Application Gateway can involve a complicated setup and extra expense.
+
+If you don't want to use Application Gateway for advanced operations, you can expose your applications to the internet with one click using the Azure portal or one command using the Azure CLI. The only extra expense is a standard public IP for one Azure Spring Apps service instance, regardless of how many apps you want to expose.
+
+## Prerequisites
+
+- An Azure Spring Apps service instance deployed in a virtual network and an app created in it. For more information, see [Deploy Azure Spring Apps in a virtual network](./how-to-deploy-in-azure-virtual-network.md).
+
+## Assign a public fully qualified domain name (FQDN) for your application in a VNet injection instance
++
+### [Azure portal](#tab/azure-portal)
+
+Use the following steps to assign a public FQDN for your application.
+
+1. Select the Azure Spring Apps service instance deployed in your virtual network, and then open the **Apps** tab in the menu on the left.
+
+1. Select the application to show the **Overview** page.
+
+1. Select **Assign Public Endpoint** to assign a public FQDN to your application. Assigning an FQDN can take a few minutes.
+
+ :::image type="content" source="media/how-to-access-app-from-internet-virtual-network/assign-public-endpoint.png" alt-text="Screenshot of Azure portal showing how to assign a public FQDN to your application." lightbox="media/how-to-access-app-from-internet-virtual-network/assign-public-endpoint.png":::
+
+The assigned public FQDN (labeled **URL**) is now available. It can only be accessed within the public network.
+
+### [Azure CLI](#tab/azure-CLI)
+
+Use the following command to assign a public endpoint to your app. Be sure to replace the placeholders with your actual values.
+
+```azurecli
+az spring app update \
+ --resource-group <resource-group-name> \
+ --name <app-name> \
+ --service <service-instance-name> \
+ --assign-public-endpoint true
+```
+++
+## Use a public URL to access your application from both inside and outside the virtual network
+
+You can use a public URL to access your application both inside and outside the virtual network. Follow the steps in [Access your application in a private network](./access-app-virtual-network.md) to bind the domain `.private.azuremicroservices.io` to the service runtime subnet's private IP address in your private DNS zone, while keeping **Assign Endpoint** in a disabled state. You can then access the app using the **public URL** from both inside and outside the virtual network.
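+
+For reference, the binding those steps describe comes down to an A record in the private DNS zone. The following Azure CLI sketch shows the general shape; the zone name assumes the default `private.azuremicroservices.io` zone, and the record name and IP address are placeholders you should take from that article:
+
+```azurecli
+# Sketch: add an A record that points the domain at the service runtime subnet's private IP.
+az network private-dns record-set a add-record \
+    --resource-group <private-dns-zone-resource-group> \
+    --zone-name private.azuremicroservices.io \
+    --record-set-name '*' \
+    --ipv4-address <service-runtime-subnet-private-ip>
+```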
+
+## Secure traffic to the public endpoint
+
+To ensure the security of your applications when you expose a public endpoint for them, secure the endpoint by filtering network traffic to your service with a network security group. For more information, see [Tutorial: Filter network traffic with a network security group using the Azure portal](../virtual-network/tutorial-filter-network-traffic.md). A network security group contains security rules that allow or deny inbound network traffic to, or outbound network traffic from, several types of Azure resources. For each rule, you can specify source and destination, port, and protocol.
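+
+As an illustration, the following Azure CLI sketch adds one such inbound rule; the network security group name, priority, and allowed source range are assumptions you'd replace with your own values:
+
+```azurecli
+# Sketch: allow inbound HTTPS traffic to the public endpoint from a specific address range only.
+az network nsg rule create \
+    --resource-group <resource-group-name> \
+    --nsg-name <network-security-group-name> \
+    --name allow-https-from-trusted-range \
+    --priority 200 \
+    --direction Inbound \
+    --access Allow \
+    --protocol Tcp \
+    --source-address-prefixes <allowed-ip-range> \
+    --destination-port-ranges 443
+```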
+
+> [!NOTE]
+> If you can't access your application in a VNet injection instance from the internet after you've assigned a public FQDN, check your network security group first to see whether you've allowed such inbound traffic.
+
+## Next steps
+
+- [Expose applications with end-to-end TLS in a virtual network](./expose-apps-gateway-end-to-end-tls.md)
+- [Troubleshooting Azure Spring Apps in virtual networks](./troubleshooting-vnet.md)
+- [Customer responsibilities for running Azure Spring Apps in VNET](./vnet-customer-responsibilities.md)
spring-apps How To Log Streaming https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-log-streaming.md
Title: Stream Azure Spring Apps app logs in real-time
-description: How to use log streaming to view application logs instantly
+ Title: Stream Azure Spring Apps application console logs in real time
+description: Describes how to use log streaming to view application logs in real time
Previously updated : 01/14/2019 Last updated : 08/10/2022
-# Stream Azure Spring Apps app logs in real-time
+# Stream Azure Spring Apps application console logs in real time
> [!NOTE] > Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams.
**This article applies to:** ✔️ Basic/Standard tier ✔️ Enterprise tier
-Azure Spring Apps enables log streaming in Azure CLI to get real-time application console logs for troubleshooting. You can also [Analyze logs and metrics with diagnostics settings](./diagnostic-services.md).
+This article describes how to enable log streaming in Azure CLI to get real-time application console logs for troubleshooting. You can also use diagnostics settings to analyze diagnostics data in Azure Spring Apps. For more information, see [Analyze logs and metrics with diagnostics settings](./diagnostic-services.md).
## Prerequisites
-* [Azure CLI](/cli/azure/install-azure-cli) with the Azure Spring Apps extension, minimum version 1.0.0. You can install the extension by using the following command: `az extension add --name spring`
-* An instance of **Azure Spring Apps** with a running application. For more information, see [Quickstart: Deploy your first application to Azure Spring Apps](./quickstart.md).
+- [Azure CLI](/cli/azure/install-azure-cli) with the Azure Spring Apps extension, minimum version 1.0.0. You can install the extension by using the following command: `az extension add --name spring`
+- An instance of Azure Spring Apps with a running application. For more information, see [Quickstart: Deploy your first application to Azure Spring Apps](./quickstart.md).
-## Use CLI to tail logs
+## Use Azure CLI to produce tail logs
-To avoid repeatedly specifying your resource group and service instance name, set your default resource group name and cluster name.
+This section provides examples of using Azure CLI to produce tail logs. To avoid repeatedly specifying your resource group and service instance name, use the following commands to set your default resource group name and cluster name:
```azurecli
-az config set defaults.group=<service group name>
-az config set defaults.spring-cloud=<service instance name>
+az config set defaults.group=<service-group-name>
+az config set defaults.spring-cloud=<service-instance-name>
```
-In following examples, the resource group and service name will be omitted in the commands.
+The resource group and service name are omitted in the following examples.
-### Tail log for app with single instance
+### View the tail log for an app with a single instance
-If an app named auth-service has only one instance, you can view the log of the app instance with the following command:
+If an app named `auth-service` has only one instance, you can view the log of the app instance with the following command:
```azurecli
-az spring app logs --name <application name>
+az spring app logs --name <application-name>
```
-This will return logs similar to the following examples, where `auth-service` is the application name.
+This command returns logs similar to the following examples, where `auth-service` is the application name.
```output ...
This will return logs similar to the following examples, where `auth-service` is
... ```
-### Tail log for app with multiple instances
+### View the tail log for an app with multiple instances
If multiple instances exist for the app named `auth-service`, you can view the instance log by using the `-i/--instance` option.
-First, you can get the app instance names with following command.
+First, run the following command to get the app instance names:
```azurecli az spring app show --name auth-service --query properties.activeDeployment.properties.instances --output table
auth-service-default-12-75cc4577fc-8nt4m Running UP
auth-service-default-12-75cc4577fc-n25mh Running UP ```
-Then, you can stream logs of an app instance with the option `-i/--instance` option:
+Then, you can stream logs of an app instance using the `-i/--instance` option, as follows:
```azurecli az spring app logs --name auth-service --instance auth-service-default-12-75cc4577fc-pw7hb ```
-You can also get details of app instances from the Azure portal. After selecting **Apps** in the left navigation pane of your Azure Spring Apps service, select **App Instances**.
+You can also get details of app instances from the Azure portal. After selecting **Apps** in the left navigation pane of your Azure Spring Apps service, select **App Instances**.
### Continuously stream new logs
-By default, `az spring app logs` prints only existing logs streamed to the app console and then exits. If you want to stream new logs, add `-f/--follow`:
+By default, `az spring app logs` prints only existing logs streamed to the app console, and then exits. If you want to stream new logs, add the `-f/--follow` argument:
```azurecli az spring app logs --name auth-service --follow ```
-When you use `--follow` to tail instant logs, the Azure Spring Apps log streaming service will send heartbeat logs to the client every minute unless your application is writing logs constantly. These heartbeat log messages look like `2020-01-15 04:27:13.473: No log from server`.
+When you use the `--follow` argument to tail instant logs, the Azure Spring Apps log streaming service sends heartbeat logs to the client every minute unless your application is writing logs constantly. Heartbeat log messages use the following format: `2020-01-15 04:27:13.473: No log from server`.
-To check all the logging options supported:
+Use the following command to check all the logging options that are supported:
```azurecli az spring app logs --help
az spring app logs --help
### Format JSON structured logs > [!NOTE]
-> Requires spring extension version 2.4.0 or later.
+> Formatting JSON structured logs requires spring extension version 2.4.0 or later.
-When the [Structured application log](./structured-app-log.md) is enabled for the app, the logs are printed in JSON format. This makes it difficult to read. The `--format-json` argument can be used to format the JSON logs into human readable format.
+Structured application logs are displayed in JSON format, which can be difficult to read. You can use the `--format-json` argument to format logs in JSON format into a more readable format. For more information, see [Structured application log for Azure Spring Apps](./structured-app-log.md).
+
+The following example shows how to use the `--format-json` argument:
```azurecli # Raw JSON log
$ az spring app logs --name auth-service --format-json
2021-05-26T03:35:27.533Z INFO [ main] com.netflix.discovery.DiscoveryClient : Single vip registry refresh property : null ```
-The `--format-json` argument also takes optional customized format, using the keyword argument [format string syntax](https://docs.python.org/3/library/string.html#format-string-syntax).
+The `--format-json` argument also accepts an optional customized format using format string syntax. For more information, see [Format String Syntax](https://docs.python.org/3/library/string.html#format-string-syntax).
+
+The following example shows how to use format string syntax:
```azurecli # Custom format
Single vip registry refresh property : null
> {timestamp} {level:>5} [{thread:>15.15}] {logger{39}:<40.40}: {message}{n}{stackTrace} > ```
+## Stream an Azure Spring Apps app log in a VNet injection instance
+
+For an Azure Spring Apps instance deployed in a custom virtual network, you can access log streaming by default from a private network. For more information, see [Deploy Azure Spring Apps in a virtual network](./how-to-deploy-in-azure-virtual-network.md).
+
+Azure Spring Apps also enables you to access real-time app logs from a public network by using the Azure portal or the Azure CLI.
+
+### [Azure portal](#tab/azure-portal)
+
+Use the following steps to enable a log streaming endpoint on the public network.
+
+1. Select the Azure Spring Apps service instance deployed in your virtual network, and then open the **Networking** tab in the navigation menu.
+
+1. Select the **Vnet injection** page.
+
+1. Switch **Log streaming on public network** to **enable** to expose a log streaming endpoint on the public network. This process takes a few minutes.
+
+ :::image type="content" source="media/how-to-log-streaming/enable-logstream-public-endpoint.png" alt-text="Screenshot of enabling a log stream public endpoint on the Vnet Injection page." lightbox="media/how-to-log-streaming/enable-logstream-public-endpoint.png":::
+
+### [CLI](#tab/azure-CLI)
+
+Use the following command to enable the log stream public endpoint.
+
+```azurecli
+az spring update \
+ --resource-group <resource-group-name> \
+ --service <service-instance-name> \
+ --enable-log-stream-public-endpoint true
+```
+
+After you've enabled the log stream public endpoint, you can access the app logs from a public network just as you would for a normal instance.
+++
+## Secure traffic to the log streaming public endpoint
+
+Log streaming uses the same key as the test endpoint described in [Set up a staging environment in Azure Spring Apps](./how-to-staging-environment.md) to authenticate the connections to your deployments. As a result, only users who have read access to the test keys can access log streaming.
+
+To ensure the security of your applications when you expose a public endpoint for them, secure the endpoint by filtering network traffic to your service with a network security group. For more information, see [Tutorial: Filter network traffic with a network security group using the Azure portal](../virtual-network/tutorial-filter-network-traffic.md). A network security group contains security rules that allow or deny inbound network traffic to, or outbound network traffic from, several types of Azure resources. For each rule, you can specify source and destination, port, and protocol.
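+
+As a minimal sketch (the rule name, priority, allowed address range, and port are assumptions to adapt to your environment), the following Azure CLI command adds an inbound rule that allows only a specific address range to reach the log streaming endpoint over HTTPS:
+
+```azurecli
+az network nsg rule create \
+    --resource-group <resource-group-name> \
+    --nsg-name <nsg-name> \
+    --name allow-log-stream-clients \
+    --priority 200 \
+    --direction Inbound \
+    --access Allow \
+    --protocol Tcp \
+    --source-address-prefixes <allowed-ip-range> \
+    --destination-port-ranges 443
+```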
+
+> [!NOTE]
+> If you can't access app logs in the VNet injection instance from the internet after you've enabled a log stream public endpoint, check your network security group to see whether you've allowed such inbound traffic.
+ ## Next steps * [Quickstart: Monitoring Azure Spring Apps apps with logs, metrics, and tracing](./quickstart-logs-metrics-tracing.md)
spring-apps Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/policy-reference.md
Title: Built-in policy definitions for Azure Spring Apps description: Lists Azure Policy built-in policy definitions for Azure Spring Apps. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/08/2022 Last updated : 08/16/2022
spring-apps Secure Communications End To End https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/secure-communications-end-to-end.md
+
+ Title: Secure communications end-to-end for Spring Boot apps in a Zero Trust environment
+
+description: Describes how to secure communications end-to-end or terminate transport-level security at any communication point for Spring Boot apps.
+++ Last updated : 08/15/2022++++
+# Secure communications end-to-end for Spring Boot apps in a Zero Trust environment
+
+This article describes how to secure communications end-to-end for Spring Boot apps in a Zero Trust environment. You can secure communications end-to-end or terminate transport-level security at any communication point for Spring Boot apps. You can also automate the provisioning and configuration for all the Azure resources needed for securing communications.
+
+Implementing secure communications as part of your solution architecture can be challenging. Many customers manually rotate their certificates or create their own solutions to automate provisioning and configuration. Even then, there's still data exfiltration risk, such as unauthorized copying or transfer of data from server systems. With Azure Spring Apps, these details are handled for you. Azure Spring Apps abstracts away most of the complexity, leaving secure communications as configurable and automatable options in the service.
+
+## Secure internet communications
+
+The TLS/SSL protocol establishes identity and trust, and encrypts communications of all types. TLS/SSL makes secure communications possible, particularly web traffic carrying commerce and customer data.
+
+You can use any type of TLS/SSL certificate. For example, you can use certificates issued by a certificate authority, extended validation certificates, wildcard certificates with support for any number of subdomains, or self-signed certificates for dev and testing environments.
+
+## Load certificates security with Zero Trust
+
+Zero Trust is based on the principle of "never trust, always verify, and credential-free". Zero Trust helps to secure all communications by eliminating unknown and unmanaged certificates. Zero Trust involves trusting only certificates that are shared by verifying identity prior to granting access to those certificates. For more information, see the [Zero Trust Guidance Center](/security/zero-trust/).
+
+To securely load certificates from [Azure Key Vault](../key-vault/index.yml), Spring Boot apps use [managed identities](../active-directory/managed-identities-azure-resources/overview.md) and [Azure role-based access control (RBAC)](../role-based-access-control/index.yml). Azure Spring Apps uses a provider [service principal](../active-directory/develop/app-objects-and-service-principals.md#service-principal-object) and Azure role-based access control. This secure loading is powered by the Azure Key Vault Java Cryptography Architecture (JCA) provider. For more information, see [Azure Key Vault JCA client library for Java](https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/keyvault/azure-security-keyvault-jca).
+
+With Azure Key Vault, you control the storage and distribution of certificates to reduce accidental leakage. Applications and services can securely access certificates. Key Vault uses Azure role-based access control to lock down access to only those requiring access, such as an admin, but also apps, using the principle of least privilege. Applications and services authenticate and authorize, using Azure Active Directory and Azure role-based access control, to access certificates. You can monitor the access and use of certificates in Key Vault through its full audit trail.
+
+## Secure communications end-to-end or terminate TLS at any point
+
+As illustrated in the diagram below, there are several segments of communications through the following components:
+
+- Network access points such as Azure Front Door
+- Azure App Gateway
+- F5 BIG-IP Local Traffic Manager
+- Azure API Management
+- Apigee API Management
+- Spring Boot apps
+- Backend systems such as databases, messaging and eventing systems, and app cache
+
+You can secure communications end-to-end or terminate transport-level security at any communication point for Spring Boot apps.
++
+The following sections describe this architecture in more detail.
+
+### Segment 1: Secure communications into Azure Spring Apps
+
+The first segment (segment 1 in the diagram) represents communications from consumers to the ingress controller in Azure Spring Apps. These consumers include browsers, mobile phones, desktops, kiosks, or network access points like Azure Front Door, Azure App Gateway, F5 BIG-IP Local Traffic Manager, Azure API Management, and Apigee API Management.
+
+By default, this segment is secured using a Microsoft-supplied TLS/SSL certificate for the `*.azuremicroservices.io` domain. You can apply your own TLS/SSL certificate in Azure Key Vault by binding a custom domain to your app in Azure Spring Apps. No code is necessary. For more information, see [Tutorial: Map an existing custom domain to Azure Spring Apps](tutorial-custom-domain.md).
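+
+As a hedged sketch (certificate, domain, and resource names are placeholders; the linked tutorial covers the exact syntax), importing a Key Vault certificate into the service and binding it to a custom domain might look like the following:
+
+```azurecli
+# Import the TLS/SSL certificate from Azure Key Vault into the Azure Spring Apps instance.
+az spring certificate add \
+    --resource-group <resource-group-name> \
+    --service <service-instance-name> \
+    --name <cert-name> \
+    --vault-uri <key-vault-uri> \
+    --vault-certificate-name <key-vault-cert-name>
+
+# Bind the custom domain to the app and attach the certificate.
+az spring app custom-domain bind \
+    --resource-group <resource-group-name> \
+    --service <service-instance-name> \
+    --app <app-name> \
+    --domain-name <custom-domain-name> \
+    --certificate <cert-name>
+```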
+
+### Segment 2: Secure communications from ingress controller to apps
+
+The next segment (segment 2 in the diagram) represents communications from the Azure Spring Apps ingress controller to any app on Azure Spring Apps. You can enable TLS/SSL to secure traffic from the ingress controller to an app that supports HTTPS. For more information, see [Enable ingress-to-app TLS for an application](how-to-enable-ingress-to-app-tls.md).
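+
+As an illustrative sketch only (it assumes the `--enable-ingress-to-app-tls` argument of the Azure Spring Apps CLI extension; check the linked article for the exact syntax), enabling TLS from the ingress controller to an app might look like the following:
+
+```azurecli
+az spring app update \
+    --resource-group <resource-group-name> \
+    --service <service-instance-name> \
+    --name <app-name> \
+    --enable-ingress-to-app-tls true
+```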
+
+A Spring Boot app can use Spring's approach to enable HTTPS, or the app can secure communications by using the Azure Key Vault Certificates Spring Boot Starter. For more information, see [Tutorial: Secure Spring Boot apps using Azure Key Vault certificates](/azure/developer/java/spring-framework/configure-spring-boot-starter-java-app-with-azure-key-vault-certificates).
+
+You need the following three configuration steps to secure communications using a TLS/SSL certificate from an Azure Key Vault. No code is necessary.
+
+1. Include the following Azure Key Vault Certificates Spring Boot Starter dependency in your *pom.xml* file:
+
+ ```xml
+ <dependency>
+ <groupId>com.azure.spring</groupId>
+ <artifactId>azure-spring-boot-starter-keyvault-certificates</artifactId>
+ </dependency>
+ ```
+
+1. Add the following properties to configure an app to load a TLS/SSL certificate from Azure Key Vault. Be sure to specify the URI of the Azure Key Vault and the certificate name.
+
+ ```yaml
+ azure:
+ keyvault:
+ uri: ${KEY_VAULT_URI}
+
+ server:
+ ssl:
+ key-alias: ${SERVER_SSL_CERTIFICATE_NAME}
+ key-store-type: AzureKeyVault
+ ```
+
+1. Enable the app's managed identity, and then grant the managed identity **Get** and **List** access to certificates in the Azure Key Vault. For more information, see [Enable system-assigned managed identity for an application in Azure Spring Apps](how-to-enable-system-assigned-managed-identity.md) and [Certificate Access Control](../key-vault/certificates/certificate-access-control.md).
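+
+The following Azure CLI sketch (resource names and the principal ID are placeholders) shows one way to enable a system-assigned managed identity for the app and grant it **Get** and **List** permissions on certificates and secrets in the key vault:
+
+```azurecli
+# Enable the system-assigned managed identity for the app.
+az spring app identity assign \
+    --resource-group <resource-group-name> \
+    --service <service-instance-name> \
+    --name <app-name>
+
+# Grant the identity's object (principal) ID Get and List access to certificates and secrets.
+az keyvault set-policy \
+    --name <key-vault-name> \
+    --object-id <managed-identity-principal-id> \
+    --certificate-permissions get list \
+    --secret-permissions get list
+```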
+
+### Segment 3: Secure communications from app to managed middleware
+
+The next segment (segment 3 in the diagram) represents communications from any app to the managed Spring Cloud Config Server and Spring Cloud Service Registry in Azure Spring Apps. By default, this segment is secured using a Microsoft-supplied TLS/SSL certificate.
+
+### Segment 4: Secure app to app communications
+
+The next segment (segment 4 in the diagram) represents communications from one app to another app in Azure Spring Apps. You can use the Azure Key Vault Certificates Spring Boot Starter to configure the caller app to trust the TLS/SSL certificate supplied by an HTTPS-enabled called app. The receiving Spring Boot app can use Spring's approach to enable HTTPS, or the app can secure communications by using the Azure Key Vault Certificates Spring Boot Starter. For more information, see [Tutorial: Secure Spring Boot apps using Azure Key Vault certificates](/azure/developer/java/spring-framework/configure-spring-boot-starter-java-app-with-azure-key-vault-certificates).
+
+### Segment 5: Secure app to external system communications
+
+The next segment (segment 5 in the diagram) represents communications between an app running in Azure Spring Apps and external systems. You can use the Azure Key Vault Certificates Spring Boot Starter to configure the app running in Azure Spring Apps to trust the TLS/SSL certificate supplied by any external systems. For more information, see [Tutorial: Secure Spring Boot apps using Azure Key Vault certificates](/azure/developer/java/spring-framework/configure-spring-boot-starter-java-app-with-azure-key-vault-certificates).
+
+### Implicitly load TLS/SSL certificates from Key Vault into an app
+
+If your Spring code, Java code, or open-source libraries, such as OpenSSL, rely on the JVM default JCA chain to implicitly load certificates into the JVM's trust store, then you can import your TLS/SSL certificates from Key Vault into Azure Spring Apps and use those certificates within the app. For more information, see [Use TLS/SSL certificates in your application in Azure Spring Apps](how-to-use-tls-certificate.md).
+
+### Upload well-known public TLS/SSL certificates for backend systems
+
+For an app to communicate with backend services in the cloud or in on-premises systems, it may require the use of public TLS/SSL certificates to secure communication. You can upload those TLS/SSL certificates to secure outbound communications. For more information, see [Use TLS/SSL certificates in your application in Azure Spring Apps](how-to-use-tls-certificate.md).
+
+### Automate provisioning and configuration for securing communications
+
+Using an ARM Template, Bicep, or Terraform, you can automate the provisioning and configuration of all the Azure resources mentioned above for securing communications.
+
+## Build your solutions and secure communications
+
+Azure Spring Apps is a fully managed service for Spring Boot applications. Azure Spring Apps abstracts away the complexity of infrastructure and Spring Cloud middleware management from users. You can focus on building your business logic and let Azure take care of dynamic scaling, patches, security, compliance, and high availability. With a few steps, you can provision Azure Spring Apps, create, deploy, and scale Spring Boot applications, and start securing communications in minutes.
+
+Azure Spring Apps is jointly built, operated, and supported by Microsoft and VMware.
+
+## Next steps
+
+- [Deploy Spring microservices to Azure](/learn/modules/azure-spring-cloud-workshop/)
+- [Azure Key Vault Certificates Spring Cloud Azure Starter (GitHub.com)](https://github.com/Azure/azure-sdk-for-java/blob/main/sdk/spring/spring-cloud-azure-starter-keyvault-certificates/pom.xml)
+- [Azure Spring Apps reference architecture](reference-architecture.md)
+- Migrate your [Spring Boot](/azure/developer/java/migration/migrate-spring-boot-to-azure-spring-cloud), [Spring Cloud](/azure/developer/java/migration/migrate-spring-cloud-to-azure-spring-cloud), and [Tomcat](/azure/developer/java/migration/migrate-tomcat-to-azure-spring-cloud) applications to Azure Spring Apps
storage Soft Delete Blob Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/soft-delete-blob-manage.md
blockBlob.StartCopy(copySource);
#### Restore soft-deleted blobs when versioning is enabled
-To restore a soft-deleted blob when versioning is enabled, copy a previous version over the base blob with a [Copy Blob](/rest/api/storageservices/copy-blob) or [Copy Blob From URL](/rest/api/storageservices/copy-blob-from-url) operation.
+To restore a soft-deleted blob when versioning is enabled, copy a previous version over the base blob by using the [Copy Blob](/rest/api/storageservices/copy-blob) or [Copy Blob From URL](/rest/api/storageservices/copy-blob-from-url) operation.
##### [.NET v12 SDK](#tab/dotnet)
When blobs or directories are soft-deleted, they are invisible in the Azure port
> ![Screenshot showing how to list soft-deleted blobs in Azure portal (hierarchical namespace enabled accounts).](media/soft-delete-blob-manage/soft-deleted-blobs-list-portal-hns.png) > [!NOTE]
-> If you rename a directory that contains soft deleted items (subdirectories and blobs), those soft deleted items become disconnected from the directory, so they won't appear in the Azure portal when you toggle the **Show deleted blobs** setting. If you want to view them in the Azure portal, you'll have to revert the name of the directory back to it's original name or create a separate directory that uses the original directory name.
+> If you rename a directory that contains soft deleted items (subdirectories and blobs), those soft deleted items become disconnected from the directory, so they won't appear in the Azure portal when you toggle the **Show deleted blobs** setting. If you want to view them in the Azure portal, you'll have to revert the name of the directory back to its original name or create a separate directory that uses the original directory name.
Next, select the deleted directory or blob from the list display its properties. Under the **Overview** tab, notice that the status is set to **Deleted**. The portal also displays the number of days until the blob is permanently deleted.
To restore a soft-deleted blob or directory in the Azure portal, first display t
$deletedItems | Restore-AzDataLakeGen2DeletedItem ```
- If you rename the directory that contains the soft deleted items, those items become disconnected from the directory. If you want to restore those items, you'll have to revert the name of the directory back to it's original name or create a separate directory that uses the original directory name. Otherwise, you'll receive an error when you attempt to restore those soft deleted items.
+ If you rename the directory that contains the soft deleted items, those items become disconnected from the directory. If you want to restore those items, you'll have to revert the name of the directory back to its original name or create a separate directory that uses the original directory name. Otherwise, you'll receive an error when you attempt to restore those soft deleted items.
### Restore soft deleted blobs and directories by using Azure CLI
To restore a soft-deleted blob or directory in the Azure portal, first display t
az storage fs undelete-path -f $filesystemName --deleted-path-name $dirName --deletion-id "<deletionId>" --auth-mode login ```
- If you rename the directory that contains the soft deleted items, those items become disconnected from the directory. If you want to restore those items, you'll have to revert the name of the directory back to it's original name or create a separate directory that uses the original directory name. Otherwise, you'll receive an error when you attempt to restore those soft deleted items.
+ If you rename the directory that contains the soft deleted items, those items become disconnected from the directory. If you want to restore those items, you'll have to revert the name of the directory back to its original name or create a separate directory that uses the original directory name. Otherwise, you'll receive an error when you attempt to restore those soft deleted items.
### Restore soft deleted blobs and directories by using .NET
To restore a soft-deleted blob or directory in the Azure portal, first display t
```
- If you rename the directory that contains the soft deleted items, those items become disconnected from the directory. If you want to restore those items, you'll have to revert the name of the directory back to it's original name or create a separate directory that uses the original directory name. Otherwise, you'll receive an error when you attempt to restore those soft deleted items.
+ If you rename the directory that contains the soft deleted items, those items become disconnected from the directory. If you want to restore those items, you'll have to revert the name of the directory back to its original name or create a separate directory that uses the original directory name. Otherwise, you'll receive an error when you attempt to restore those soft deleted items.
### Restore soft deleted blobs and directories by using Java
To restore a soft-deleted blob or directory in the Azure portal, first display t
```
- If you rename the directory that contains the soft deleted items, those items become disconnected from the directory. If you want to restore those items, you'll have to revert the name of the directory back to it's original name or create a separate directory that uses the original directory name. Otherwise, you'll receive an error when you attempt to restore those soft deleted items.
+ If you rename the directory that contains the soft deleted items, those items become disconnected from the directory. If you want to restore those items, you'll have to revert the name of the directory back to its original name or create a separate directory that uses the original directory name. Otherwise, you'll receive an error when you attempt to restore those soft deleted items.
### Restore soft deleted blobs and directories by using Python
To restore a soft-deleted blob or directory in the Azure portal, first display t
```
- If you rename the directory that contains the soft deleted items, those items become disconnected from the directory. If you want to restore those items, you'll have to revert the name of the directory back to it's original name or create a separate directory that uses the original directory name. Otherwise, you'll receive an error when you attempt to restore those soft deleted items.
+ If you rename the directory that contains the soft deleted items, those items become disconnected from the directory. If you want to restore those items, you'll have to revert the name of the directory back to its original name or create a separate directory that uses the original directory name. Otherwise, you'll receive an error when you attempt to restore those soft deleted items.
## Next steps
storage Storage Blob Delete https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-delete.md
public static async Task RestoreSnapshots(BlobContainerClient container, BlobCli
{ Snapshot = blobItems .OrderByDescending(snapshot => snapshot.Snapshot)
- .ElementAtOrDefault(1)?.Snapshot
+ .ElementAtOrDefault(0)?.Snapshot
}; // Restore the most recent snapshot by copying it to the blob.
public static void RestoreBlobsWithVersioning(BlobContainerClient container, Blo
{ VersionId = blobItems .OrderByDescending(version => version.VersionId)
- .ElementAtOrDefault(1)?.VersionId
+ .ElementAtOrDefault(0)?.VersionId
}; // Restore the most recently generated version by copying it to the base blob.
storage Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/policy-reference.md
Title: Built-in policy definitions for Azure Storage description: Lists Azure Policy built-in policy definitions for Azure Storage. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/08/2022 Last updated : 08/16/2022
storage Storage Files Identity Auth Active Directory Domain Service Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-identity-auth-active-directory-domain-service-enable.md
description: Learn how to enable identity-based authentication over Server Messa
Previously updated : 04/08/2022 Last updated : 08/16/2022
az storage account update -n <storage-account-name> -g <resource-group-name> --e
By default, Azure AD DS authentication uses Kerberos RC4 encryption. We recommend configuring it to use Kerberos AES-256 encryption instead by following these steps:
-As an Azure AD DS user with the required permissions (typically, members of the **AAD DC Administrators** group will have the necessary permissions), open the Azure Cloud Shell.
-
-Execute the following commands:
+As an Azure AD DS user with the required permissions (typically, a member of the **AAD DC Administrators** group), execute the following Azure PowerShell commands. If you're using Azure Cloud Shell, be sure to run the `Connect-AzureAD` cmdlet first.
```azurepowershell # 1. Find the service account in your managed domain that represents the storage account.
storage Storage Dotnet How To Use Queues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/queues/storage-dotnet-how-to-use-queues.md
The [`QueueClient`](/dotnet/api/azure.storage.queues.queueclient) class enables
:::code language="csharp" source="~/azure-storage-snippets/queues/howto/dotnet/dotnet-v12/QueueBasics.cs" id="snippet_CreateClient":::
+> [!TIP]
+> Messages that you send by using the [`QueueClient`](/dotnet/api/azure.storage.queues.queueclient) class must be in a format that can be included in an XML request with UTF-8 encoding. Optionally, you can set the [MessageEncoding](/dotnet/api/azure.storage.queues.queueclientoptions.messageencoding) option to [Base64](/dotnet/api/azure.storage.queues.queuemessageencoding) to handle non-compliant messages.
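+
+The following sketch (the connection string and queue name are placeholders) shows one way to opt in to Base64 message encoding with the v12 client:
+
+```csharp
+using Azure.Storage.Queues;
+
+string connectionString = "<storage-account-connection-string>";
+
+// Base64-encode outgoing messages so that any payload is safe to embed in the XML request body.
+QueueClientOptions options = new QueueClientOptions
+{
+    MessageEncoding = QueueMessageEncoding.Base64
+};
+
+QueueClient queueClient = new QueueClient(connectionString, "sample-queue", options);
+await queueClient.SendMessageAsync("Hello, queue");
+```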
+ # [.NET v11 SDK](#tab/dotnetv11) The [`CloudQueueClient`](/dotnet/api/microsoft.azure.storage.queue.cloudqueueclient?view=azure-dotnet-legacy&preserve-view=true) class enables you to retrieve queues stored in Queue Storage. Here's one way to create the service client:
stream-analytics Machine Learning Udf https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/machine-learning-udf.md
Complete the following steps before you add a machine learning model as a functi
1. Use Azure Machine Learning to [deploy your model as a web service](../machine-learning/how-to-deploy-managed-online-endpoints.md).
-2. Your machine learning endpoint must have an associated [swagger](../machine-learning/how-to-deploy-advanced-entry-script.md) that helps Stream Analytics understand the schema of the input and output. You can use this [sample swagger definition](https://github.com/Azure/azure-stream-analytics/blob/master/Samples/AzureML/asa-mlswagger.json) as a reference to ensure you have set it up correctly.
+2. Your machine learning endpoint must have an associated [swagger](../machine-learning/v1/how-to-deploy-advanced-entry-script.md) that helps Stream Analytics understand the schema of the input and output. You can use this [sample swagger definition](https://github.com/Azure/azure-stream-analytics/blob/master/Samples/AzureML/asa-mlswagger.json) as a reference to ensure you have set it up correctly.
3. Make sure your web service accepts and returns JSON serialized data. 4. Deploy your model on [Azure Kubernetes Service](../machine-learning/how-to-deploy-managed-online-endpoints.md#use-different-cpu-and-gpu-instance-types) for high-scale production deployments. If the web service is not able to handle the number of requests coming from your job, the performance of your Stream Analytics job will be degraded, which impacts latency. Models deployed on Azure Container Instances are supported only when you use the Azure portal.- ## Add a machine learning model to your job You can add Azure Machine Learning functions to your Stream Analytics job directly from the Azure portal or Visual Studio Code.
stream-analytics Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/policy-reference.md
Title: Built-in policy definitions for Azure Stream Analytics description: Lists Azure Policy built-in policy definitions for Azure Stream Analytics. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/08/2022 Last updated : 08/16/2022
synapse-analytics Overview Database Templates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/database-designer/overview-database-templates.md
Title: Overview of Azure Synapse database templates
description: Learn about database templates +
Last updated 11/02/2021
-# What is Azure Synapse database templates
+# What are Azure Synapse database templates?
-Data takes many forms as it moves from source systems to data warehouses and data marts with the intent to solve business problems. Database templates can help with the transformation of data into insights. Database templates are a set of business and technical data definitions that are pre-designed to meet the needs of a particular industry. They act as blueprints that provide common elements derived from best practices, government regulations, and the complex data and analytic needs of an industry-specific organization.
+Data takes many forms as it moves from source systems to data warehouses and data marts with the intent to solve business problems. Database templates can help with the transformation of data into insights.
-These information blueprints can be used by organizations to plan, architect, and design data solutions for data governance, reporting, business intelligence, and advanced analytics. The data models provide integrated business-wide information architectures that can help you implement, in a timely and predictable way, a proven industry data architecture.
+Database templates are a set of business and technical data definitions that are pre-designed to meet the needs of a particular industry. They act as blueprints that provide common elements derived from best practices, government regulations, and the complex data and analytic needs of an industry-specific organization.
-For example, if you're building a product recommendation solution for your retail customers, you'll need a basic blue-print to understand what the customer purchased and the transaction that led to the purchase. You may also need information about the store where the purchase was made. You also need to understand whether the customer is part of a loyalty program. Just to accomplish this use case we need the following schemas:
+These schema blueprints can be used by organizations to plan, architect, and design data solutions for data governance, reporting, business intelligence, and advanced analytics. The data models provide integrated business-wide information architectures that can help you implement, in a timely and predictable way, a proven industry data architecture.
+
+For example, if you're building a product recommendation solution for your retail customers, you'll need to understand what the customer purchased and the transaction that led to the purchase. You may also need information about the store where the purchase was made, and whether the customer is part of a loyalty program. Just to accomplish this use case, consider the following schemas:
* Product * Transaction
For example, if you're building a product recommendation solution for your retai
* CustomerLoyalty * Store
-You can set up this use case by selecting the six tables in the retail database template.
+You can set up this use case by selecting the six tables in the **Retail** database template.
![image](https://user-images.githubusercontent.com/84302758/140736847-9d93436d-47b4-4175-8b09-a33de0bcde05.png)
Currently, you can choose from the following database templates in Azure Synapse
* **Retail** - For sellers of consumer goods or services to customers through multiple channels. * **Utilities**ΓÇè-ΓÇèFor gas, electric, and water utilities; power generators; and water desalinators.
-As emission and carbon management is an important discussion in all industries, we've included those components in all the available database templates. These components make it easy for companies who need to track and report their direct and indirect greenhouse gas emissions.
+As emission and carbon management is an important discussion in all industries, we've included those components in all the available database templates. These components make it easy for companies that need to track and report their direct and indirect greenhouse gas emissions.
## Next steps
synapse-analytics Tutorial Score Model Predict Spark Pool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/machine-learning/tutorial-score-model-predict-spark-pool.md
Last updated 11/02/2021--++
synapse-analytics Data Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/partner/data-integration.md
To create your data warehouse solution using the dedicated SQL pool in Azure Syn
| ![Denodo](./media/data-integration/denodo_logo.png) |**Denodo**<br>Denodo provide real-time access to data across an organization's diverse data sources. It uses data virtualization to bridge data across many sources without replication. Denodo offers broad access to structured and unstructured data residing in enterprise, big data, and cloud sources, in both batch and real time.|[Product page](https://www.denodo.com/en)<br>[Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/denodo.denodo-8_0-std-vm-payg?tab=Overview)<br> | | ![Dimodelo](./media/data-integration/dimodelo-logo.png) |**Dimodelo**<br>Dimodelo Data Warehouse Studio is a data warehouse automation tool for the Azure data platform. Dimodelo enhances developer productivity through a dedicated data warehouse modeling and ETL design tool, pattern-based best practice code generation, one-click deployment, and ETL orchestration. Dimodelo enhances maintainability with change propagation, allows developers to stay focused on business outcomes, and automates portability across data platforms.|[Product page](https://www.dimodelo.com/data-warehouse-studio-for-azure-synapse/)<br>[Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/dimodelosolutions.dimodeloazurevs)<br> | | ![Fivetran](./media/data-integration/fivetran_logo.png) |**Fivetran**<br>Fivetran helps you centralize data from disparate sources. It features a zero maintenance, zero configuration data pipeline product with a growing list of built-in connectors to all the popular data sources. Setup takes five minutes after authenticating to data sources and target data warehouse.|[Product page](https://www.fivetran.com/partners-microsoft-azure)<br>[Azure Marketplace](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/fivetran.fivetran_data_pipelines?tab=Overview)<br> |
-| ![HVR](./media/data-integration/hvr-logo.png) |**HVR**<br>HVR provides a real-time cloud data replication solution that supports enterprise modernization efforts. The HVR platform is a reliable, secure, and scalable way to quickly and efficiently integrate large data volumes in complex environments, enabling real-time data updates, access, and analysis. Global market leaders in various industries trust HVR to address their real-time data integration challenges and revolutionize their businesses. HVR is a privately held company based in San Francisco, with offices across North America, Europe, and Asia.|[Product page](https://www.hvr-software.com/solutions/azure-data-integration/)<br>[Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/hvr.hvr-for-azure?tab=Overview)<br>|
+| ![HVR](./media/data-integration/hvr-logo.png) |**HVR**<br>HVR provides a real-time cloud data replication solution that supports enterprise modernization efforts. The HVR platform is a reliable, secure, and scalable way to quickly and efficiently integrate large data volumes in complex environments, enabling real-time data updates, access, and analysis. Global market leaders in various industries trust HVR to address their real-time data integration challenges and revolutionize their businesses. HVR is a privately held company based in San Francisco, with offices across North America, Europe, and Asia.|[Product page](https://www.hvr-software.com/solutions/azure-data-integration/)|
| ![Incorta](./media/data-integration/incorta-logo.png) |**Incorta**<br>Incorta enables organizations to go from raw data to quickly discovering actionable insights in Azure by automating the various data preparation steps typically required to analyze complex data. which. Using a proprietary technology called Direct Data Mapping and Incorta's Blueprints (pre-built content library and best practices captured from real customer implementations), customers experience unprecedented speed and simplicity in accessing, organizing, and presenting data and insights for critical business decision-making.|[Product page](https://www.incorta.com/solutions/microsoft-azure-synapse)<br>[Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/incorta.incorta_direct_data_platform)<br>| | ![Informatica](./media/data-integration/informatica_logo.png) |**1.Informatica Cloud Services for Azure**<br> Informatica Cloud offers a best-in-class solution for self-service data migration, integration, and management capabilities. Customers can quickly and reliably import, and export petabytes of data to Azure from different kinds of sources. Informatica Cloud Services for Azure provides native, high volume, high-performance connectivity to Azure Synapse, SQL Database, Blob Storage, Data Lake Store, and Azure Cosmos DB. <br><br> **2.Informatica PowerCenter** PowerCenter is a metadata-driven data integration platform that jumpstarts and accelerates data integration projects to deliver data to the business more quickly than manual hand coding. It serves as the foundation for your data integration investments |**Informatica Cloud services for Azure**<br>[Product page](https://www.informatica.com/products/cloud-integration.html)<br>[Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/informatica.iics-secure-agent)<br><br> **Informatica PowerCenter**<br>[Product page](https://www.informatica.com/products/data-integration/powercenter.html)<br> [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/informatica.powercenter-1041?tab=Overview)<br>| | ![Information Builders](./media/data-integration/informationbuilders_logo.png) |**Information Builders (Omni-Gen Data Management)**<br>Information Builder's Omni-Gen data management platform provides data integration, data quality, and master data management solutions. It makes it easy to access, move, and blend all data no matter the format, location, volume, or latency.|[Product page](https://www.informationbuilders.com/3i-platform) |
synapse-analytics Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/policy-reference.md
Title: Built-in policy definitions description: Lists Azure Policy built-in policy definitions for Azure Synapse Analytics. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/08/2022 Last updated : 08/16/2022
synapse-analytics Apache Spark Version Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/apache-spark-version-support.md
Previously updated : 07/20/2022 Last updated : 08/02/2022
The following table lists the runtime name, Apache Spark version, and release da
| -- | -- | -- | -- | -- | | [Azure Synapse Runtime for Apache Spark 3.2](./apache-spark-32-runtime.md) | July 8, 2022 | GA | July 8, 2023 | July 8, 2024 | | [Azure Synapse Runtime for Apache Spark 3.1](./apache-spark-3-runtime.md) | May 26, 2021 | LTS | January 26, 2023 | January 26, 2024 |
-| [Azure Synapse Runtime for Apache Spark 2.4](./apache-spark-24-runtime.md) | December 15, 2020 | __LTS<br/>End of Life to be announced__ | __July 29, 2022__ | July 28, 2023 |
+| [Azure Synapse Runtime for Apache Spark 2.4](./apache-spark-24-runtime.md) | December 15, 2020 | __End of Life Announced (EOLA)__ | __July 29, 2022__ | __July 28, 2023__ |
## Runtime release stages
synapse-analytics Optimize Write For Apache Spark https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/optimize-write-for-apache-spark.md
+
+ Title: Using optimize write on Apache Spark to produce more efficient tables
+description: Optimize write is an efficient write feature for Apache Spark
++++ Last updated : 08/03/2022++++
+# The need for optimize write on Apache Spark
+
+Analytical workloads on Big Data processing engines such as Apache Spark perform most efficiently when using standardized larger file sizes. The relationship between file size, the number of files, the number of Spark workers, and their configurations plays a critical role in performance. Ingestion workloads into data lake tables often have the inherent characteristic of constantly writing lots of small files; this scenario is commonly known as the "small file problem".
+
+Optimize Write is a Delta Lake on Synapse feature that reduces the number of files written and aims to increase the size of the individual files it writes. It dynamically optimizes partitions while generating files with a default size of 128 MB. The target file size may be changed per workload requirements using [configurations](apache-spark-azure-create-spark-configuration.md).
+
+This feature achieves the target file size by adding an extra data shuffle phase over partitions, which causes an extra processing cost while writing the data. This small write penalty should be outweighed by the read efficiency gains on the tables.
+
+> [!NOTE]
+> - Optimize write is available as a Preview feature.
+> - It is available on Synapse Pools for Apache Spark versions 3.1 and 3.2.
+
+## Benefits of Optimize Writes
+
+* It's available on Delta Lake tables for both Batch and Streaming write patterns.
+* There's no need to change the `spark.write` command pattern. The feature is enabled by a configuration setting or a table property.
+* It reduces the number of write transactions as compared to the OPTIMIZE command.
+* OPTIMIZE operations will be faster because they'll operate on fewer files.
+* The VACUUM command, which deletes old unreferenced files, will also operate faster.
+* Queries will scan fewer files with more optimal file sizes, improving either read performance or resource usage.
+
+## Optimize write usage scenarios
+
+### When to use it
+
+* Delta lake partitioned tables subject to write patterns that generate suboptimal (less than 128 MB) or non-standardized file sizes (files of varying sizes within the same table).
+* Repartitioned data frames that will be written to disk with suboptimal file sizes.
+* Delta lake partitioned tables targeted by small batch SQL commands like UPDATE, DELETE, MERGE, CREATE TABLE AS SELECT, INSERT INTO, etc.
+* Streaming ingestion scenarios with append data patterns to Delta lake partitioned tables where the extra write latency is tolerable.
+
+### When to avoid it
+
+* Non-partitioned tables.
+* Use cases where extra write latency isn't acceptable.
+* Large tables with well-defined optimization schedules and read patterns.
+
+## How to enable and disable the optimize write feature
+
+The optimize write feature is disabled by default.
+
+Once the configuration is set for the pool or session, all Spark write patterns will use the functionality.
+
+To use the optimize write feature, enable it using the following configuration:
+
+1. Scala and PySpark
+
+```scala
+spark.conf.set("spark.microsoft.delta.optimizeWrite.enabled", "true")
+```
+
+2. Spark SQL
+
+```SQL
+SET `spark.microsoft.delta.optimizeWrite.enabled` = true
+```
+
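+As a hedged illustration (the sample DataFrame, partition column, and ABFSS path are hypothetical), the following Scala sketch enables the setting for the current session and then writes a partitioned Delta table with the usual `spark.write` pattern:
+
+```scala
+import spark.implicits._
+
+// Enable optimize write for the current session.
+spark.conf.set("spark.microsoft.delta.optimizeWrite.enabled", "true")
+
+// Hypothetical sample data.
+val sales = Seq((1, "2022-08-01", 10.0), (2, "2022-08-02", 20.0))
+  .toDF("id", "order_date", "amount")
+
+// The write pattern itself is unchanged; output files are coalesced toward the target bin size.
+sales.write
+  .format("delta")
+  .mode("append")
+  .partitionBy("order_date")
+  .save("abfss://<container>@<account>.dfs.core.windows.net/delta/sales")
+```
+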
+To check the current configuration value, use the commands shown below:
+
+1. Scala and PySpark
+
+```scala
+spark.conf.get("spark.microsoft.delta.optimizeWrite.enabled")
+```
+
+2. Spark SQL
+
+```SQL
+SET `spark.microsoft.delta.optimizeWrite.enabled`
+```
+
+To disable the optimize write feature, change the following configuration as shown below:
+
+1. Scala and PySpark
+
+```scala
+spark.conf.set("spark.microsoft.delta.optimizeWrite.enabled", "false")
+```
+
+2. Spark SQL
+
+```SQL
+SET `spark.microsoft.delta.optimizeWrite.enabled` = false
+```
+
+## Controlling optimize write using table properties
+
+### On new tables
+
+1. SQL
+
+```SQL
+CREATE TABLE <table_name> (<col_name> <data_type>) USING DELTA TBLPROPERTIES (delta.autoOptimize.optimizeWrite = true)
+```
+
+2. Scala
+
+Using the [DeltaTableBuilder API](https://docs.delta.io/latest/api/scala/io/delta/tables/DeltaTableBuilder.html):
+
+```scala
+val table = DeltaTable.create()
+ .tableName("<table_name>")
+ .addColumn("<colName>", <dataType>)
+ .location("<table_location>")
+ .property("delta.autoOptimize.optimizeWrite", "true")
+ .execute()
+```
+
+### On existing tables
+
+1. SQL
+
+```SQL
+ALTER TABLE <table_name> SET TBLPROPERTIES (delta.autoOptimize.optimizeWrite = true)
+```
+
+2. Scala
+
+Using the [DeltaTableBuilder API](https://docs.delta.io/latest/api/scala/io/delta/tables/DeltaTableBuilder.html)
+
+```scala
+val table = DeltaTable.replace()
+ .tableName("<table_name>")
+ .location("<table_location>")
+ .property("delta.autoOptimize.optimizeWrite", "true")
+ .execute()
+```
+
+## How to get and change the current max file size configuration for Optimize Write
+
+To get the current configuration value, use the commands shown below. The default is 128 MB.
+
+1. Scala and PySpark
+
+```scala
+spark.conf.get("spark.microsoft.delta.optimizeWrite.binSize")
+```
+
+2. SQL
+
+```SQL
+SET `spark.microsoft.delta.optimizeWrite.binSize`
+```
+
+To change the configuration value, use the following commands:
+
+1. Scala and PySpark
+
+```scala
+spark.conf.set("spark.microsoft.delta.optimizeWrite.binSize", "134217728")
+```
+
+2. SQL
+
+```SQL
+SET `spark.microsoft.delta.optimizeWrite.binSize` = 134217728
+```
+
+## Next steps
+
+ - [Use serverless Apache Spark pool in Synapse Studio](../quickstart-create-apache-spark-pool-studio.md).
+ - [Run a Spark application in notebook](./apache-spark-development-using-notebooks.md).
+ - [Create Apache Spark job definition in Azure Studio](./apache-spark-job-definitions.md).
+
synapse-analytics Tutorial Spark Pool Filesystem Spec https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/tutorial-spark-pool-filesystem-spec.md
Last updated 11/02/2021--++
synapse-analytics Tutorial Use Pandas Spark Pool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/tutorial-use-pandas-spark-pool.md
Last updated 11/02/2021--++
virtual-desktop Customize Feed For Virtual Desktop Users https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/customize-feed-for-virtual-desktop-users.md
Title: Customize feed for Azure Virtual Desktop users - Azure
description: How to customize feed for Azure Virtual Desktop users with PowerShell cmdlets. Previously updated : 09/14/2019 Last updated : 08/16/2020
You can customize the feed so the RemoteApp and remote desktop resources appear
This article assumes you've already downloaded and installed the Azure Virtual Desktop PowerShell module. If you haven't, follow the instructions in [Set up the PowerShell module](powershell-module.md).
+## Customize the display name for a session host
+
+You can change the display name for a remote desktop for your users by setting its session host friendly name. By default, the session host friendly name is empty, so users only see the app name. You can set the session host friendly name by using the REST API.
+
+>[!NOTE]
+>The following instructions only apply to personal desktops, not pooled desktops. Also, personal host pools only allow and support desktop app groups.
+
+To add or change a session host's friendly name, use the [Session Host - Update REST API](/rest/api/desktopvirtualization/session-hosts/update?tabs=HTTP) and update the *properties.friendlyName* parameter with a REST API request.
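+
+As a hedged illustration (the subscription, resource group, host pool, session host, API version, and friendly name are placeholders; see the linked REST reference for the current API version), the request might look like the following:
+
+```http
+PATCH https://management.azure.com/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.DesktopVirtualization/hostPools/<host-pool-name>/sessionHosts/<session-host-name>?api-version=<api-version>
+Content-Type: application/json
+
+{
+  "properties": {
+    "friendlyName": "Contoso personal desktop"
+  }
+}
+```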
+ ## Customize the display name for a RemoteApp You can change the display name for a published RemoteApp by setting the friendly name. By default, the friendly name is the same as the name of the RemoteApp program.
To assign a friendly name to the remote desktop resource, run the following Powe
Update-AzWvdDesktop -ResourceGroupName <resourcegroupname> -ApplicationGroupName <appgroupname> -Name <applicationname> -FriendlyName <newfriendlyname> ```
-## Customize a display name in Azure portal
+## Customize a display name in the Azure portal
You can change the display name for a published remote desktop by setting a friendly name using the Azure portal.
virtual-desktop Environment Setup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/environment-setup.md
To publish resources to users, you must assign them to app groups. When assignin
- The application crashes - Other negative effects on end-user experience and session performance - A user can be assigned to multiple app groups within the same host pool, and their feed will be an accumulation of both app groups.
+- Personal host pools only allow and support desktop app groups.
## Workspaces
virtual-desktop Whats New Azure Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/whats-new-azure-monitor.md
Title: What's new in Azure Monitor for Azure Virtual Desktop?
description: New features and product updates in Azure Monitor for Azure Virtual Desktop. Previously updated : 07/09/2021 Last updated : 08/16/2022
For example, a release with a version number of 1.2.31 is on the first major rel
When one of the numbers is increased, all numbers after it must change, too. One release has one version number. However, not all version numbers track releases. Patch numbers can be somewhat arbitrary, for example.
+## Version 1.2.2
+
+This update was released in July 2022 and has the following changes:
+
+- Updated checkpoint queries for LaunchExecutable.
+
+## Version 1.2.1
+
+This update was released in June 2022 and has the following changes:
+
+- Updated templates for Configuration Workbook to be available via the gallery rather than external GitHub.
+
+## Version 1.2.0
+
+This update was released in May 2022 and has the following changes:
+
+- Updated language for connection performance to "time to be productive" for clarity.
+
+- Improved and expanded **Connection Details** flyout panel with additional information on connection history for selected users.
+
+- Added a fix for duplication of some data.
+ ## Version 1.1.10 This update was released in February 2022 and has the following changes:
virtual-machine-scale-sets Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/policy-reference.md
Title: Built-in policy definitions for Azure virtual machine scale sets description: Lists Azure Policy built-in policy definitions for Azure virtual machine scale sets. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/08/2022 Last updated : 08/16/2022
virtual-machine-scale-sets Virtual Machine Scale Sets Networking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/virtual-machine-scale-sets-networking.md
To create a scale set using an Azure template, make sure the API version of the
```json "publicIpAddressConfiguration": { "name": "pub1",
+ "sku" {
+ "name": "Standard"
+ },
"properties": { "idleTimeoutInMinutes": 15 } } ```
-Note when virtual machine scale sets with public IPs per instance are created with a load balancer in front, the SKU of the instance IPs is determined by the SKU of the Load Balancer (i.e. Basic or Standard).
+Note that when virtual machine scale sets with public IPs per instance are created with a load balancer in front, the SKU of the instance IPs is determined by the SKU of the load balancer (that is, Basic or Standard). If the virtual machine scale set is created without a load balancer, the SKU of the instance IPs can be set directly by using the SKU section of the template as shown above.
Example template using a Basic Load Balancer: [vmss-public-ip-linux](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.compute/vmss-public-ip-linux)
virtual-machines Automatic Vm Guest Patching https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/automatic-vm-guest-patching.md
As a new rollout is triggered every month, a VM will receive at least one patch
| MicrosoftWindowsServer | WindowsServer | 2019-Datacenter-gensecond | | MicrosoftWindowsServer | WindowsServer | 2019-Datacenter-smalldisk-g2 | | MicrosoftWindowsServer | WindowsServer | 2019-Datacenter-Core |
+| MicrosoftWindowsServer | WindowsServer | 2019-Datacenter-with-Containers |
| MicrosoftWindowsServer | WindowsServer | 2022-datacenter | | MicrosoftWindowsServer | WindowsServer | 2022-datacenter-g2 | | MicrosoftWindowsServer | WindowsServer | 2022-datacenter-core |
virtual-machines Tutorial Devops Azure Pipelines Classic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/tutorial-devops-azure-pipelines-classic.md
Title: Tutorial - Configure rolling deployments for Azure Linux virtual machines
-description: In this tutorial, you learn how to set up a continuous deployment (CD) pipeline. This pipeline incrementally updates a group of Azure Linux virtual machines using the rolling deployment strategy.
+ Title: Configure rolling deployments for Azure Linux virtual machines
+description: Learn how to set up a classic release pipeline and deploy your application to Linux virtual machines using the rolling deployment strategy.
tags: azure-devops-pipelines- azure-pipelines Previously updated : 4/10/2020 Last updated : 08/15/2022 -
-#Customer intent: As a developer, I want to learn about CI/CD features in Azure so that I can use Azure DevOps services like Azure Pipelines to build and deploy my applications automatically.
-# Tutorial - Configure the rolling deployment strategy for Azure Linux virtual machines
+# Configure the rolling deployment strategy for Azure Linux virtual machines
**Applies to:** :heavy_check_mark: Linux VMs
-Azure DevOps is a built-in Azure service that automates each part of the DevOps process for any Azure resource. Whether your app uses virtual machines, web apps, Kubernetes, or any other resource, you can implement infrastructure as code (IaC), continuous integration, continuous testing, continuous delivery, and continuous monitoring with Azure and Azure DevOps.
-
-![The Azure portal with Azure DevOps selected under Services](media/tutorial-devops-azure-pipelines-classic/azdevops-view.png)
+Azure Pipelines provides a fully featured set of CI/CD automation tools for deployments to virtual machines. This article will show you how to set up a classic release pipeline that uses the rolling strategy to deploy your web applications to Linux virtual machines.
-## Infrastructure as a service (IaaS) - Configure CI/CD
+## Rolling deployments
-Azure Pipelines provides a fully featured set of CI/CD automation tools for deployments to virtual machines. You can configure a continuous-delivery pipeline for an Azure VM from the Azure portal.
-
-This article shows how to set up a CI/CD pipeline for rolling multimachine deployments from the Azure portal. The Azure portal also supports other strategies like [canary](./tutorial-azure-devops-canary-strategy.md) and [blue-green](./tutorial-azure-devops-blue-green-strategy.md).
+In each iteration, a rolling deployment replaces instances of an application's previous version. It replaces them with instances of the new version on a fixed set of machines (rolling set). The following walk-through shows how to configure a rolling update to virtual machines.
-### Configure CI/CD on virtual machines
+Using **Continuous delivery**, you can configure rolling updates to your virtual machines within the Azure portal.
-You can add virtual machines as targets to a [deployment group](/azure/devops/pipelines/release/deployment-groups). You can then target them for multimachine updates. After you deploy to machines, view **Deployment History** within a deployment group. This view lets you trace from VM to the pipeline and then to the commit.
+1. Sign in to the [Azure portal](https://portal.azure.com/) and navigate to a virtual machine.
-### Rolling deployments
+1. Select **Continuous delivery**, and then select **Configure**.
-In each iteration, a rolling deployment replaces instances of an application's previous version. It replaces them with instances of the new version on a fixed set of machines (rolling set). The following walk-through shows how to configure a rolling update to virtual machines.
+ :::image type="content" source="media/tutorial-devops-azure-pipelines-classic/azure-devops-configure.png" alt-text="A screenshot showing the continuous delivery settings.":::
-Using the continuous-delivery option, you can configure rolling updates to your virtual machines within the Azure portal. Here is the step-by-step walk-through:
+1. Select your **Azure DevOps Organization** and your **Project** from the dropdown menu or **Create** a new one.
-1. Sign in to the Azure portal and navigate to a virtual machine.
-1. In the leftmost pane of the VM settings, select **Continuous delivery**. Then select **Configure**.
+1. Select your **Deployment group** from the dropdown menu or **Create** a new one.
- ![The Continuous delivery pane with the Configure button](media/tutorial-devops-azure-pipelines-classic/azure-devops-configure.png)
+1. Select your **Build pipeline**.
-1. In the configuration panel, select **Azure DevOps Organization** to choose an existing account or create a new one. Then select the project under which you want to configure the pipeline.
+1. Select **Deployment strategy**, and then select **Rolling**.
- ![The Continuous delivery panel](media/tutorial-devops-azure-pipelines-classic/azure-devops-rolling.png)
+ :::image type="content" source="media/tutorial-devops-azure-pipelines-classic/azure-devops-rolling.png" alt-text="A screenshot showing how to configure a rolling deployment strategy.":::
-1. A deployment group is a logical set of deployment target machines that represent the physical environments. Dev, Test, UAT, and Production are examples. You can create a new deployment group or select an existing one.
-1. Select the build pipeline that publishes the package to be deployed to the virtual machine. The published package should have a deployment script named deploy.ps1 or deploy.sh in the deployscripts folder in the package's root folder. The pipeline runs this deployment script.
-1. In **Deployment strategy**, select **Rolling**.
-1. Optionally, you can tag each machine with its role. The tags "web" and "db" are examples. These tags help you target only VMs that have a specific role.
-1. Select **OK** to configure the continuous-delivery pipeline.
-1. After configuration finishes, you have a continuous-delivery pipeline configured to deploy to the virtual machine.
+1. Optionally, you can tag each machine with its role such as *web* or *db*. These tags help you target only VMs that have a specific role.
- ![The Continuous delivery panel showing Deployment history](media/tutorial-devops-azure-pipelines-classic/azure-devops-deployment-history.png)
+1. Select **OK** to configure the continuous delivery pipeline.
-1. The deployment details for the virtual machine are displayed. You can select the link to go to the pipeline, **Release-1** to view the deployment, or **Edit** to modify the release-pipeline definition.
+1. After completion, your continuous delivery pipeline should look similar to the following.
-1. If you're configuring multiple VMs, repeat steps 2 through 4 for other VMs to add to the deployment group. If you select a deployment group that already has a pipeline run, the VMs are just added to the deployment group. No new pipelines are created.
-1. After configuration is done, select the pipeline definition, navigate to the Azure DevOps organization, and select **Edit** for the release pipeline.
+ :::image type="content" source="media/tutorial-devops-azure-pipelines-classic/azure-devops-deployment-history.png" alt-text="A screenshot showing the continuous delivery pipeline.":::
- ![Editing the rolling pipeline](media/tutorial-devops-azure-pipelines-classic/azure-devops-rolling-pipeline.png)
+1. If you want to configure multiple VMs, repeat the preceding configuration steps for the other VMs. If you use the same deployment group that already has a configured pipeline, the new VMs are added to that deployment group and no new pipelines are created.
-1. Select **1 job, 1 task** in the **dev** stage. Select the **Deploy** phase.
+1. Select the link to navigate to your pipeline, and then select **Edit** to modify the pipeline definition.
- ![Rolling pipeline tasks with the Deploy task selected](media/tutorial-devops-azure-pipelines-classic/azure-devops-rolling-pipeline-tasks.png)
+ :::image type="content" source="media/tutorial-devops-azure-pipelines-classic/azure-devops-rolling-pipeline.png" alt-text="A screenshot showing the pipeline definition.":::
-1. From the rightmost configuration pane, you can specify the number of machines that you want to deploy in parallel in each iteration. If you want to deploy to multiple machines at a time, you can specify the number of machines as a percentage by using the slider.
+1. Select the tasks in the **dev** stage to navigate to the pipeline tasks, and then select **Deploy**.
-1. The Execute Deploy Script task by default executes the deployment script deploy.ps1 or deploy.sh. The script is in the deployscripts folder in the root folder of the published package.
+ :::image type="content" source="media/tutorial-devops-azure-pipelines-classic/azure-devops-rolling-pipeline-tasks.png" alt-text="A screenshot showing the pipeline tasks.":::
- ![The Artifacts pane showing deploy.sh in the deployscripts folder](media/tutorial-deployment-strategy/package.png)
+1. You can specify the number of target machines to deploy to in parallel in each iteration. If you want to deploy to multiple machines, you can specify the number of machines as a percentage by using the slider.
-## Other deployment strategies
+1. The **Execute Deploy Script** task runs the deployment script, such as *deploy.ps1* or *deploy.sh*, located in the root folder of the published artifacts. A minimal example script is shown after this procedure.
-- [Configure the canary deployment strategy](./tutorial-azure-devops-canary-strategy.md)-- [Configure the blue-green deployment strategy](./tutorial-azure-devops-blue-green-strategy.md)
+ :::image type="content" source="media/tutorial-deployment-strategy/package.png" alt-text="A screenshot showing the published artifacts.":::
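As a hedged illustration only (the script contents depend entirely on your application), a *deploy.sh* picked up by this task might look like the following sketch. The folder layout, web root path, and service name are assumptions, not part of the pipeline's contract.

```bash
#!/bin/bash
# Illustrative deploy.sh run by the Execute Deploy Script task on each target VM.
# Assumes the published package contains an 'app' folder alongside the deployscripts
# folder, and that the site is served by nginx from /var/www/myapp. Adjust as needed.
set -euo pipefail

APP_SOURCE="$(dirname "$0")/../app"
APP_DEST="/var/www/myapp"

# Copy the new build output into place.
sudo mkdir -p "$APP_DEST"
sudo cp -r "$APP_SOURCE/." "$APP_DEST/"

# Restart the web server so the new version is served.
sudo systemctl restart nginx
```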
-## Azure DevOps Projects
+## Resources
-You can get started with Azure easily. With Azure DevOps Projects, start running your application on any Azure service in just three steps by selecting:
+- [Deploy to Azure virtual machines with Azure DevOps](../../devops-project/azure-devops-project-vms.md)
+- [Deploy to an Azure virtual machine scale set](/azure/devops/pipelines/apps/cd/azure/deploy-azure-scaleset)
-- An application language-- A runtime-- An Azure service
-
-[Learn more](https://azure.microsoft.com/features/devops-projects/).
-
-## Additional resources
+## Related articles
-- [Deploy to Azure virtual machines by using Azure DevOps Projects](../../devops-project/azure-devops-project-vms.md)-- [Implement continuous deployment of your app to an Azure virtual machine scale set](/azure/devops/pipelines/apps/cd/azure/deploy-azure-scaleset)
+- [Configure the canary deployment strategy](./tutorial-azure-devops-canary-strategy.md)
+- [Configure the blue-green deployment strategy](./tutorial-azure-devops-blue-green-strategy.md)
virtual-machines Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/policy-reference.md
Title: Built-in policy definitions for Azure Virtual Machines description: Lists Azure Policy built-in policy definitions for Azure Virtual Machines. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/08/2022 Last updated : 08/16/2022
virtual-network Create Vm Dual Stack Ipv6 Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/create-vm-dual-stack-ipv6-portal.md
+
+ Title: Create an Azure virtual machine with a dual-stack network - Azure portal
+
+description: In this article, learn how to use the Azure portal to create a virtual machine with a dual-stack virtual network in Azure.
+++++ Last updated : 08/17/2022+++
+# Create an Azure virtual machine with a dual-stack network using the Azure portal
+
+In this article, you'll create a virtual machine in Azure with the Azure portal. The virtual machine is created along with the dual-stack network as part of the procedures. When completed, the virtual machine supports IPv4 and IPv6 communication.
+
+## Prerequisites
+
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+
+## Create a virtual network
+
+In this section, you'll create a dual-stack virtual network for the virtual machine.
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+2. In the search box at the top of the portal, enter **Virtual network**. Select **Virtual networks** in the search results.
+
+3. Select **+ Create**.
+
+4. In the **Basics** tab of **Create virtual network**, enter or select the following information.
+
+ | Setting | Value |
+ | - | -- |
+ | **Project details** | |
+ | Subscription | Select your subscription. |
+ | Resource group | Select **Create new**. </br> In **Name**, enter **myResourceGroup**. </br> Select **OK**. |
+ | **Instance details** | |
+ | Name | Enter **myVNet**. |
+ | Region | Select **East US 2**. |
+
+5. Select the **IP Addresses** tab, or **Next: IP Addresses**.
+
+6. Leave the default IPv4 address space of **10.1.0.0/16**. If the default is absent or different, enter an IPv4 address space of **10.1.0.0/16**.
+
+7. Select the **Add IPv6 address space** box.
+
+8. In **IPv6 address space**, edit the default address space and change its value to **2404:f800:8000:122::/63**.
+
+9. To add an IPv6 subnet, select **default** under **Subnet name**. If default is missing, select **+ Add subnet**.
+
+10. In **Subnet name**, enter **myBackendSubnet**.
+
+11. Leave the default IPv4 subnet of **10.1.0.0/24** in **Subnet address range**. Enter **10.1.0.0/24** if missing.
+
+12. Select the box next to **Add IPv6 address space**.
+
+13. In **IPv6 address range**, enter **2404:f800:8000:122::/64**.
+
+14. Select **Save**. If creating a subnet, select **Add**.
+
+15. Select **Review + create**.
+
+16. Select **Create**.
+
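If you prefer to script the same setup, the following Azure CLI sketch creates an equivalent dual-stack virtual network and subnet. It assumes the Azure CLI is installed and you're signed in; the names and address ranges mirror the portal steps above.

```bash
# Create the resource group used throughout this article.
az group create --name myResourceGroup --location eastus2

# Create a dual-stack virtual network with IPv4 and IPv6 address spaces
# and a dual-stack subnet.
az network vnet create \
    --resource-group myResourceGroup \
    --name myVNet \
    --address-prefixes 10.1.0.0/16 2404:f800:8000:122::/63 \
    --subnet-name myBackendSubnet \
    --subnet-prefixes 10.1.0.0/24 2404:f800:8000:122::/64
```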
+## Create public IP addresses
+
+You'll create two public IP addresses in this section, IPv4 and IPv6.
+
+1. In the search box at the top of the portal, enter **Public IP address**. Select **Public IP addresses** in the search results.
+
+2. Select **+ Create**.
+
+3. Enter or select the following information in **Create public IP address**.
+
+ | Setting | Value |
+ | - | -- |
+ | IP version | Select **Both**. |
+ | SKU | Leave the default of **Standard**. |
+ | **IPv4 IP Address Configuration** | |
+ | Name | Enter **myPublicIP-IPv4**. |
+ | Routing preference | Leave the default of **Microsoft network**. |
+ | Idle timeout (minutes) | Leave the default of **4**. |
+ | **IPv6 IP Address Configuration** | |
+ | Name | Enter **myPublicIP-IPv6**. |
+ | Idle timeout (minutes) | Leave the default of **4**. |
+ | Subscription | Select your subscription. |
+ | Resource group | Select **myResourceGroup**. |
+ | Location | Select **East US 2**. |
+ | Availability zone | Select **Zone redundant**. |
+
+4. Select **Create**.
+
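An equivalent Azure CLI sketch, assuming the same resource group and region as the portal steps:

```bash
# IPv4 public IP address (Standard SKU, zone-redundant).
az network public-ip create \
    --resource-group myResourceGroup \
    --name myPublicIP-IPv4 \
    --version IPv4 \
    --sku Standard \
    --zone 1 2 3

# IPv6 public IP address (Standard SKU, zone-redundant).
az network public-ip create \
    --resource-group myResourceGroup \
    --name myPublicIP-IPv6 \
    --version IPv6 \
    --sku Standard \
    --zone 1 2 3
```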
+## Create network security group
+
+You'll create a network security group to allow SSH connections to the virtual machine.
+
+1. In the search box at the top of the portal, enter **Network security group**. Select **Network security groups** in the search results.
+
+2. Select **+Create**.
+
+3. Enter or select the following information in the **Basics** tab.
+
+ | Setting | Value |
+ | - | -- |
+ | **Project details** | |
+ | Subscription | Select your subscription. |
+ | Resource group | Select **myResourceGroup**. |
+ | **Instance details** | |
+ | Name | Enter **myNSG**. |
+ | Region | Select **East US 2**. |
+
+4. Select **Review + create**.
+
+5. Select **Create**.
+
+### Create network security group rules
+
+In this section, you'll create the inbound rule.
+
+1. In the search box at the top of the portal, enter **Network security group**. Select **Network security groups** in the search results.
+
+2. In **Network security groups**, select **myNSG**.
+
+3. In **Settings**, select **Inbound security rules**.
+
+4. Select **+ Add**.
+
+5. In **Add inbound security rule**, enter or select the following information.
+
+ | Setting | Value |
+ | - | -- |
+ | Source | Leave the default of **Any**. |
+ | Source port ranges | Leave the default of *. |
+ | Destination | Leave the default of **Any**. |
+ | Service | Select **SSH**. |
+ | Action | Leave the default of **Allow**. |
+ | Priority | Enter **200**. |
+ | Name | Enter **myNSGRuleSSH**. |
+
+6. Select **Add**.
+
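If you're scripting this instead, a minimal Azure CLI sketch that creates the same network security group and SSH rule might look like this:

```bash
# Create the network security group.
az network nsg create \
    --resource-group myResourceGroup \
    --name myNSG \
    --location eastus2

# Allow inbound SSH (TCP port 22) with priority 200.
az network nsg rule create \
    --resource-group myResourceGroup \
    --nsg-name myNSG \
    --name myNSGRuleSSH \
    --priority 200 \
    --direction Inbound \
    --access Allow \
    --protocol Tcp \
    --destination-port-ranges 22
```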
+## Create virtual machine
+
+In this section, you'll create the virtual machine and its supporting resources.
+
+### Create network interface
+
+You'll create a network interface and attach the public IP addresses you created previously.
+
+1. In the search box at the top of the portal, enter **Network interface**. Select **Network interfaces** in the search results.
+
+2. Select **+ Create**.
+
+3. In the **Basics** tab of **Create network interface**, enter or select the following information.
+
+ | Setting | Value |
+ | - | -- |
+ | **Project details** | |
+ | Subscription | Select your subscription. |
+ | Resource group | Select **myResourceGroup**. |
+ | **Instance details** | |
+ | Name | Enter **myNIC1**. |
+ | Region | Select **East US 2**. |
+ | Virtual network | Select **myVNet**. |
+ | Subnet | Select **myBackendSubnet (10.1.0.0/24,2404:f800:8000:122::/64)**. |
+ | Network security group | Select **myNSG**. |
+ | Private IP address (IPv6) | Select the box. |
+ | IPv6 name | Enter **Ipv6config**. |
+
+4. Select **Review + create**.
+
+5. Select **Create**.
+
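A comparable Azure CLI sketch creates the interface and then adds the IPv6 configuration. Note that a CLI-created NIC names its primary IPv4 configuration *ipconfig1*, whereas the portal flow above shows *Ipv4config*.

```bash
# Create the network interface with an IPv4 primary configuration.
az network nic create \
    --resource-group myResourceGroup \
    --name myNIC1 \
    --vnet-name myVNet \
    --subnet myBackendSubnet \
    --network-security-group myNSG

# Add the IPv6 configuration to the same interface.
az network nic ip-config create \
    --resource-group myResourceGroup \
    --nic-name myNIC1 \
    --name Ipv6config \
    --vnet-name myVNet \
    --subnet myBackendSubnet \
    --private-ip-address-version IPv6
```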
+### Associate public IP addresses
+
+You'll associate the IPv4 and IPv6 addresses you created previously to the network interface.
+
+1. In the search box at the top of the portal, enter **Network interface**. Select **Network interfaces** in the search results.
+
+2. Select **myNIC1**.
+
+3. Select **IP configurations** in **Settings**.
+
+4. In **IP configurations**, select **Ipv4config**.
+
+5. In **Ipv4config**, select **Associate** in **Public IP address**.
+
+6. Select **myPublicIP-IPv4** in **Public IP address**.
+
+7. Select **Save**.
+
+8. Close **Ipv4config**.
+
+9. In **IP configurations**, select **Ipv6config**.
+
+10. In **Ipv6config**, select **Associate** in **Public IP address**.
+
+11. Select **myPublicIP-IPv6** in **Public IP address**.
+
+12. Select **Save**.
+
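The same association can be scripted. This sketch assumes the CLI default configuration name *ipconfig1* for IPv4; use *Ipv4config* if you created the NIC in the portal.

```bash
# Associate the IPv4 public IP with the IPv4 configuration.
az network nic ip-config update \
    --resource-group myResourceGroup \
    --nic-name myNIC1 \
    --name ipconfig1 \
    --public-ip-address myPublicIP-IPv4

# Associate the IPv6 public IP with the IPv6 configuration.
az network nic ip-config update \
    --resource-group myResourceGroup \
    --nic-name myNIC1 \
    --name Ipv6config \
    --public-ip-address myPublicIP-IPv6
```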
+### Create virtual machine
+
+1. In the search box at the top of the portal, enter **Virtual machine**. Select **Virtual machines** in the search results.
+
+2. Select **+ Create** then **Azure virtual machine**.
+
+3. In the **Basics** tab, enter or select the following information.
+
+ | Setting | Value |
+ | - | -- |
+ | **Project details** | |
+ | Subscription | Select your subscription. |
+ | Resource group | Select **myResourceGroup**. |
+ | **Instance details** | |
+ | Virtual machine name | Enter **myVM**. |
+ | Region | Select **East US 2**. |
+ | Availability options | Select **No infrastructure redundancy required**. |
+ | Security type | Select **Standard**. |
+ | Image | Select **Ubuntu Server 20.04 LTS - Gen2**. |
+ | Size | Select the default size. |
+ | **Administrator account** | |
+ | Authentication type | Select **SSH public key**. |
+ | Username | Enter a username. |
+ | SSH public key source | Select **Generate new key pair**. |
+ | Key pair name | Enter **mySSHKey**. |
+ | **Inbound port rules** | |
+ | Public inbound ports | Select **None**. |
+
+4. Select the **Networking** tab, or **Next: Disks** then **Next: Networking**.
+
+5. Enter or select the following information in the **Networking** tab.
+
+ | Setting | Value |
+ | - | -- |
+ | **Network interface** | |
+ | Virtual network | Select **myVNet**. |
+ | Subnet | Select **myBackendSubnet (10.1.0.0/24,2404:f800:8000:122::/64)**. |
+ | Public IP | Select **None**. |
+ | NIC network security group | Select **None**. |
+
+6. Select **Review + create**.
+
+7. Select **Create**.
+
+8. **Generate new key pair** will appear. Select **Download private key and create resource**.
+
+9. The private key will download to your local computer. Copy the private key to a directory on your computer. In the following example, it's **~/.ssh**.
+
+10. In the search box at the top of the portal, enter **Virtual machine**. Select **Virtual machines** in the search results.
+
+11. Select **myVM**.
+
+12. Stop **myVM**.
+
+### Network interface configuration
+
+A network interface is automatically created and attached to the chosen virtual network during creation. In this section, you'll remove this default network interface and attach the network interface you created previously.
+
+1. In the search box at the top of the portal, enter **Virtual machine**. Select **Virtual machines** in the search results.
+
+2. Select **myVM**.
+
+3. Select **Networking** in **Settings**.
+
+4. Select **Attach network interface**.
+
+5. Select **myNIC1** that you created previously.
+
+6. Select **OK**.
+
+7. Select **Detach network interface**.
+
+8. The name of your default network interface is **myvmxx**, where *xx* is a random number. In this example, it's **myvm281**. Select **myvm281** in **Detach network interface**.
+
+9. Select **OK**.
+
+10. Return to the **Overview** of **myVM** and start the virtual machine.
+
+11. The default network interface can be safely deleted.
+
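If you create the virtual machine with the Azure CLI instead, you can attach **myNIC1** at creation time and skip the attach and detach steps entirely. The image URN and key path below are assumptions; adjust them for your environment.

```bash
# Create the VM and attach the existing dual-stack NIC directly.
# The image URN targets Ubuntu Server 20.04 LTS Gen2; verify it with
# 'az vm image list --publisher Canonical --all --output table' if needed.
az vm create \
    --resource-group myResourceGroup \
    --name myVM \
    --nics myNIC1 \
    --image Canonical:0001-com-ubuntu-server-focal:20_04-lts-gen2:latest \
    --admin-username azureuser \
    --ssh-key-values ~/.ssh/mySSHKey.pub
```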
+## Test SSH connection
+
+You'll connect to the virtual machine with SSH to test the IPv4 public IP address.
+
+1. In the search box at the top of the portal, enter **Public IP address**. Select **Public IP addresses** in the search results.
+
+2. Select **myPublicIP-IPv4**.
+
+3. The public IPv4 address is in the **Overview** in **IP address**. In this example, it's **20.22.46.19**.
+
+4. Open an SSH connection to the virtual machine by using the following command. Replace the IP address with the IP address of your virtual machine. Replace **`azureuser`** with the username you chose during virtual machine creation. The **`-i`** parameter specifies the path to the private key that you downloaded earlier. In this example, it's **~/.ssh/mySSHKey.pem**.
+
+```bash
+# Connect over the IPv4 public IP address with the downloaded private key.
+ssh -i ~/.ssh/mySSHKey.pem azureuser@20.22.46.19
+```
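After you connect, you can optionally confirm the dual-stack configuration inside the guest. A quick check, assuming a standard Ubuntu image where the primary interface is *eth0*:

```bash
# Show the IPv4 and IPv6 addresses assigned to the primary interface.
ip addr show eth0
```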
+
+## Clean up resources
+
+When you're finished with the resources created in this article, delete the resource group and all of the resources it contains:
+
+1. In the search box at the top of the portal, enter **myResourceGroup**. Select **myResourceGroup** in the search results in **Resource groups**.
+
+2. Select **Delete resource group**.
+
+3. Enter **myResourceGroup** for **TYPE THE RESOURCE GROUP NAME** and select **Delete**.
+
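Equivalently, a single Azure CLI command removes the resource group and everything in it; double-check the name before you run it:

```bash
# Delete the resource group and all contained resources (runs asynchronously).
az group delete --name myResourceGroup --yes --no-wait
```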
+## Next steps
+
+In this article, you learned how to create an Azure virtual machine with a dual-stack network.
+
+For more information about IPv6 and IP addresses in Azure, see:
+
+- [Overview of IPv6 for Azure Virtual Network.](ipv6-overview.md)
+
+- [What is Azure Virtual Network IP Services?](ip-services-overview.md)
++
virtual-network Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/policy-reference.md
Title: Built-in policy definitions for Azure Virtual Network description: Lists Azure Policy built-in policy definitions for Azure Virtual Network. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/08/2022 Last updated : 08/16/2022
virtual-network Service Tags Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/service-tags-overview.md
By default, service tags reflect the ranges for the entire cloud. Some service t
| **ApiManagement** | Management traffic for Azure API Management-dedicated deployments. <br/><br/>**Note**: This tag represents the Azure API Management service endpoint for control plane per region. The tag enables customers to perform management operations on the APIs, Operations, Policies, NamedValues configured on the API Management service. | Inbound | Yes | Yes | | **ApplicationInsightsAvailability** | Application Insights Availability. | Inbound | No | No | | **AppConfiguration** | App Configuration. | Outbound | No | No |
-| **AppService** | Azure App Service. This tag is recommended for outbound security rules to web apps and Function apps. | Outbound | Yes | Yes |
+| **AppService** | Azure App Service. This tag is recommended for outbound security rules to web apps and Function apps.<br/><br/>**Note**: This tag does not include IP addresses assigned when using IP-based SSL (App-assigned address). | Outbound | Yes | Yes |
| **AppServiceManagement** | Management traffic for deployments dedicated to App Service Environment. | Both | No | Yes | | **AzureActiveDirectory** | Azure Active Directory. | Outbound | No | Yes | | **AzureActiveDirectoryDomainServices** | Management traffic for deployments dedicated to Azure Active Directory Domain Services. | Both | No | Yes |
virtual-network Virtual Network For Azure Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/virtual-network-for-azure-services.md
Deploying services within a virtual network provides the following capabilities:
| Web | [API Management](../api-management/api-management-using-with-vnet.md?toc=%2fazure%2fvirtual-network%2ftoc.json)<br/>[Web Apps](../app-service/overview-vnet-integration.md?toc=%2fazure%2fvirtual-network%2ftoc.json)<br/>[App Service Environment](../app-service/overview-vnet-integration.md?toc=%2fazure%2fvirtual-network%2ftoc.json)<br/>[Azure Logic Apps](../logic-apps/connect-virtual-network-vnet-isolated-environment-overview.md?toc=%2fazure%2fvirtual-network%2ftoc.json)<br/>|Yes <br/> Yes <br/> Yes <br/> Yes | Hosted | [Azure Dedicated HSM](../dedicated-hsm/index.yml?toc=%2fazure%2fvirtual-network%2ftoc.json)<br/>[Azure NetApp Files](../azure-netapp-files/azure-netapp-files-introduction.md?toc=%2fazure%2fvirtual-network%2ftoc.json)<br/>|Yes <br/> Yes <br/> | Azure Spring Apps | [Deploy in Azure virtual network (VNet injection)](../spring-apps/how-to-deploy-in-azure-virtual-network.md)<br/>| Yes <br/>
-| | |
+| Virtual desktop infrastructure | [Azure Lab Services](../lab-services/how-to-connect-vnet-injection.md)<br/>| Yes <br/>
<sup>1</sup> 'Dedicated' implies that only service specific resources can be deployed in this subnet and cannot be combined with customer VM/VMSSs <br/> <sup>2</sup> It is recommended as a best practice to have these services in a dedicated subnet, but not a mandatory requirement imposed by the service.
virtual-network Virtual Networks Udr Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/virtual-networks-udr-overview.md
Each route contains an address prefix and next hop type. When traffic leaving a
|Default|172.16.0.0/12 |None | |Default|192.168.0.0/16 |None | |Default|100.64.0.0/10 |None |
The next hop types listed in the previous table represent how Azure routes traffic destined for the address prefix listed. Explanations for the next hop types follow:
virtual-wan Virtual Wan Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/virtual-wan-faq.md
The current behavior is to prefer the ExpressRoute circuit path over hub-to-hub
* Contact the product team to take part in the gated public preview. In this preview, traffic between the 2 hubs traverses through the Azure Virtual WAN router in each hub and uses a hub-to-hub path instead of the ExpressRoute path (which traverses through the Microsoft Edge routers/MSEE). To use this feature during preview, email **previewpreferh2h@microsoft.com** with the Virtual WAN IDs, Subscription ID, and the Azure region. Expect a response within 48 business hours (Monday-Friday) with confirmation that the feature is enabled.
+### When there's an ExpressRoute circuit connected as a bow-tie to a vWAN hub and a non-vWAN (customer-managed) VNet, what is the path for the non-vWAN VNet to reach a VNet directly connected to the vWAN hub?
+
+The current behavior is to prefer the ExpressRoute circuit path for non-vWAN VNet to vWAN VNet connectivity. However, this isn't encouraged in a Virtual WAN setup. To resolve this, you can [create a Virtual Network connection](howto-connect-vnet-hub.md) to directly connect the non-vWAN VNet to the vWAN hub. Afterwards, VNet to VNet traffic will traverse through the Virtual WAN router instead of the ExpressRoute path (which traverses through the Microsoft Enterprise Edge routers/MSEE).
+ ### Can hubs be created in different resource group in Virtual WAN? Yes. This option is currently available via PowerShell only. The Virtual WAN portal requires that the hubs are in the same resource group as the Virtual WAN resource itself.
Yes, BGP communities generated by on-premises will be preserved in Virtual WAN.
The Virtual WAN team has been working on upgrading virtual routers from their current Cloud Services infrastructure to Virtual Machine Scale Sets based deployments. This will enable the virtual hub router to be availability zone aware. If you navigate to your Virtual WAN hub resource and see this message and button, you can upgrade your router to the latest version by selecting the button. Azure-wide Cloud Services-based infrastructure is being deprecated. If you would like to take advantage of new Virtual WAN features, such as [BGP peering with the hub](create-bgp-peering-hub-portal.md), you'll have to update your virtual hub router via the Azure portal.
-You'll only be able to update your virtual hub router if all the resources (gateways/route tables/VNet connections) in your hub are in a succeeded state. Additionally, as this operation requires deployment of new virtual machine scale sets based virtual hub routers, you'll face an expected downtime of 30 minutes per hub. Within a single Virtual WAN resource, hubs should be updated one at a time instead of updating multiple at the same time. When the Router Version says "Latest", then the hub is done updating. There will be no routing behavior changes after this update. If the update fails for any reason, your hub will be auto recovered to the old version to ensure there is still a working setup.
+You'll only be able to update your virtual hub router if all the resources (gateways/route tables/VNet connections) in your hub are in a succeeded state. Additionally, as this operation requires deployment of new virtual machine scale set-based virtual hub routers, you'll face an expected downtime of up to 30 minutes per hub. Within a single Virtual WAN resource, hubs should be updated one at a time instead of updating multiple at the same time. When the Router Version says "Latest", the hub is done updating. There will be no routing behavior changes after this update unless one of the following is true:
+
+1. The Virtual WAN hub is in a different region than one or more spoke VNets. In this case, you will have to delete and recreate these respective VNet connections to maintain connectivity.
+1. You have already configured BGP peering between your Virtual WAN hub and an NVA in a spoke VNet. In this case, you will have to [delete and then recreate the BGP peer](create-bgp-peering-hub-portal.md). Since the virtual hub router's IP addresses change after the upgrade, you will also have to reconfigure your NVA to peer with the virtual hub router's new IP addresses. These IP addresses are represented as the "virtualRouterIps" field in the Virtual Hub's Resource JSON.
+
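To confirm the router's new IP addresses after the upgrade, you can query the hub resource directly. A hedged Azure CLI sketch, assuming a hub named *hub1* in resource group *rg1*:

```bash
# Requires the 'virtual-wan' CLI extension: az extension add --name virtual-wan
# Show the virtual hub router's current IP addresses.
az network vhub show \
    --resource-group rg1 \
    --name hub1 \
    --query virtualRouterIps \
    --output json
```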
+If the update fails for any reason, your hub will be auto recovered to the old version to ensure there is still a working setup.
+ ### Is there a route limit for OpenVPN clients connecting to an Azure P2S VPN gateway?